io.net is a critical infrastructure provider for the AI agent ecosystem because it lowers the cost of the underlying compute required to run agents at scale. Through its 'io.intelligence' product, the company provides a unified API for running open-source models and deploying custom agent workflows. This allows developers to move beyond simple API wrappers for closed-source models and instead host their own autonomous agents on dedicated, affordable GPU clusters.
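Unified model APIs of this kind generally follow the OpenAI chat-completions convention. The sketch below shows how such a request is constructed using only Python's standard library; the base URL, model name, and payload shape are illustrative assumptions, not io.net's documented values:

```python
import json
import urllib.request

# Hypothetical endpoint -- consult io.net's documentation for the real base URL.
BASE_URL = "https://api.example-io-intelligence.ai/v1"

def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request (payload shape assumed)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Model name is a placeholder for whatever open-source model the platform hosts.
req = build_chat_request("example-open-model", "Hello", "sk-...")
# urllib.request.urlopen(req)  # would send the request; omitted here
```

Because the interface mirrors the OpenAI convention, existing agent frameworks that accept a custom base URL can usually be pointed at such an endpoint without code changes.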
In the broader agent stack, io.net sits at the physical and orchestration layer. As agents become more complex—performing long-running tasks, participating in multi-agent simulations, or handling continuous inference—the compute costs on traditional clouds become prohibitive. io.net’s ability to provide 'burst capacity' and decentralized clusters makes it a viable foundation for builders who need to scale agentic populations without the overhead of centralized cloud margins.
io.net addresses the primary bottleneck of the modern AI industry: the scarcity and cost of high-performance GPUs. The company operates a Decentralized Physical Infrastructure Network (DePIN) that aggregates compute power from a global network of independent data centers, cryptocurrency mining operations, and consumer-grade hardware. By pooling these underutilized resources, io.net creates a marketplace where AI developers can rent clusters of GPUs for training and inference at prices significantly lower than those offered by hyperscale cloud providers.
At the core of the platform is the ability to form distributed clusters across geographically dispersed locations. This is technically non-trivial because of inter-node latency, but io.net uses Ray, an open-source framework for distributed computing, to manage and scale these workloads. This allows users to deploy clusters in minutes without the long lead times or enterprise contract negotiations typical of centralized services. The platform supports a range of hardware, from high-end NVIDIA H100s and H200s to more cost-effective consumer cards like the RTX 4090, which the company claims can reduce inference costs for large language models by up to 75%.
The financial and operational orchestration of io.net is built on the Solana blockchain. This choice provides the throughput necessary for managing thousands of individual GPU providers and handling micro-payments in real-time. The network uses its own tokenomics model to incentivize providers to remain online and maintain high performance. This decentralized approach is designed to eliminate single points of failure and middleman markups. For the end-user, the blockchain backend is largely abstracted away, presenting instead a familiar interface for spinning up Virtual Machines, Ray clusters, or containerized workloads.
Founded in 2022, io.net emerged from the need for affordable compute during the rapid expansion of generative AI. The company was founded by Ahmad Shadid, who initially led as CEO, alongside co-founders Basem Oubah and Saad Mohammed Alenezi. The leadership team has since expanded to include veterans from major tech and finance firms like Amazon, Binance, and Kraken. In 2024, the company raised $30 million in a Series A funding round led by Hack VC, with participation from Multicoin Capital, Delphi Digital, and Solana Ventures. This capital has been used to expand the network to over 30,000 GPUs.
Adoption of io.net is concentrated among AI startups and decentralized applications that require massive scale but lack the capital for heavy AWS commitments. One notable success story is Leonardo.Ai, which reportedly used io.net to scale from 14,000 to 19 million users while cutting GPU infrastructure costs by over 50%. The platform is also increasingly used for specialized tasks like fine-tuning open-source models and running large-scale evaluation benchmarks. As the AI ecosystem moves toward multi-agent systems and persistent autonomous agents, io.net is positioning its 'io.intelligence' toolkit to serve as the default infrastructure for running these compute-intensive agentic swarms.
On-demand access to high-performance GPU clusters.
API toolkit for running open-source models and deploying agents.
FastAPI remote attestation service for Intel TDX and NVIDIA H200 confidential VMs
Documentation repo for use with Mintlify
A Ray Serve chat demo serving Hugging Face models