As AI becomes more powerful, the infrastructure needed to run it will reach its limits, and those limits may open the door to decentralized physical infrastructure networks (DePIN), said Trevor Harris-Jones, director at Render Network.
Speaking to TheStreet Roundtable host Jackson Hinkle, Harris-Jones said decentralized GPU networks are not intended to replace traditional data centers, but rather to complement them by solving some of AI's most pressing scaling challenges.
Related: DePIN Explained: What is a Decentralized Physical Infrastructure Network?
Harris-Jones says DePIN is not intended to replace centralized infrastructure
In simple terms, DePIN lets people around the world share real-world network infrastructure in exchange for rewards, removing dependence on, and control by, any single central company.
One such project is Render Network, a decentralized GPU rendering platform designed to democratize digital creation and free creators from reliance on centralized entities.
Hinkle pointed to recent examples from the world of centralized AI, including OpenAI's release of Sora, a video generation application whose usage had to be capped due to GPU limitations.
He wondered whether decentralized models could eventually outperform centralized data centers.
Harris-Jones pushed back on the idea of outright replacement.
"I don't think it's about replacement," he said. "I actually think it's about taking advantage of both."
GPU clusters remain critical for training large AI models, which benefits from large memory pools and tightly integrated hardware. But he noted that training is only a small part of the total computational workload in AI.
Harris-Jones explained that inference, the process of running trained AI models, represents roughly 80% of a GPU's workload.
This distinction is where decentralized networks like Render come into play. While early versions of AI models were resource-intensive, Harris-Jones said they quickly became more efficient as engineers improved and compressed them.
Over time, models that previously required massive infrastructure can be run on much simpler devices such as smartphones, he added.
"So we tend to see this on all the models that come out," he said. "It starts out very heavy and unrefined, and within a very short period of time, it is optimized so that it can run on simple, decentralized hardware."
From a cost perspective, this shift makes decentralized GPU networks increasingly attractive, Harris-Jones said.
He suggested that instead of relying solely on expensive, sophisticated data centers, inference workloads could be distributed across idle GPUs around the world.
"It will be cheaper to run on decentralized idle consumer nodes than on centralized nodes," he said.
Harris-Jones is bullish on the DePIN sector
Harris-Jones framed DePINs as a way to alleviate growing AI bottlenecks across computing and energy infrastructure.
He explained that when centralized power systems face stress, decentralized computing offers a parallel solution by tapping underutilized resources around the world.
"So I'm very optimistic about the sector as a whole," he said.
Harris-Jones noted that global demand for graphics processing units far exceeds supply. "There are not enough GPUs in the world today," he said.
The key, he suggested, is to take advantage of all the idle GPUs rather than fight over the under-supplied high-end ones.
According to Harris-Jones, the future of AI infrastructure is not a choice between centralized networks and DePIN. Instead, it is the flexible use of both to meet the massive demand for AI.
This story was originally published by TheStreet on January 19, 2026.