A new ecosystem is helping telcos and distributed cloud providers redefine their role in the AI value chain. At NVIDIA GTC 2026, leading operators announced AI grids — geographically distributed and interconnected AI infrastructure — using their network footprint to power and monetise new AI services across the distributed edge.
*Source: NVIDIA. Concept visual for an AI grid.*
Telcos and distributed cloud providers run some of the most expansive infrastructure in the world, NVIDIA said: about 100,000 distributed network data centres worldwide, spanning regional hubs, mobile switching offices and central offices, with enough spare power to offer more than 100 gigawatts of new AI capacity over time.
AI grids turn this existing real estate, power and connectivity into a geographically distributed computing platform that runs AI inference closer to users, devices and data.
Different operators are taking different paths, NVIDIA said. Many are starting by lighting up existing wired edge sites as AI grids they can monetise today, while others are harnessing AI-RAN — a technology that enables the full integration of AI into the radio access network (RAN) — as a workload and edge inference platform on the same grid.
Akamai is building a globally distributed AI grid, expanding Akamai Inference Cloud across more than 4,400 edge locations with thousands of NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. Akamai’s AI grid orchestration platform matches each request to the right tier of compute, improving the token economics of inference while powering low-latency, real-time AI experiences for applications like gaming, media, financial services and retail.
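The article does not describe Akamai's orchestration logic, but the idea of matching each request to the right compute tier can be illustrated with a minimal, hypothetical sketch. The tier names, latency figures and per-token costs below are invented for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency_ms: float      # typical round-trip latency to this tier (illustrative)
    cost_per_mtok: float   # cost per million tokens (illustrative)

# Hypothetical tiers; a real grid would discover capacity and pricing dynamically.
TIERS = [
    Tier("edge-pop", latency_ms=10, cost_per_mtok=4.0),
    Tier("regional-hub", latency_ms=35, cost_per_mtok=2.5),
    Tier("core-dc", latency_ms=80, cost_per_mtok=1.2),
]

def route(latency_budget_ms: float) -> Tier:
    """Pick the cheapest tier that still meets the request's latency budget."""
    eligible = [t for t in TIERS if t.latency_ms <= latency_budget_ms]
    if not eligible:
        raise ValueError("no tier meets the latency budget")
    return min(eligible, key=lambda t: t.cost_per_mtok)

# A real-time gaming request stays at the edge; latency-tolerant work goes deeper.
print(route(latency_budget_ms=15).name)   # edge-pop
print(route(latency_budget_ms=100).name)  # core-dc
```

The design choice this sketch captures is the trade-off the article alludes to: latency-sensitive requests pay a token-cost premium to run near the user, while tolerant workloads drain to cheaper, more centralised capacity, which is what "improving the token economics of inference" amounts to in practice.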
Indosat Ooredoo Hutchison (IOH) is connecting its sovereign AI factory with distributed edge and AI‑RAN sites across Indonesia to build an AI grid for local innovation. By running Sahabat-AI — a Bahasa Indonesia-based platform — on this grid within Indonesia’s borders, IOH brings localised AI services closer to hundreds of millions of Indonesians across thousands of islands.
AI grids are facilitating a new class of AI‑native applications that are real‑time, hyperpersonalised, concurrent and token-intensive, NVIDIA said.
- Personal AI is using NVIDIA Riva to power human‑grade conversational agents on the AI grid.
- Linker Vision is transforming city operations by running real‑time vision AI on the AI grid, enabling up to 10x faster traffic accident detection, 15x faster disaster response and sub‑minute alerts for unsafe crowd behaviour.
- Decart is redefining hyperpersonalised distributed media by bringing real‑time video generation to AI grids.
A growing ecosystem of full‑stack partners, including Cisco and infrastructure partners like HPE, is bringing AI grid solutions to market on systems built with the NVIDIA RTX PRO 6000 Blackwell Server Edition. Armada, Rafay and Spectro Cloud are among the partners building an AI grid control plane to seamlessly orchestrate workloads across distributed AI infrastructure.
“Physical AI is accelerating the shift from centralised intelligence to distributed decision making at the network edge,” said Masum Mir, Senior VP and GM, provider mobility at Cisco.
“Our partnership with NVIDIA brings together the full stack — from NVIDIA GPUs to Cisco’s networking and mobility capabilities — enabling operators to power mission-critical applications, deliver real-time inferencing and participate in the AI value chain.”
Explore
The NVIDIA AI Grid Reference Design defines the building blocks — including NVIDIA accelerated computing, networking and software platforms — for deploying and orchestrating AI across distributed sites. View the reference design at https://docs.nvidia.com/ai-grid/whitepapers/ai-grid-reference-design/
Hashtags: #AIInfrastructure, #GTC, #GTC2026, #NVIDIARTX, #Telecommunications
