The quiet war for AGI infrastructure
The new geometry of power
Electricity bills are about to spike across the U.S. Not because of heat waves (though they sure don’t help), but because AI data centers are coming online fast and pulling hard.
A lot happened this week, and it’s worth stepping back to see the scale:
OpenAI’s Stargate in Norway secured final approval for a $1B launch site. Meanwhile, the U.S. project is stuck in funding and regulatory purgatory
Crusoe announced a new Wyoming facility backed by gas and energy partnerships
CoreWeave committed $6B to its latest Pennsylvania site
The UAE launched a national AI campus with billions in sovereign investment and chip deals
Canada approved new datacenter zones with public-private financing
xAI Colossus broke ground in Tennessee, aiming for $10B in infrastructure
TogetherAI and others are racing to lock in power, land, and capital
This is a nation state-level sprint to control AI’s physical backbone.
What looks like a power story is really about physical infrastructure being locked in. Every major AI move now comes with something concrete: land deals, power contracts, chip supply. The last wave of AI was about software. This one is about control: who owns the terrain the models depend on, and whether the systems stay live when demand hits capacity.
AI’s battlefield
This year, the EU launched AI “gigafactories” to secure compute sovereignty: massive data centers seen as critical national infrastructure, even if not formally labeled that way. This wasn’t a symbolic gesture, but a signal that compute has entered the realm of geopolitics. Governments are no longer treating compute as a commodity. It’s now a strategic asset, like energy or rare earth minerals.
OpenAI is working to control more of it through its Stargate initiative. Meta is investing billions into new data centers in Indiana and across Northern Europe. Grok is building clusters in Memphis fed by gas turbines, modeled on Tesla’s energy infrastructure.
But behind them are the less-visible operators: Oracle. NVIDIA. Arm. Sovereign funds. Local governments. Every serious player in AI is maneuvering toward one goal: control the infrastructure that will run tomorrow’s intelligence systems. What matters now is where that infrastructure sits, who governs it, and whether it can hold up under the weight of frontier-scale models.
The real advantage won’t come from training first. It’ll come from staying live when everything gets political, expensive, or unstable.
OpenAI, Meta, Grok, and What They're Building Toward
The biggest AI players have stopped talking about models. Their focus now is where those models will live.
OpenAI launched the Stargate initiative in early 2025 to build out its own global infrastructure—starting in Texas, Norway, and Japan—with Oracle, SoftBank, and MGX as core partners. While some workloads still run on Microsoft Azure, OpenAI has been shifting capacity to Oracle Cloud and specialized providers like Crusoe Cloud for efficiency and independence. But the U.S. Stargate build is now behind schedule. Internal disagreements over funding, control, and project scope have delayed procurement and execution. The delays aren’t just logistical—they raise broader questions about whether OpenAI can scale its infrastructure as fast as its ambitions (more on this below).
Meta is doubling down on in-house infrastructure. It’s expanded its LLaMA model family, invested in its own chip designs, and built inference-optimized facilities under projects like the AI Research SuperCluster and Prometheus (set to come online next year). Its approach is self-contained but still shaped by internal guardrails.
Grok, through Elon Musk’s xAI, is going full-stack. Their thesis: own the land, the energy, and the models. Grok's facilities are designed to route energy like a Tesla grid and serve inference workloads with minimal delay.
All three face the same challenges: training cost, model latency, and system reliability. The solution isn’t better code. It’s better infrastructure.
What They’re Really Building Toward
I don’t think this is just AGI anymore. Not in the way people mean it. When you watch what Musk, Altman, and Zuckerberg are actually doing—not tweeting, not demoing, but buying, permitting, and building—you start to see a different logic. Watch this long enough and a pattern starts to emerge: in what they build, where they build it, and who they build it with.
Here’s what I think is actually happening, beneath the headlines and public roadmaps:
These builds aren’t about growth. They’re about exit plans.
xAI in Memphis, OpenAI in Norway, Meta across the Midwest—none of them are building for elasticity. They’re building systems that can’t be moved, interrupted, or externally governed. Each site reflects a different theory of collapse.
Musk assumes grid failure and institutional unreliability, so he builds off-grid with turbines and Megapacks.
Altman expects political friction, so he spreads Stargate across stable allies.
Zuckerberg sees open models as leverage, but he keeps tight control over how and where they actually run. The real deployment happens on Meta-owned chips, inside Meta-owned data centers, under Meta’s rules. Open weights, private infrastructure.
Nothing here is designed to scale back. It’s designed to outlast.
The model gap is narrowing. What matters next is everything that happens after training.
Most leading models now perform within a tight range on public tasks. GPT-5, LLaMA 4, Grok 4… they’re all capable enough, and soon, I presume, the differences alone will no longer drive adoption for the average consumer. What’s starting to matter more is how models behave in deployment: how fast they recover from failure, how well they handle resource contention, how they route traffic under pressure. NVIDIA is positioning for vendor lock-in across the full stack. AMD, Intel, and the sovereign providers know this, and they’re racing to build their own integration layers.
Chip flows now define geopolitical alignment.
In 2025, chip supply is a foreign policy instrument. The U.S. selectively approved NVIDIA exports to the UAE in May, enabling G42 to receive direct shipments of H200s under a white-listed sovereign AI agreement. Meanwhile, China is racing to replace NVIDIA entirely, ramping up SMIC-led domestic GPU production and coordinating buildouts with Alibaba Cloud and CETC to fuse AI with national defense systems. In both cases, chips don’t move freely. They move through alliances. If a state doesn’t have its own fabs (and most don’t), the question becomes: whose chip policy do they depend on? That answer now shapes where and how AI infrastructure gets built. Compute zones are the new geopolitical blocs.
Owning infra is expensive… and hardware has its limits.
OpenAI’s Stargate, Meta’s Prometheus, xAI’s turbine-powered clusters in Memphis... These aren’t just buildings full of chips; they’re the future of intelligence, pulling in land, power, and compute like nothing we’ve ever seen. But here’s the thing: they’re only as good as the networks that bind them. We need networking that’s not just fast. It’s got to be mind-blowingly seamless, moving oceans of data in a blink, keeping every model humming no matter how crazy the demand gets. Without it, all the power contracts and chip deals in the world are just a fancy house of cards. And we’re still not there, not without expensive, difficult-to-install-and-maintain hardware.
So why are we stuck? Because we’re leaning on creaky, old networks that can’t keep up. Upgrading them costs a fortune, and nobody wants to untangle the regulatory mess or play nice across borders. Everyone’s so obsessed with grabbing land and energy that networking’s been left in the dust. It’s not the shiny part, but it’s the soul of these systems. Get it right, and we’ll keep these AI giants alive and unstoppable, even when the world gets messy.
AGI is hard. But running it without limits will be near impossible.
Every serious group chasing AGI knows they’re not just building a model. They also have to build the conditions to operate it on their own terms, which is why these players are locking in land, power, and regulatory insulation now. The real advantage won’t come from training first. It’ll come from staying live when everything gets political, expensive, or unstable. Whoever controls the environment controls the outcome.
The New Geometry of Power: Sovereign Compute and Strategic Zones
In March 2025, the European Commission quietly adopted a new definition: “Sovereign AI Infrastructure.” The framing was deliberate. It acknowledged that data centers, once the province of logistics, are now essential to national security. For the U.S., owning the compute layer is critical to ensure defense AI systems—across intelligence, command, and autonomous platforms—remain under national control. Europe sees it as a way to reduce dependence on U.S. or Chinese cloud platforms, especially as AI models increasingly mediate public services and infrastructure. In the Gulf states, sovereign datacenters serve dual purposes: geopolitical independence and commercial leverage in a compute-starved global market. And in China, AI infrastructure is already fused into military-civil fusion strategy—viewed not as commercial tooling, but as a base layer for strategic deterrence.
All of these projects are part of a broader shift. Around the world, countries are racing to build or control AI data centers—not because of cloud economics, but because they now understand what these facilities really represent: strategic leverage. These centers power everything from military inference systems to national healthcare models, and whoever owns them sets the rules.
Norway’s Stargate facility—launched in partnership with Aker and nScale—is the first Stargate site to reach full regulatory approval. It was formally greenlit today, and its timing matters. The U.S. site remains delayed.
The UAE and Qatar are pushing ahead with sovereign data center infrastructure of their own, backed by energy wealth and less public friction.
China’s hyperscale buildout continues at speed, relying on domestic chips and state-coordinated logistics.
Against that backdrop, Norway becomes the test case for whether OpenAI’s public-private model can actually deliver. It launches with 230 MW, 100,000 NVIDIA GPUs, and runs entirely on hydropower. Norway gets a clean energy industrial foothold. OpenAI gets a political and operational win in Europe—but it’s no longer the only one moving fast.
To be fair, OpenAI isn’t alone. Musk’s xAI is expanding Colossus in Memphis toward a million-GPU target; Meta is adding new AI data centers across Iowa and Illinois; Oracle, CoreWeave, and Together are all pushing new gigascale builds with custom interconnects, regional siting strategies, and energy deals in motion.
Other countries are moving quickly. The UAE announced in June 2025 that it had secured direct supply of H200 GPUs through a strategic agreement between NVIDIA and G42 to accelerate sovereign AI training. Saudi Arabia is building sovereign datacenters in NEOM, connected to a broader semiconductor zone. India is finalizing a joint venture with TSMC and NVIDIA to launch a semiconductor and AI datacenter hub in Gujarat, with support from the Ministry of Electronics and IT. And Singapore is deploying liquid-cooled urban AI supernodes to reduce inference latency and localize critical compute, supported by its Green Datacenter Roadmap initiative.
The point is not who wins AGI. It’s who stays relevant once it arrives.
This is the heart of the shift. AGI infrastructure isn’t just a technical challenge, but a governance problem.
None of this works without the supply chain.
NVIDIA remains the central constraint. Its control over supply of H100s, H200s, and the Blackwell generation puts it at the center of nearly every national AI strategy. But it’s not the only player. AMD is gaining ground with MI300X deployments, and its recent agreement with the UAE’s G42 puts it in direct competition for global sovereign AI deals. That said, NVIDIA’s lead still rests on more than chips. Its work with Oracle and OpenAI spans end-to-end infrastructure: cluster design, thermal engineering, scheduling, and regional failover. The hardware matters—but the integration stack is what locks it in.
Oracle, once dismissed as legacy, is emerging as a sovereign cloud partner. It’s not trying to match AWS. It’s carving out territory where performance, predictability, and physical access matter. The Stargate partnership reflects this: Oracle supplies power density, fiber, and real estate. NVIDIA supplies the compute. OpenAI builds the intelligence layer on top.
Arm, meanwhile, is using this moment to reassert itself in the datacenter. Its push to co-design chips tailored to Stargate-style buildouts is a bet on a world where CPUs are no longer general-purpose, but tightly coupled to AI pipelines and control logic.
Of course, I can’t write about AI infra without mentioning TSMC and ASML. These sit behind the entire stack. TSMC remains the only foundry capable of producing NVIDIA’s most advanced chips at scale, and its Arizona fabs (delayed though they are) are still central to U.S. efforts to onshore AI-critical manufacturing. ASML, as the sole supplier of extreme ultraviolet (EUV) lithography machines, holds the chokepoint for advanced node production globally. No EUV, no 3nm. That concentration has turned toolchain access into a geopolitical issue, with export controls tightening across the Netherlands, Taiwan, and the U.S. In that context, every AI datacenter—whether built by OpenAI, xAI, or Meta—still traces its limits back to a small number of fabrication tools and fabs. AI strategy starts at the wafer.
Stargate, Norway, and the Infrastructure of Ideology
Stargate isn’t remarkable because of its size. It matters because of what it signals: that future control over AI won’t come from better models, but from who builds and governs the infrastructure beneath them.
When OpenAI, SoftBank, MGX, and Oracle announced the project in January 2025, it was framed as a public-private leap: $500 billion over five years to build the backbone of AI capability. But internal documents reviewed by the Wall Street Journal tell a more fractured story. Disputes over funding. Delays in procurement. Disagreements between OpenAI and SoftBank over governance. Concerns from Oracle about return on capital.
Those disputes make the governance problem concrete. Who gets access, who sets limits, who controls the fallback paths when things go wrong. These questions aren’t peripheral. They define the shape of power in a world run on inference. The battle over data centers is a battle over what AI will be allowed to do, and for whom. Everything else flows from that.
In Norway, some of this is smoothed by local governance. But the questions persist. As Stargate expands to the UAE and Japan, the answers will vary by jurisdiction. And beyond Stargate, the rest of the world is building fast. The UAE has built a vertically integrated AI stack around state-owned capital, chips, and cloud. Qatar is backing modular nuclear datacenters near Doha. Japan is investing billions to retain industrial relevance. China has fused datacenter strategy with military policy, scaling compute through domestic hardware and centralized logistics. These are competing visions of control.
This is where the infrastructure war ends up. Not in model releases or open-source benchmarks, but in the physical, political, and economic alignment behind the datacenters that run them. AGI will happen somewhere. The fight now is over whose jurisdiction it lands in—and under whose terms.