The AGI Arms Race: Who’s Really Competing, and What’s at Stake?

Intelligence, power, and the global battle over AI’s future

Meg McNulty
Mar 06, 2025

For months, I’ve been hearing the same thing from AI insiders, policy officials, and researchers across different camps: AGI is imminent. Not in a decade—often, not in five years—in two to three years.

They’re not hedging. They’re convinced.

If they’re right, then the most disruptive technology in human history is arriving inside a Trump second term. That’s not something people are ready for—socially, economically, or in terms of power dynamics, both national and global.

Yet the real question isn’t when AGI arrives. It’s what we’re actually racing toward and why.

In the U.S., AGI is framed as a sprint: a national security race, an economic arms race, a technological race. In other words, treating AGI like the Moon landing—an event to win, not just a technology to develop. The finish line? Beating China, locking in AI dominance, and securing the future of intelligence before anyone else does. It’s a framing that drives investment, galvanizes policy, and pushes innovation at breakneck speed.

But not every country sees AI as an arms race. Some view it as an economic tool, others as a governance challenge, and some as a matter of energy security—especially the petrostates now positioning themselves as AI power brokers.

To untangle this, we have to look at how different governments think about AI, the battle over how intelligence itself is defined, and the long history of U.S. technological advancement—where the line between private innovation and state intervention has never been as clear as it seems.

A Race, a Narrative, or a Strategic Play?

In the U.S., AGI is being treated as the Holy Grail—a system that can outperform humans in nearly every cognitive task. But that’s not how everyone sees it.

Satya Nadella, Microsoft’s CEO, has openly dismissed AGI as a distraction. He argues the real AI race isn’t about creating a single god-like system, but about which companies figure out how to monetize intelligence at scale. It’s a fair point: technological progress isn’t a binary switch, and AGI won’t be the kind of sudden, Hollywood-style breakthrough that some envision. Technology doesn’t develop in black and white; it moves gradually, shaped by market forces, regulatory shifts, and iterative improvements.

And yet, so much of the language in the U.S. frames AGI as an impending milestone: a banner under which Washington can rally government investment, industrial alignment, and geopolitical competition, as if we’re in a sprint toward an inevitable breakthrough. AGI, at least in the way Washington talks about it, may be more about narrative and positioning than a clear technical milestone.

Yet this U.S. framing isn’t universally accepted. The rest of the world doesn’t necessarily see AI development as a singular race toward AGI. So why is AGI the dominant term in the U.S.? Part of it is framing: the idea of artificial general intelligence is more compelling (and fundable) than incremental AI improvements. And to be fair, much of that confidence is earned. Many of the foundations of modern AI were built in the U.S., from the early days of neural networks to the explosion of deep learning. Faith in scientific progress runs deep here, and AGI, an intelligence beyond human capabilities, feels like the natural next frontier.

But the government’s growing interest suggests another reason: strategic positioning. Washington wasn’t deeply involved in AGI development at first, but now that AI has become a geopolitical asset, the language of AGI-as-arms-race helps justify investment, accelerate policymaking, and rally institutional support. The U.S. has a history of tying technological leaps to national security. AGI is becoming the latest example.

The potential impact of this framing is significant. On one hand, it drives urgency, investment, and alignment between government and industry. On the other, it may distort the reality of AI’s gradual development and lock the U.S. into a competitive mindset that isn’t universally shared. If AGI isn’t actually a finish line but a continuous evolution, then what matters isn’t only reaching it first, but shaping how it integrates into society. The question is whether Washington’s narrative will push development in a way that benefits more than just geopolitical dominance.

Why the U.S. Calls It an Arms Race (And Others Don’t)

Other countries don’t see it the same way the U.S. does. America’s biggest competitor in this realm, China, rarely uses the term AGI at all. Instead, it treats AI as a means of economic and military optimization. (And, to be fair, perhaps Washington’s newfound interest in AGI also benefits the latter domain…) China’s emphasis is on "cognitive intelligence", a more incrementalist approach that prioritizes increasing levels of automation rather than a singular, human-level breakthrough.

The EU, meanwhile, has taken a different route entirely, viewing AI through the lens of R&D and risk management. Its AI Act prioritizes ethical oversight and the mitigation of systemic risks, imposing regulatory hurdles that often slow down frontier research. Japan and South Korea largely lean into AI for industrial and labor-market stability.

Which raises a real question: Is AGI an American fantasy? Are we chasing a milestone that we created? It’s not to say others don’t care—but it is fair to question the basis of the language we use, and why…

The U.S. frames AGI as a technological arms race because, for Washington, that’s the framing that drives action. Space exploration didn’t take off because of a utopian vision for humanity—it was the Cold War that got us to the Moon. Nuclear technology wasn’t pursued with global energy in mind—it was the Manhattan Project, born from a race to outpace the Axis powers. The Internet? DARPA, originally designed for military communication resilience. American technological leaps tend to happen under the banner of national security and strategic dominance, not pure scientific ambition.

But AGI didn’t start that way. Unlike past breakthroughs, it wasn’t born in a government lab or launched with a defense budget. It began in academic research and private industry, shaped by open-source collaboration, startup culture, and tech billionaires rather than military contracts. OpenAI, DeepMind, and Anthropic weren’t designed as government-backed projects, and for years, AGI development was framed around scientific advancement and commercial applications—smarter robotics, better automation, and new frontiers in computing.

That’s changing. Washington now sees AGI not as an industry curiosity but as a strategic imperative. National security agencies are moving in, funding is shifting, and policymakers are rewriting AI’s trajectory through the lens of great-power competition. Why?
