After the Coup
The state of OpenAI and a new techno-political order
In November 2023, OpenAI still spoke in the language of mission: AGI for humanity, openness, safety. Two years later, a court filing revealed a different architecture of power.
This week, a deposition from co-founder Ilya Sutskever became public as part of Elon Musk v. OpenAI et al. The transcript shows that Sutskever wrote a 52-page memo accusing CEO Sam Altman of “a consistent pattern of lying, undermining his executives, and pitting them against one another.” He said he sent it to the independent directors through a disappearing-email service because he feared it would be deleted internally. Sutskever testified that within twenty-four hours of Altman’s brief ouster in 2023, the board discussed merging OpenAI with Anthropic and handing control to the latter. Executives at Anthropic, he claimed, “expressed their excitement.”
A Shift from Ideals to Infrastructure
OpenAI’s story has always carried the veneer of moral clarity. Its founders promised that transparency would keep advanced systems aligned with human interests. Yet the company’s evolution from nonprofit lab to capped-profit corporation, funded by roughly $13 billion from Microsoft, turned it into a commercial operation. The rhetoric stayed moral while the economics industrialized. The organization now sits at the intersection of national policy and capital markets.
Altman publicly frames AI as an infrastructure competition. In his May 2025 Senate testimony, he warned that heavy regulation would “cripple U.S. competitiveness” and described AI as “as critical as energy or semiconductors.” OpenAI is now a global compute supplier with annual revenue above $13 billion. Its new $38 billion contract with AWS, announced just yesterday (November 3, 2025), expands a network of data centers already linked to Microsoft’s Azure and Nvidia’s GPU pipeline.
The memo from Sutskever reads like a reaction to that industrialization. His language is moral; his concern is structural. The institution he helped create has become too powerful to question from within. His attempt to remove Altman was a move against the system, not just the man.
Silicon Valley as Political System
Every major AI company now more or less resembles a state. Each has territory (infrastructure), currency (compute credits), and foreign policy (partnerships). Their leaders behave less like entrepreneurs and more like heads of state. Boards act as fragile parliaments; investors operate as central banks.
OpenAI functions as an industrial power, tied to Microsoft and Amazon.
Anthropic positions itself as the ethical alternative while accepting multibillion-dollar investments from Amazon and Google (CNBC).
xAI is a personal regime under Musk, merged with X Corp for distribution and drawing on Tesla’s compute resources (Fortune).
DeepMind remains inside Alphabet, subject to corporate oversight but anchored by scientific credibility (Financial Times).
These structures aren’t coincidences. They are strategies for control. The companies that own compute can govern policy. Those that lack it must borrow power through alliances.
The Coup
When Altman was fired in November 2023, the story moved faster than any corporate crisis in memory. More than seven hundred employees signed a letter threatening to resign. Microsoft signaled that if the board did not reverse course, it would absorb the staff and shift compute resources to its own division. Within six days Altman was reinstated, the board was reconstituted, and OpenAI’s valuation rose.
That episode ended the myth that founders stand apart from institutions. Altman’s return was not a triumph of charisma but of alignment. The company’s investors, partners, and employees all depended on stability, and his removal threatened it. The board had ideals; the system had momentum. Silicon Valley learned that moral authority means little when operational power runs elsewhere.
Power in the Rails
Elon Musk’s lawsuit against OpenAI is built on a simple accusation: that the organization he co-founded to make research public has become a private intelligence monopoly. His argument is that OpenAI’s structure and partnerships now serve corporate interests rather than the public good. The newly released Sutskever deposition, entered as evidence, gives that claim weight. It shows that even inside the company, its leaders understood how power had shifted—from ideals to infrastructure.
The testimony does not describe monopolistic behavior in a legal sense. It describes consolidation. Altman built a system of alliances that no board could override. Microsoft controls OpenAI’s production environment through Azure. Nvidia controls its hardware supply chain. Amazon, through AWS, now shares the data-center footprint that underpins ChatGPT’s global reach. These relationships are not secondary. They are the company’s operating core.
Nvidia’s H200 and Blackwell chips carry long backorders, and access to them determines which labs can train frontier systems. The hyperscalers (AWS, Google Cloud, Azure) set compute pricing and scheduling like central banks managing liquidity. Compute has become the reserve currency of the AI economy.
Musk’s legal strategy is to frame this system as a betrayal of the company’s original charter. His complaint argues that OpenAI’s decision to create a for-profit subsidiary, accept Microsoft’s investment, and close its models to public release constitutes a breach of its founding agreement with him. Sutskever’s memo, written months before the lawsuit, now functions as corroboration. It shows that the internal culture had already drifted toward secrecy and that the board had lost control of a machine that now ran on capital and infrastructure.
Sutskever’s motive was governance, not competition. He feared that leadership decisions were being made without accountability. Yet his deposition confirms Musk’s broader point: that once compute, capital, and partnerships reach critical mass, oversight collapses into formality. The infrastructure begins to govern itself.
The case is unlikely to reverse that trajectory. It has, however, made something plain. The real power in AI no longer sits in algorithms or research labs. It sits in the rails: the global lattice of chips, contracts, and cloud agreements that decide who can build intelligence at scale. Once that lattice consolidates, even the founders become replaceable.
Governance as Competitive Advantage
The past decade rewarded speed. The next one will reward coherence. A company’s board, investors, and supply chain now determine its resilience more than its codebase. OpenAI’s 2025 restructuring as a Public Benefit Corporation gave it room to raise capital indefinitely while maintaining the appearance of mission. That form will likely spread: governance that appears ethical but operates like private equity.
For smaller firms building edge or defense compute, this is the field you enter. Investors and agencies now read your governance documents as closely as your benchmarks. They want to know who controls decisions when boards fracture. Resilience is becoming the metric that defines trust.
The Industrialization of Ethics
Sutskever’s memo reflects the moment when moral language stopped being governance and became PR. “Safety” and “benefit” still headline every other press release, but they now function as corporate vocabulary, not operational doctrine. Inside the largest AI firms, the work is logistical: securing energy contracts, GPU supply, and long-term access to cloud capacity.
Altman’s public persona (measured, collaborative, confident) masks a managerial reality. His peers are no longer fellow founders but the executives who control infrastructure at Microsoft, Nvidia, and AWS. Their conversations are not about philosophical alignment but about energy policy, supply, and scale.
The industry that once promised to democratize knowledge has concentrated it instead. The rhetoric of open progress now maintains investor stability and political favor.
After Founders
The founder myth needs an update. In earlier eras, founders led through vision or technical genius. Altman leads through institutional entrenchment: control of capital, infrastructure, and alliances so extensive that even an internal revolt collapses under their weight. The board and the mission language still exist, but they orbit that machinery.
Charisma did not save Altman in 2023. Leverage did. More than seven hundred employees threatened to quit, Microsoft signaled it would backstop talent and compute, and the board reversed course within six days. Sutskever’s move against Altman was not a bid to replace one star with another. He is not a charismatic operator, and the deposition does not read like a leadership audition. It documents a loss of trust. He alleged patterns of secrecy and manipulation, and appealed to a board that had the mission on paper but not the levers in hand.
That board lost because the operating system of the company had already shifted from research culture to industrial scale. Once Microsoft controlled the production environment through Azure, and once Nvidia’s supply decided who could train, the center of authority moved off the org chart and into a set of contracts and reservations that no memo could unwind.
Each step in the OpenAI saga follows its own logic. Sutskever raised a governance alarm. Musk’s lawsuit turned that alarm into evidence of drift. OpenAI’s restructuring legalized the scale it had already reached. The result is a system where the founder remains central, but only because he commands the infrastructure that sustains the institution around him. Governance failed upward. Personality still matters—but only when it controls the machinery built to outlast it.
A Controlled Future
AI has matured into an administrative industry. The next decade will belong to those who govern well under constraint. The companies building the infrastructure of intelligence now decide who participates in progress and who waits in queue for access.
The Sutskever deposition is important because it puts names and dates to what many already understood: that AI is no longer a technical frontier. It is a governance frontier. Its leaders manage a resource that behaves like oil and policy at once.
OpenAI was founded to democratize intelligence and ended up centralizing it. The question now is not who will invent the future, but who will govern it.

