Elon Musk vs Sam Altman: Inside the OpenAI Trial That Could Reshape the Future of Artificial Intelligence


Elon Musk sat in a witness chair in an Oakland, California courtroom on Tuesday and told a jury he had been robbed. Not of money, exactly, though the $44 million he poured into OpenAI’s early years is part of the claim. What Musk says was stolen from him, and from the public, was something larger: a promise that the most powerful artificial intelligence ever built would belong to humanity, not to a handful of Silicon Valley executives who figured out how to monetize it.

“I came up with the idea, the name, recruited the key people, taught them everything I know, provided all the initial funding,” Musk testified during nearly two hours on the stand. His lead attorney, Steve Molo, had opened the day with a line designed to land in the jury’s gut: “Ladies and gentlemen, we are here today because the defendants in this case stole a charity.”

On the other side of the courtroom, OpenAI’s lead attorney Bill Savitt offered a simpler explanation. “We are here because Mr. Musk turned out to be very wrong about OpenAI,” Savitt said. “We’re here now because Mr. Musk now competes with OpenAI.” The implication was clear: this is not a principled stand for open-source AI. This is a billionaire trying to kneecap his biggest competitor.

The Stakes Are Almost Incomprehensibly Large

The numbers in this case belong in a science fiction novel. Musk is seeking approximately $130 billion in damages, the reversion of OpenAI to its original nonprofit structure, and the removal of CEO Sam Altman and co-founder Greg Brockman from OpenAI’s board. The jury’s verdict will be advisory to Judge Yvonne Gonzalez Rogers, who will make the final ruling, but its advisory status does little to diminish the case’s significance.

If Musk wins, the implications ripple through the entire AI industry. OpenAI’s 2019 conversion from a nonprofit to a for-profit structure becomes a potential legal liability for any AI company that started with one structure and pivoted to another. Investors who poured billions into OpenAI on the assumption that its corporate structure was settled would face massive uncertainty. And the broader question of whether AI development should be governed by profit motives or public interest mandates would move from philosophical debate to legal precedent.

The Origin Story Both Sides Want You to Believe

The founding of OpenAI in 2015 is one of those Silicon Valley origin stories that both sides have heavily mythologized. Musk’s version: he conceived of a nonprofit research organization that would develop artificial general intelligence safely and openly, counterbalancing Google’s dominance in AI research. He recruited the talent, funded the operation, and believed he had a handshake agreement that the technology would remain open-source and nonprofit.

OpenAI’s version: Musk was one of several co-founders, his contributions were important but not singular, and the shift to a for-profit model was a pragmatic necessity driven by the enormous capital requirements of frontier AI research. The company needed billions, not millions, to compete. Nonprofit fundraising could not scale to meet the moment.

Both versions contain truth. Both versions omit inconvenient facts. Musk did leave OpenAI’s board in 2018 after what multiple sources have described as an acrimonious power struggle, reportedly involving a proposal that Musk himself take over as CEO. The for-profit conversion happened a year later, and it was this pivot that transformed OpenAI from a well-funded research lab into the company that would eventually build ChatGPT and attract a $300 billion valuation.

The Competitor Problem Musk Cannot Escape

OpenAI’s strongest argument may be the simplest one: Musk now runs xAI, a direct competitor to OpenAI. His company launched Grok, an AI chatbot that competes head-to-head with ChatGPT. He has raised billions in funding for his own AI ambitions. Whatever principled objections Musk may have had about OpenAI’s corporate conversion in 2019, his motivations in 2026 are impossible to separate from his competitive interests.

Musk’s legal team anticipated this line of attack. During Tuesday’s testimony, Musk framed his concerns as fundamentally about safety and public accountability, not market competition. He argued that concentrating the most powerful AI systems inside a for-profit corporation, with fiduciary duties to investors rather than to the public, represents an existential risk that transcends business rivalry.

It is a compelling argument on its merits. It would also be considerably more persuasive if Musk were not making it while simultaneously building his own for-profit AI company.

What the Trial Reveals About AI’s Governance Crisis

Strip away the billionaire drama and the courtroom theatrics, and this trial exposes something genuinely important about the state of artificial intelligence in 2026. The industry that is building the most transformative technology since the internet has no coherent governance framework. It has no agreed-upon rules about corporate structure, no regulatory body with meaningful oversight power, and no mechanism for ensuring that the public interest is represented in decisions that will affect billions of people.

OpenAI was supposed to be different. That was the whole point. Founded explicitly to develop AI safely and for the benefit of humanity, it was structured as a nonprofit precisely because its founders, including Musk, recognized that profit incentives and AI safety could be fundamentally at odds. The fact that the company eventually abandoned that structure, while insisting it remained faithful to its original mission, is the central tension of this trial and of the AI industry itself.

This week, five of the Magnificent Seven tech companies are reporting earnings that collectively reflect over $645 billion in AI capital expenditure commitments. Microsoft, Meta, Amazon, Alphabet, and Apple are pouring money into AI infrastructure at a pace that makes the dot-com boom look modest. The question of who controls this technology, and whose interests it serves, is not academic. It is the defining governance question of the decade.

The Week Ahead in the Courtroom

The trial is expected to continue for several weeks. Sam Altman has not yet taken the stand, and his testimony will be the trial’s marquee event. The cross-examination of Musk, likely beginning later this week, will test whether his account can withstand the competitor narrative that OpenAI has built its defense around.

Judge Gonzalez Rogers, who presided over the landmark Epic Games v. Apple case in 2021, has a track record of ruling against corporate consolidation when she believes it harms competition and the public interest. Her courtroom is not friendly territory for the argument that bigger and more profitable is always better.

Whatever the outcome, this trial has already accomplished something that years of Congressional hearings and regulatory proposals have not: it has forced the foundational question of AI governance into a forum where it must be answered, not with press releases and blog posts, but with evidence, testimony, and a binding legal ruling. For an industry that has spent years avoiding accountability, that alone is progress.