Musk vs. OpenAI: Inside the Trial That Could Break Big AI
Seven hours of testimony, a shocking admission, and an $852 billion question nobody in Silicon Valley wanted answered in court.
The legal battle between Elon Musk and OpenAI moved into its second week, with the world’s richest man having spent more than seven hours on the witness stand, accusing CEO Sam Altman of stealing a charity that Musk says he funded from scratch and betraying its founding promise. The trial, presided over by U.S. District Judge Yvonne Gonzalez Rogers at the Ronald V. Dellums Federal Courthouse in Oakland, California, opened on April 28 and is expected to run for approximately four weeks. Microsoft is also named as a defendant, accused of enabling OpenAI’s shift away from its nonprofit mission through its multibillion-dollar investment in the company’s for-profit subsidiary.
At its centre, the case is a fight over OpenAI’s transformation from a public charity into a commercial giant now valued at $852 billion. Musk argues that the transformation was a legal breach of the founding charter he helped write. OpenAI argues he is a bitter ex-partner who lost control of a company and is now trying to litigate his way back in.
What Musk Says He Was Promised
OpenAI was founded in December 2015 with a charter declaring it would develop open-source technology for the public benefit and was not organised for the private gain of any person. Musk co-founded the organisation alongside Altman and Greg Brockman, providing the initial funding and, by his own account, the early architecture of the company’s strategy.
On the stand, Musk testified that he recruited top AI researcher Ilya Sutskever away from Google, a move he said ended his friendship with Google co-founder Larry Page permanently. He also described leveraging personal relationships with Microsoft CEO Satya Nadella and Nvidia CEO Jensen Huang to secure the computing resources OpenAI needed to survive its early years. “The only one who could actually call Satya Nadella and have him pick up was me,” he told the jury.
Musk invested approximately $38 million in OpenAI between December 2015 and May 2017. He testified that he would not have contributed any of it had the intention been to build a for-profit company. “It was specifically meant to be for a charity that does not benefit any individual person,” he said. “I could’ve started it as a for-profit, and I specifically chose not to.” He told the jury: “I gave them $38 million of essentially free funding, which they then used to create an $800 billion for-profit company. I literally was a fool.”
OpenAI’s Defence: A Lawsuit Built on Losing
William Savitt, OpenAI’s lead attorney (a detail worth pausing on: he previously represented Musk and Tesla), delivered a blunt opening statement that framed the entire case as a wounded ego dressed up as a legal principle.

“We are here because Mr. Musk didn’t get his way at OpenAI,” Savitt told the jury. “He quit, saying they would fail for sure. But my clients had the nerve to go on and succeed without him. Mr. Musk may not like that, but it’s no basis for a lawsuit.”
OpenAI also released internal emails from 2017 showing Musk himself proposed merging OpenAI with Tesla and assuming personal control, directly contradicting his current posture as the defender of its nonprofit mission. OpenAI’s position is that no formal guarantees were ever made about the company remaining a nonprofit indefinitely, and that Musk’s lawsuit is a competitive manoeuvre timed to undermine a rival while his own AI venture, xAI, tries to close the gap.
Three Days on the Stand
Musk’s testimony ran across three days and was defined, above all else, by friction. During cross-examination, he clashed repeatedly with Savitt over the framing of questions. “Your questions are definitionally complex, not simple. It is a lie to say they are simple,” Musk told the attorney at one point. When he was pushed on a straightforward question about OpenAI’s formation, he compared the line of questioning to asking “Have you stopped beating your wife?” Judge Gonzalez Rogers intervened immediately. “We are not going to go there,” she said, to laughter in the courtroom.
When asked during cross-examination whether he had ever called an OpenAI employee a jackass, Musk replied that he might have done so on more than one occasion, then insisted he does not lose his temper or yell. OpenAI’s attorneys entered emails into evidence that told a different story. By the end of day three, court reporters noted Musk appeared visibly fatigued, taking frequent sips of water and rubbing his forehead between questions.
It is hard to square that image, the composed, visionary founder fighting for humanity, with a man who, under oath, cannot keep his answers consistent for three consecutive days. Whether the jury notices that gap is another matter entirely.
His direct testimony, meanwhile, centred on painting himself as the indispensable architect of OpenAI’s early success, the recruiter, the fundraiser, the connector without whom the company would not have existed. OpenAI’s cross-examination was aimed squarely at dismantling that image, presenting him instead as a man who sought majority control of the board for himself, did not get it, and walked away.
The Admission Nobody Saw Coming
The most significant moment of the trial’s first week came on Thursday, April 30, when Musk acknowledged under cross-examination that his own AI company, xAI, had used OpenAI’s models to help train its chatbot, Grok, a process known in the industry as distillation.
Distillation works by systematically querying a more advanced AI model and using its responses as training data for a newer, smaller one. It dramatically reduces the cost and time required to build a competitive product. OpenAI, Anthropic, and Google have all stated this practice violates their terms of service, and the U.S. government has accused Chinese AI firms of industrial espionage for deploying the same technique against American labs.
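In code terms, the process described above can be sketched very simply. The snippet below is purely illustrative; the function names and the stand-in teacher are invented for this example, and a real pipeline would call a commercial model’s API and then fine-tune a smaller model on the collected pairs.

```python
# Illustrative sketch of distillation: systematically query a larger
# "teacher" model and collect its responses as supervised training
# data for a smaller "student" model. All names here are hypothetical.

def query_teacher(prompt: str) -> str:
    # Stand-in for an API call to a more advanced proprietary model.
    return f"teacher answer to: {prompt}"

def build_distillation_set(prompts: list[str]) -> list[tuple[str, str]]:
    # Each (prompt, teacher response) pair becomes one training
    # example for the student model.
    return [(p, query_teacher(p)) for p in prompts]

dataset = build_distillation_set(["What is 2+2?", "Summarise AI safety."])
# The student would then be fine-tuned on `dataset`.
```

The economics follow directly from the sketch: the expensive step (producing good answers) is outsourced to the teacher, leaving the student builder with only the comparatively cheap fine-tuning step.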

When asked directly whether xAI had distilled OpenAI’s technology, Musk first framed it as a general industry norm. “Generally, AI companies distill other AI companies,” he said. Pressed for a direct answer, he replied: “Partly.” The admission drew audible gasps from those present in the courtroom.
And honestly, that one word, “partly,” might be the most consequential thing said in that courtroom all week. Not because distillation is rare; it almost certainly happens across the industry more often than any of these companies would like to admit publicly. But Musk saying it out loud, under oath, while simultaneously suing OpenAI over ethical violations? That is the kind of contradiction that tends to follow a case all the way to its verdict.
OpenAI did not respond to press requests for comment on the admission at the time of reporting. Legal analysts noted it could significantly complicate the moral framing that Musk’s team has built the case around.
The Judge Shuts Down the Doomsday Argument
Throughout the week, Musk’s legal team pushed to reframe the trial around the existential dangers of artificial intelligence, arguing that OpenAI, left unchecked as a for-profit company, represents a genuine threat to humanity’s survival.
Judge Gonzalez Rogers drew a firm line. When Musk’s attorney Steven Molo declared before the court that “we all could die as a result of artificial intelligence,” the judge responded sharply. “Despite these risks, your client is creating a company that’s in the exact space,” she said, referring to xAI. “I suspect there are plenty of people who don’t want to put the future of humanity in Mr. Musk’s hands.” She later ended the argument entirely: “This is not a trial on whether or not artificial intelligence has damaged humanity.”
Expert testimony on extinction risk has been barred. The trial has been confined strictly to the question of whether a legal contract was breached. The existential framing that Musk’s team had hoped would resonate with the jury has been ruled out of scope.
The judge’s point about xAI deserves more attention than it has received. Musk has spent years positioning himself as the only person in tech who takes AI safety seriously. Building a competing AI company while suing your former one for being unsafe is, at minimum, a position that requires some explaining.
What Is at Stake
A Musk victory would force OpenAI to unwind its for-profit restructuring, likely derailing the company’s IPO at a valuation approaching $1 trillion, and could result in the removal of Altman and Brockman from their roles. The consequences for the broader AI industry would be far-reaching.
For Musk, the risks of losing are also significant. xAI is expected to go public as part of SpaceX as early as June, targeting a valuation of $1.75 trillion. A defeat that puts the distillation admission and the 2017 emails on the public record could damage the credibility he is trying to build around xAI ahead of that debut.
In a candid moment on the stand, Musk assessed the current AI landscape without the bravado he typically projects on X. He ranked Anthropic at the top of the industry hierarchy, followed by OpenAI, Google, and Chinese open-source models, placing xAI behind all of them. He described his company as a small operation by comparison, a few hundred employees against the thousands at rival labs. It was a notably measured admission from a man who has publicly claimed xAI would soon outpace every competitor on the planet.
What Comes Next
Musk’s testimony concluded at the end of the week. OpenAI President Greg Brockman is expected to take the stand in the coming days, alongside expert witnesses for both sides. The trial has approximately four weeks remaining.
Outside the courthouse, protesters have lined the streets throughout the proceedings. Inside, a jury is being asked to decide whether a promise made in 2015 was a binding legal obligation, and whether the AI industry’s most powerful company was built on a broken one. The answer, whenever it comes, will have consequences that extend well beyond Oakland.