Elon Musk, left, and Sam Altman. Mother Jones; Gage Skidmore/ZUMA; Florian Gaertner/dpa/ZUMA
Elon Musk and Sam Altman are set to square off in court over OpenAI’s mission.
In his lawsuit, Musk accuses Altman of illegally transforming OpenAI from a nonprofit into a massive for-profit organization—one that is expected to go public as early as this summer at a valuation of nearly $1 trillion.
Here’s the messy backstory: The week after Musk sued OpenAI in 2024, the company claimed that its founders realized early on that it needed to raise money to obtain enough computing power and other resources to build its AI. To attract investors, it first had to become a for-profit company. The nonprofit—now called the OpenAI Foundation—created the for-profit OpenAI as a subsidiary. OpenAI claimed in December 2024 that, back in 2017, Musk agreed a for-profit move was necessary but wanted “absolute control” as sole CEO—and a merger with Tesla. Following a reported power struggle with Altman over control of OpenAI in 2018, Musk left the company’s board. OpenAI said Musk left to avoid potential conflicts of interest as the CEO of Tesla.
Musk is now demanding that the billions of dollars made by the for-profit be returned to the OpenAI Foundation. He also wants Altman removed from the leadership of both the for-profit and the nonprofit.
OpenAI was founded in 2015 by Musk, Altman, and nine others. Musk and Altman were named co-chairs, and on the day of its launch, the nonprofit stated its goal to “advance digital intelligence” in a manner “to benefit humanity as a whole, unconstrained by a need to generate financial return.” In its 2018 charter, the company promised to stop work on its own models and instead help another group “if a value-aligned, safety-conscious project comes close to building AGI [or artificial general intelligence that outperforms the work of humans] before we do.”
To put it lightly, this is a far cry from what the company looks like today. It’s got energy-guzzling data centers, a chatbot that’s been involved in multiple mass shootings, and, according to what tech journalist Karen Hao told us in 2025, poses “the greatest threat that we’ve seen to democracy to date.” And that’s not to mention its deal with the Pentagon to provide its technology for military purposes. (Following backlash from users, Sam Altman posted on X last month that the company would amend its agreement to “not be intentionally used for domestic surveillance of U.S. persons and nationals.”)
OpenAI has gone from trying to benefit humanity to making humanity clean up its messes. As I wrote earlier this month, the company released 13 pages of “ambitious ideas” to add safety nets as AI advances to outperform human beings, even those who are assisted by AI.
Altman and OpenAI’s decision-makers clearly don’t care about the lasting damage they’re causing. They attribute the growing animosity toward AI to the struggle to, as OpenAI co-founder Greg Brockman put it last week on the science and tech podcast Core Memory, “help people really understand what it is that this technology can do for them.”
But there’s a difference between what AI can do and what it should do. While Musk and Altman fight over OpenAI’s structure, and Musk licks his wounds after potentially losing yet another power struggle, they don’t seem to be listening in any real way to the people this technology is meant to help.
(Disclosure: The Center for Investigative Reporting, the parent company of Mother Jones, has sued OpenAI for copyright violations. OpenAI has denied the allegations.)
