OpenAI’s CEO Sam Altman speaks at the AI Summit in New Delhi, India, Thursday, Feb. 19, 2026. (AP)
Sam Altman wants you to know that he’s just fine. Sure, his company, OpenAI, is reportedly building technology that it fears and some of his former colleagues think he’s a pathological liar, but really? It’s no big deal.
The company’s upcoming model is being finalized and is only being given to a select group of companies, according to a Thursday Axios report.
This news comes just after the company released policy recommendations on Monday in a 13-page document titled “Industrial Policy for the Intelligence Age: Ideas to Keep People First.” Their “ambitious ideas” claim to add guardrails and safety nets as AI evolves toward a “superintelligence” capable of “outperforming the smartest humans even when they are assisted by AI.”
One terrifying proposal: policymakers should reimagine taxes as AI reduces the need for companies to employ as many workers. OpenAI says the trend could expand corporate profits and capital gains while “erod[ing] the tax base that funds core programs like Social Security, Medicaid, SNAP, and housing assistance.” To ameliorate the potential problem, there could be higher taxes on those capital gains and corporate profits.
(Disclosure: The Center for Investigative Reporting, the parent company of Mother Jones, has sued OpenAI for copyright violations. OpenAI has denied the allegations.)
And another: create a “Public Wealth Fund” that provides “every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth.”
The week started with a New Yorker investigation that might be the most thorough look yet at Altman and why so many people worry about him being at the helm of such powerful technology.
Reporters Ronan Farrow and Andrew Marantz spoke to more than 100 people, most of whom described Altman as someone with an unrelenting drive for more power. “He has two traits that are almost never seen in the same person,” an OpenAI board member told the pair. “The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”
Sue Yoon, a former board member, said that Altman wasn’t a typical “Machiavellian villain,” but instead someone who could convince himself of the ever-fluctuating landscapes he portrayed in his sales pitches.
Combining OpenAI’s policy proposals with the New Yorker investigation reveals a familiar story where an authoritarian Silicon Valley leader becomes synonymous with their technology as their personal whims have significant influence on where the industry—and regulation on it—goes next. And regular people are the ones who deal with the consequences.
The policy recommendations feel like a desperate PR move in light of OpenAI’s limited release of its new model. AI companies know that a lot of people hate their technology.
As my colleagues Anna Merlan and Abby Vesoulis wrote last month, many in the AI industry feel that the technology is exciting, terrifying, essential for the future, and too overwhelming to stop all at once.
Yet the New Yorker investigation noted that while “Altman publicly welcomed regulation, he quietly lobbied against it,” referencing reporting that OpenAI pushed the European Union to scale back its AI regulation.
Thank you for thinking of us, Sam!
