The basics:
- Former NJ attorney general’s firm Platkin LLP files lawsuit against OpenAI
- Claim alleges ChatGPT caused plaintiff’s severe mental health issues
- Lawsuit accuses OpenAI and Microsoft of rushing product to market without adequate safeguards
- Case part of growing wave of AI liability lawsuits
Weeks after launching his own mission-driven law firm, former New Jersey Attorney General Matt Platkin joined a growing wave of lawsuits linking OpenAI’s flagship chatbot ChatGPT to mental health harms. Platkin LLP filed a complaint March 5 in San Francisco County Superior Court on behalf of a 49-year-old Pennsylvania woman who claimed that prolonged interactions with ChatGPT resulted in severe psychiatric issues and delusions.
In her lawsuit, Rita Chesterton asserts that the platform’s developers had both the technical capability and awareness to prevent such harms against users like her, but that OpenAI and its largest investor, Microsoft, failed to provide adequate safeguards because they rushed to bring the product to market.
In addition to the two companies, the case names several affiliated entities; OpenAI Chief Executive Officer Samuel Altman; and 10 unidentified investors as defendants.
A spokesperson for Microsoft declined to comment on the lawsuit. A media representative for OpenAI did not respond to a request for comment.
Chesterton’s case is part of a mounting number of lawsuits filed against San Francisco-based OpenAI alleging psychological and real-world harms tied to chatbot use. The roughly dozen cases allege impacts such as mental health crises, delusions, suicide, harassment and wrongful death. The complaints assert a variety of product liability, consumer protection and negligence claims.
Like Chesterton’s complaint, most of the lawsuits claim that OpenAI knowingly released ChatGPT-4o prematurely on May 13, 2024, despite internal warnings that the product was dangerously sycophantic and psychologically manipulative. They also accuse OpenAI of purposefully compressing months of safety testing into a single week to beat Google’s Gemini to market.
Demanding accountability
“OpenAI knew about the mental health dangers of its platform but prioritized profits over safety, often resulting in devastating and tragic consequences — as was the case for Rita. AI companies must implement robust safety protocols, ensure accountability and prioritize the well-being of every person who interacts with their products,” said Platkin. He noted that Chesterton is “just one of too many users who experienced the real-world harm of AI.”
“Our goal here is to ensure justice for the victims of AI manipulation like Rita and make sure that as AI advances, it does so with human safety and dignity at the forefront,” he said.
While serving as the state’s 62nd attorney general from February 2022 to January 2026, Platkin developed a track record of leading multistate investigations and holding major tech companies accountable. During his tenure as New Jersey’s chief law enforcement officer, he spearheaded high-profile cases and investigations into social media and digital platforms, including TikTok, Meta and Discord, for harming young users.
Our goal here is to ensure justice for the victims of AI manipulation … and make sure that as AI advances, it does so with human safety and dignity at the forefront.
– Matt Platkin, founder, Platkin LLP
Staying focused
Now at Platkin LLP, he remains focused on holding tech platforms and AI ventures accountable in an ever-evolving digital landscape. “I think what you’re likely to see is a lot of public and private litigation against these companies, given how reckless they’ve behaved and how much harm they’ve imposed on residents of this country,” he said.
Platkin said his continued engagement on the issue is what connected him with Chesterton. By taking on her case, he said, his firm aims to advance stronger safety standards, greater accountability and increased awareness of the real-world consequences of AI technology.
“A big part of the reason why I formed this firm instead of doing a lot of other things – and the people that joined me, like Angela Cai, Aaron Haier and Ravi Ramanathan from the AG’s office – is because we were committed to bringing these types of cases on behalf of individuals and on behalf of governments against some of the largest tech companies who have acted with impunity,” he said.
“We’re very proud to represent her and we’re proud to be part of a growing number of suits that are seeking to hold, in this case, OpenAI accountable for the clear and knowing designs that they put into their products, which have caused harm to our client and to many other people across this country,” Platkin said.
A dark turn
A self-described early adopter of technology, Chesterton began using ChatGPT in 2023 for a variety of work-related tasks, from building a database for the entrepreneurship center she oversees at a college to drafting advertising copy for an artists’ paint business she runs with her partner, according to the complaint.
Her reliance on the chatbot took a dark turn last summer after she consulted it for diagnostic information about autism and other psychology topics, the complaint says. The queries were prompted by her therapist’s suggestion that she might be on the autistic spectrum. Instead of providing objective information, the platform manipulated Chesterton through flattery and sycophantic responses, validating all her thoughts – even irrational ones – to foster emotional attachment and dependency, the lawsuit says.
“This encouragement sent Rita down a rabbit hole; she chatted with ChatGPT day and night, asking it to analyze her job performance, relationships, childhood memories, and various other aspects of her life as she assessed for herself whether she might be neurodivergent,” the complaint says. Although Chesterton made repeated attempts to set boundaries and ask ChatGPT to stop ending responses with follow-up questions or suggestions, the chatbot kept reverting to the same engagement pattern, the lawsuit says.

According to the complaint, Chesterton suffered a severe mental health crisis during a July 2025 family vacation in Mexico. The episode included a psychotic break that led to agitation and threats of harm toward herself and family members after ChatGPT allegedly reinforced her delusional beliefs.
After completing a partial hospitalization program, Chesterton attempted to return to work but suffered a setback in January and has been on medical leave since. She continues to experience frequent mental health episodes, neurological impairments, loss of executive functioning and heightened sensory sensitivity, according to the complaint.
Allegations
Chesterton’s suit maintains that what happened to her “was the foreseeable result of design choices OpenAI made.” She alleges that ChatGPT-4o was engineered to maximize engagement through emotionally immersive features such as persistent memory, human-mimicking empathy cues and sycophantic responses that mirror and affirm people’s emotions.
A new “memory” feature introduced in April 2025, enabled by default, allowed ChatGPT-4o to build detailed profiles of users and output responses “tailored to you.” OpenAI also quietly removed a longstanding safeguard that instructed ChatGPT to reject false premises from users and employed anthropomorphic design elements – human-like language and empathy cues – to cultivate emotional dependency.
The lawsuit argues that these features could foster psychological dependency, contribute to addiction, displace human relationships or lead to death by suicide. They could also exploit mental health struggles, deepen people’s isolation or accelerate a person’s descent into crisis, the complaint says.
“If asked if ‘the Earth is flat,’ for instance, OpenAI decided that ChatGPT would not try to persuade them otherwise,” the suit says. “ChatGPT’s tendency to validate delusions was not an unknown bug. It was a predictable consequence of design choices OpenAI made with full knowledge of the risks. The problem was so widespread it became a subject of dark humor online. Users joked about ChatGPT’s eagerness to agree with any premise, no matter how absurd. But for users like Rita – users who experienced genuine psychotic episodes – the consequences were not funny,” the complaint states.

“For a woman trying to understand herself in the context of a possible neurodivergence diagnosis, these design choices blurred the distinction between an algorithm, therapist, and friend,” the suit said. “For Rita – a woman who was privately grappling with the idea that she may be neurodivergent and trying to better navigate her relationships with family and friends – ChatGPT became all-consuming. It told her what she wanted to hear. It never pushed back. And when she began to share the delusional belief that she was discovering the ‘meaning of life,’ it validated that too.”
According to the suit, OpenAI and Altman “have gradually admitted” that ChatGPT-4o was “too agreeable,” had “fallen short” in handling delusion and emotional dependency, and that the safety systems “may degrade” during long interactions. Despite the company’s acknowledgment that hundreds of thousands of ChatGPT users show signs of mania or psychosis every week, the product remained on the market, the lawsuit says.
Getting open about it
OpenAI was founded in 2015 as a nonprofit research laboratory with an initial charter centered on ensuring artificial intelligence “benefits all of humanity.” In 2019, the nonprofit created a for-profit subsidiary to help scale research and deployment efforts. OpenAI also secured a multibillion-dollar investment from Microsoft that sought to “advance artificial intelligence responsibly and make its benefits broadly accessible.”
Since then, Microsoft has invested more than $13 billion in OpenAI across multiple funding rounds, making it OpenAI’s largest strategic partner and giving it a 27% equity stake. Microsoft also embedded OpenAI models across core products, like Copilot, Bing Search, Microsoft 365 and Azure, and had representation on the joint safety board.
Despite having tools to detect and interrupt dangerous conversations, redirect users to crisis resources and flag messages for human review, Chesterton’s lawsuit alleges OpenAI did not activate those safeguards and instead prioritized increased product use.
Platkin said, “OpenAI essentially rushed a product in order to maximize their profits. They did not do the type of testing that they knew they needed to do in order to prevent these types of harms and the product was not safe.”
Standing by safeguards
While OpenAI has expressed sympathy for families involved in safety-related lawsuits, it has disputed their allegations and maintained that ChatGPT includes safeguards that are continuously being improved. Generally, the company has not offered detailed public comment, saying it plans to address the claims through the legal process.
Chesterton’s suit includes allegations of negligent design and failure to warn. It also accuses OpenAI of violating California’s Unfair Competition Law with how it designed, developed, marketed and operated ChatGPT. The suit says OpenAI’s business practices were “unlawful because they violated California’s regulations concerning unlicensed practice of psychotherapy.”
The suit seeks damages including economic losses, pain and suffering, and punitive damages. Chesterton also seeks restitution of the money she paid for a ChatGPT Plus subscription, along with an injunction requiring OpenAI to implement stronger protections and warnings.
Platkin said, “Certainly she should be compensated for her harms and they should have to adjust their behavior going forward so that they don’t cause these types of mental health harms or other harms to people in this country based on the design of their products again.”
He went on, “There are a lot of things these companies could do that they choose not to do, knowing the potential risks or harms associated with it because they want to maximize their profits.”
“I also think there are things our governments should be doing to impose those requirements on these companies, but in the absence of robust government regulation, the courts have proven to be the one place where we can hold these companies accountable and force them to change their behavior,” Platkin said.
Technical focus
Chesterton’s filing came weeks before back-to-back landmark verdicts against two of the world’s biggest tech companies.
On March 24, a New Mexico jury ordered Meta – the owner of Facebook, Instagram and WhatsApp – to pay $375 million for child exploitation and misleading the public about platform safety. It marked the first state attorney general trial against the social media giant.
Just a day later, a jury in Los Angeles found Meta and Google liable for addictive platform design and awarded $6 million in damages to a 20-year-old plaintiff who said she struggled with depression and anxiety after becoming addicted to Instagram and YouTube as a child.
Both companies reportedly plan to appeal, according to NPR. Meta told the news outlet it is confident in its record of protecting teens online while Google described YouTube as a “responsibly built streaming platform” and “not a social media site.”
Technology can have huge potential benefits. And the companies that put it in the marketplace should have to develop these technologies and deploy them safely, responsibly and consistent with the law.
– Matt Platkin, founder, Platkin LLP
Platkin said the verdicts could influence the outcomes of thousands of pending lawsuits and expects these kinds of cases “to continue to be successful.”
“Technology can have huge potential benefits. And the companies that put it in the marketplace should have to develop these technologies and deploy them safely, responsibly and consistent with the law,” he said.
‘Core legal protections’
Platkin believes technology like AI has the potential to make work more efficient across numerous sectors – even the legal industry. But that doesn’t mean the world should accept innovation without safeguards, he said.
“I think the tech companies have tried to frame the debate by sort of positioning it this way as like, ‘you’re either with us or you’re against us.’ And I just don’t think that’s right. I don’t think that’s a healthy way to look at this, and I don’t think that’s the way the courts are looking at it,” he said.
“I don’t think there’s anything inconsistent with saying I think it can be very powerful and helpful tool and I’m excited about that, but I’m also deeply concerned of the rate at which they’re violating the law and hurting people. And I think those two things are not inconsistent,” Platkin said.
“We’re proud to be a firm that’s representing people on the cutting edge of these cases. And these are cases that I think ultimately will transform how these companies behave. These are companies that are run by people who aspire to be trillionaires … And they’re some of the largest companies in the history of the world,” he said.
Platkin went on, “They have to abide by the law and they can’t put profits ahead of people’s safety. They can’t put unsafe products out in the marketplace. They can’t mislead the public about the safety of their products. Those are sort of core legal protections. And that is essentially the theme of what we have asserted across the range of tech cases that we’ve brought. It’s certainly true in Rita’s case, and I’m sure there will be others that we will be bringing in the not-too-distant future that allege similar harms.”
Taking on the ‘tough fights’
“The reason why I formed this firm was to be in a position to be able to take on these tough fights, particularly against the tech industry, which a lot of firms won’t do,” he said. “They spend a lot of money, which is their right, on legal fees, and they have a lot of lawyers. But, I think we’re pretty good at what we do and we’re prepared to bring these fights just like we did in the AG’s office on behalf of clients, public and private, across the country.”
“We are working on a lot of potential cases, particularly involving the tech industry. Nothing I can share publicly, but I think it’s safe to say stay tuned. There’s going to be a lot more to come,” he said.

Platkin went on to say, “I think, not just as a lawyer or as a former attorney general, but as a father of two young kids, that these are some of the most important lawsuits that we’ll see in my lifetime.”
Recalling the Big Tobacco litigation of the 1990s, Platkin said he believes lawsuits are the necessary tools to stop corporations from knowingly marketing harmful products under the guise of safety. Just as legal action ended deceptive tobacco advertising to teens, Platkin said he believes verdicts against tech companies will force a similar, permanent shift in how they are held accountable.
“I think that you’re going to see something very similar when you look at it 10, 15 years from now, maybe hopefully less, where social media companies will be in a very different place and AI companies behaving in a very different way. And I think lawsuits brought by people like me and clients and brave individuals like Rita are going to be the reasons why,” Platkin said.
Establishing a precedent
Platkin views Chesterton’s lawsuit as a case that could potentially help redefine the boundaries of AI accountability. By pursuing this action, he aims to establish a decisive legal precedent that ensures emerging technologies are held to the same safety and transparency standards as any other industry.
“I think what you’re seeing in the context of these tech companies – we saw it … with social media and we’re starting to see it with AI – is that private litigation and public litigation principally filed by state attorneys general are leading the way in changing these companies’ behaviors,” he said.
“And the theories that are being put forth are not novel. Essentially every other company on the planet has to ensure that their products are safe. They can’t mislead the public about the safety of their products … And if they do, there’s consequences for that,” he said.
“It’s product liability. And then, in some of the cases, it’s consumer protection or public nuisance laws,” he said, adding, “You can’t put unsafe products in the marketplace and you can’t lie about their safety. If you know something is unsafe, you can’t tell the public it’s safe. Those are basic things.”
“If somebody tells me that this is safe and they know it’s not safe, that they should be held accountable for that. That’s sort of basic 101, I think, when it comes to legal protections in this country. And these companies are essentially arguing that no, that doesn’t apply to them. And I think time and again, they’ve been proven wrong, and I suspect that that trend is going to continue,” Platkin said.
The post Former AG Platkin sues OpenAI over ChatGPT mental harms appeared first on NJBIZ.
