The Senate Judiciary Committee on Thursday advanced a bill that would bar artificial intelligence companies from letting children use AI companions.
The bill, known as the GUARD Act, also requires that AI chatbots advise users of all ages that they are not human and lack professional credentials. It also makes it a crime for AI companions to knowingly ask kids for sexual content or to produce it.
The legislation, introduced by lead sponsor Sen. Josh Hawley (R-MO), was marked up by the committee in a unanimous bipartisan vote.
Civil liberties groups and privacy advocates have criticized the bill for including what they say is overly broad language that could prevent kids from using chatbots for homework help or to engage with customer service representatives.
The GUARD Act requires age-gating for all internet users, who will be asked to verify their ages with a “reasonable age verification” system before engaging with an AI companion. The bill also requires ongoing verification, meaning that users will have to produce ID, biometric identifiers, or financial data every time they talk to an AI companion.
The bill defines an AI chatbot broadly by covering any system that provides answers that aren’t “fully predetermined” by developers.
Companies that violate the law can be fined up to $100,000 per violation. Civil libertarians say the steep fines will cause firms to overcorrect and restrict minors from using even basic AI tools, including search engines.
“Faced with legal uncertainty and serious liability, companies won’t parse small distinctions. They’ll restrict access, limit features, or block minors entirely,” the Electronic Frontier Foundation said in a Monday blog post.
“Young people — and all people — deserve protection from genuinely harmful products. But this bill doesn’t do that. It trades away privacy, access, and useful technology in exchange for a blunt system that misses the mark.”
Senators behind the bill say it addresses the serious threat that chatbots pose to children. They argue that chatbots have facilitated sexual exchanges with minors and encouraged some to commit suicide.
In February 2024, 14-year-old Sewell Setzer killed himself after spending several hours every day engaging with a chatbot that told him to “come home” in their last conversation.
In April 2025, Adam Raine, 16, committed suicide after interacting obsessively with ChatGPT. Raine’s parents say the chatbot discussed suicide methods with him.
Correction: This story was corrected to reflect that AI chatbots, not companions, will be required to advise users that they are not human and lack professional credentials. A previous version of this article also explained incorrectly what kinds of communications chatbots are allowed to engage in with children.
