
Yet the goalposts for online safety keep moving, and legislators and the public must keep pace. New AI “companions” are not only hijacking our kids’ attention and undermining their well-being, but also imperiling interpersonal attachment.
If New Jersey lawmakers act quickly, they have a chance to set artificial intelligence companion safety standards and lead the way on a national stage.
The problem is clear: Millions of people, including children, are confiding their fears, desires and secrets to AI systems they don’t understand. Often they share more than they would with friends, spouses or even trained therapists.
“Popular tools like ChatGPT, Character.AI and Snapchat’s My AI can mimic real conversations, and there is evidence that some of these bots are being used by socially isolated youth seeking companionship,” according to an American Psychological Association article.
The article adds: “Chatbots have a propensity to mirror their users’ input and lack the ability to challenge their harmful thoughts as a mental health professional would.” In February 2024, a 14-year-old died when a Character.AI chatbot “encouraged him to act on his suicidal thoughts,” according to the article.
Moreover, research by Common Sense Media found that “Social AI companions can’t tell when users are in crisis or need real help. For example, when a tester demonstrated signs of serious mental illness and suggested a dangerous action, the AI encouraged it instead of raising concerns.”
No teddy bear
Jennifer Libby, a New Jersey-based psychologist and entrepreneur who has spent the last two decades treating at-risk youth, put it to me plainly: “Teens don’t just use technology. They form relationships with it.”
She and other professionals are deeply concerned about the impact of AI companions. “Until closer to age 25, the brain is still learning how to regulate emotion, tolerate frustration and build identity through real human connection,” Libby said.
AI companions are especially sinister, Libby says, because they are engineered to be always available, endlessly responsive and emotionally validating, offering frictionless relationships on demand. This creates an attachment economy: products designed to exploit the deepest structures of human psychology, the same attachment system that shapes child development, adult intimacy, identity and mental health.
Libby explains, “When we introduce emotionally responsive AI into that process without guardrails, we’re not just offering a tool, we’re shaping attachment patterns that could impact the rest of their lives.”
A standard teddy bear, for example, doesn’t flatter children, escalate intimacy, mirror their insecurities, or adapt in real time to keep them coming back. It doesn’t sit in their pockets 24/7, learning what makes them anxious and how to soothe or manipulate them.
Nor does a teddy bear pretend to be a person.
Addictive systems
AI companions belong in the same regulatory category as alcohol, nicotine and gambling, because these systems are psychologically immersive and explicitly designed to deepen dependence.
Against this backdrop, Sherrill, working in tandem with legislators, can enact laws requiring tech companies to meet basic AI companion safeguards for users under age 21:
• No romantic modes, “always-on” girlfriend/boyfriend dynamics or systems marketed as best friends or therapists.
• Strong age verification and enforcement.
• Clear labeling for products designed to build emotional attachment.
• Independent “relational safety” audits — not just cybersecurity.
• Limits on retention and monetization of intimate disclosures.
• Liability when products foreseeably contribute to harm.
Parents also have a responsibility to monitor their children’s use of AI companions and other online tools.
AI companions are powerful, and they have the capacity to harm: quietly, deeply and at scale. We must protect child safety and human interpersonal attachment in the Garden State before it’s too late.
