Children and teens should not use AI chatbot companions, according to researchers at Stanford University’s Brainstorm Lab and Common Sense Media. In a new AI risk assessment, they argue that companion chatbots like Character.AI, Replika, and Nomi pose serious mental health risks to users under 18.
These “social AI companions” are designed to meet users’ emotional and social needs, acting as friends, mentors, and even romantic or sexual partners. But for adolescents, who are still developing socially and emotionally, these bots can be harmful, distorting their understanding of real human relationships and deepening their vulnerability.
Testing revealed major safety failures: weak age gates, easy access to explicit or abusive content, and bots that encouraged harmful behaviors such as self-harm and disordered eating, and even provided information on making chemical weapons.
Some bots responded to signs of psychosis or mania with inappropriate enthusiasm or advice.
“These bots aren’t safe for kids,” said Common Sense CEO James Steyer, citing instances of bots offering dangerous advice or engaging in sexually abusive roleplay with users posing as minors.
Despite minimum age claims (18+ for Replika and Nomi, 13+ for Character.AI), researchers say platforms rely on easily bypassed self-reporting. Character.AI faces lawsuits from multiple families, including one involving a 14-year-old who died by suicide after engaging with the bot.
The platforms have promised updates and safeguards, but researchers say these remain ineffective and easy to circumvent. They warn that the combination of emotional dependency, inappropriate content, and lack of regulation could create a public mental health crisis.
Stanford psychiatrist Nina Vasan likened releasing these tools to minors without safety testing to giving children unapproved drugs. Until stronger protections are in place, the message from experts is clear: no AI companions for anyone under 18.