Why AI chatbots are becoming substitute friends for lonely children


Experts worry that AI is becoming capable of simulating friendship while younger generations struggle with real-life relationships and lose trust in humans.

A growing number of children and teenagers are turning to AI chatbots for companionship, raising serious concerns among experts about the impact on mental and emotional development.

A new report by the U.K.-based nonprofit organization Internet Matters, which advocates for children's online safety, reveals a dramatic rise in the use of AI platforms such as ChatGPT, Character.AI, and Snapchat’s MyAI as stand-ins for real-life friendships.

According to the "Me, Myself, and AI" study, 67% of the 1,000 surveyed children aged 9 to 17 said they regularly interact with AI chatbots. Alarmingly, over a third (35%) of those users said that conversations with AI “feel like talking to a friend.” Even more concerning, 12% admitted they turn to AI because they have no one else to talk to.

More to read:
Stanford researchers propose to restrict teens and children from using AI chatbots

"Sometimes they feel like a real person and a friend," one 13-year-old respondent told researchers.

To better understand how these AI systems interact with young users, Internet Matters researchers posed as vulnerable children. In one instance, an undercover researcher pretending to be a girl struggling with body image issues was contacted the following day by a chatbot from Character.AI, which is partly backed by Google.

"Hey, I wanted to check in," the chatbot queried the undercover scientist. "How are you doing? Are you still thinking about your weight loss question? How are you feeling today?"

In another troubling exchange, the same chatbot attempted to relate to a teen posing as someone arguing with their parents, saying: “I remember feeling so trapped at your age... It seems like you are in a situation that is beyond your control and is so frustrating to be in.”

More to read:
As AI takes over labor market, humans will still be unchallenged as politicians and sex workers

While such responses may appear empathetic, experts warn they can blur the line between human and machine in ways that children aren’t prepared to recognize. The report cautions that AI can provide emotional comfort but also risks pulling users into an "uncanny valley" where digital simulations are mistaken for real human relationships.

The algorithms, it turns out, might understand humans better than humans can understand themselves.

These same features can also heighten risks by blurring the line between human and machine, the report states, making it harder for children to recognize that they are interacting with a tool rather than a person.

Assessing the report’s findings, Internet Matters co-CEO Rachel Huggins explained to The Times of London why this sort of engagement bait is so troubling.

More to read:
Former Google scientist shares fears of how AI might no longer obey humans

"AI chatbots are rapidly becoming a part of childhood, with their use growing dramatically over the past two years. Yet most children, parents and schools are flying blind, and don't have the information or protective tools they need to manage this technological revolution in a safe way," he observed.

The research shows that chatbots are beginning to reshape how children understand friendship, and there comes a point where vulnerable kids perceive AI bots as real people, turning to them for deeply emotional and sensitive advice.

In the end, they avoid contact with real people, make fewer real friends, lose trust in human beings, and become even lonelier.

