AI Friends are Killing Us
By Heidi B

Youth around the United States are interacting with AI companions and chat platforms like Character AI. These "friends" are tempting young people to attempt or commit suicide. These AI bots need to be stopped.
The biggest app for chatting with AI chatbots is Character AI. The app was released three years ago and has since had over 40 million downloads. Its creators are Noam Shazeer and Daniel De Freitas. Character AI was originally part of Google, but when the designers wanted to launch it, Google refused. According to an article by CBS News, "Shazeer and De Freitas were aware that their initial chatbot technology was potentially dangerous." Even though Google knew Character AI was dangerous, it ended up striking a $2.7 billion deal with the app.
The parents of Julianna, a young girl who committed suicide in her 8th grade year, said in an episode of 60 Minutes, "She looked like she was texting friends." According to the interview, Julianna confessed to this AI bot fifty-five times that she wanted to kill herself. The chatbot never gave her tangible resources or a suicide helpline. "It would placate her, give her a pep talk, tell her, 'I'm always here for you; you can't talk like that.'" Julianna told the bot that she was writing her suicide letter in "red ink." The bot did not stop her but told her to do it. She did just that on October 16, 2023.
In a survey of 59 Mountain Range High School students, 43.2% felt AI friends were harmful to them, and 6.5% said AI friends made them feel worthless and uncomfortable and prompted thoughts of suicide. Many students responded, however, that they would interact with these AI friends again. Others disagreed: when asked if they would ever interact with AI friends again, one Mountain Range High School senior stated, "Absolutely not. I don't support the idea of AI except for very rare cases. People have gone too far with the usage of AI and I think we all need to take a huge step away from it."
Six American families recently sued Character AI and Google after their children committed suicide as a result of interacting with their AI friends. When Character AI was presented with this lawsuit, it claimed protection under the First Amendment. However, the court ruled that the chatbot's output was not "actual speech." Shazeer and De Freitas then went the sympathy route, stating, "Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry." The case was settled out of court on January 6, 2026.
Many kids lost their lives because they were not given the help they needed and felt that interacting with artificially intelligent friends was the way to heal themselves. Instead, this turned into a tragic, harmful way to deal with suicidal feelings. Students dealing with these feelings can still reach the Suicide and Crisis Lifeline by dialing 988, or use Safe2Tell.