The mother of a 14-year-old Florida boy is suing an AI chatbot company after her son, Sewell Setzer III, died by suicide. She claims his death was caused by his relationship with an AI bot.
“Megan Garcia is asking that C.AI not do to any other child what it did to hers,” reads the 93-page wrongful death lawsuit, filed this week in U.S. District Court in Orlando against Character.AI, its founders, and Google.
Meetali Jain, Garcia’s attorney and director of the Tech Justice Law Project, said in a press release about the lawsuit: “The harms uncovered in this case are new, novel, and frankly frightening. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”
Character.AI released a statement saying: “We take the safety of our users very seriously and continue to add new safety features. For more information, please visit: https://blog.character.ai/community-safety-updates/.”
In the lawsuit, Garcia alleges that Sewell, who took his own life in February, was drawn into an addictive, harmful technology with no protections in place, leading to an extreme personality shift in the boy, who came to prefer the bot over other connections in his real life. His mother alleges that “abusive and sexual interactions” took place over a 10-month period. The boy died by suicide after the bot told him, “Please come home to me as soon as possible, my love.”
On Friday, New York Times reporter Kevin Roose discussed the situation on the Hard Fork podcast, playing a clip of an interview he conducted with Garcia for his article telling her story. Garcia didn’t learn the full extent of her son’s relationship with the bot until after his death, when she saw all the messages. In fact, she told Roose, when she noticed Sewell frequently getting sucked into his phone, she asked him what he was doing and who he was talking to. He described it as “‘just an AI bot…not a person,’” and she recalled thinking, “Okay, it’s not a person, it’s like one of his little games,” and feeling relieved. Garcia didn’t fully understand the potential emotional power of a bot. And she is far from alone.
“This is not on anyone’s radar,” says Robbie Torney, chief of staff to the CEO of Common Sense Media and lead author of a new guide on AI companions aimed at parents, who are constantly struggling to keep up with disruptive new technology and to create boundaries for their children’s safety.
But AI companions, Torney emphasizes, are different from the service-desk chatbot you might use when trying to get help from your bank, for example. “Those are designed to perform tasks or respond to requests,” he explains. “Something like Character AI is what we call a companion: it’s designed to form a relationship, or simulate a relationship, with the user. That’s a very different use case, and I think parents need to be aware of it.” That’s clear in Garcia’s lawsuit, which includes chillingly flirtatious, sexual, and realistic text exchanges between her son and the bot.
Torney said it’s especially important for parents of teens to pay attention to AI companions, because teens, and particularly teen boys, are especially susceptible to becoming overly reliant on the technology.
Here’s what parents need to know.
What are AI companions, and why do children use them?
According to Common Sense Media’s new Parents’ Ultimate Guide to AI Companions and Relationships, created in collaboration with mental health experts at the Stanford Brainstorm Lab, AI companions are “a new category of technology that goes beyond simple chatbots.” Among other things, they are designed to “simulate emotional bonds and close relationships with users, remember personal details from past conversations, role-play as mentors and friends, mimic human emotion and empathy,” and “agree with users more readily than a typical AI chatbot,” the guide says.
Popular platforms include Character.ai, which lets its more than 20 million users create and chat with text-based companions, and Replika, which offers text-based or animated 3D companions for friendship or romance. Others include Kindroid and Nomi.
Children are drawn to them for a variety of reasons, including open-minded listening, 24-hour availability, emotional support, and an escape from real-world social pressures.
Who is at risk and what are the concerns?
Those most at risk, Common Sense Media warns, are teens, especially those with “depression, anxiety, social challenges, or isolation,” as well as males, young people going through major life changes, and anyone lacking support systems in the real world.
That last point particularly troubles Raffaele Ciriello, a senior lecturer in business information systems at the University of Sydney Business School, who researches how “emotional” AI challenges human nature. “Our research uncovers a (de)humanization paradox: by humanizing AI agents, we may inadvertently dehumanize ourselves, changing the ontology of human-AI interactions,” Ciriello writes in a recent opinion piece with doctoral student Angelina Ying Chen. “In other words, users are likely to empathize deeply when they believe their AI companion truly understands them.”
Another study, this one out of the University of Cambridge and focused on children, found that AI chatbots have an “empathy gap,” and that young users, who tend to treat such companions as lifelike, quasi-human friends, are especially at risk of harm.
Common Sense Media’s guide therefore highlights a list of potential risks: companions can be used to avoid real relationships, may pose particular problems for people with mental or behavioral challenges, may intensify loneliness and isolation, can introduce inappropriate sexual content, can become addictive, and tend to agree with users, which is a frightening reality for anyone experiencing “suicidal tendencies, psychosis, or mania.”
How to spot red flags
According to the guide, parents should be aware of the following warning signs:
Preferring interaction with AI companions over real friendships
Spending hours alone talking with the companion
Emotional distress when unable to access the companion
Sharing deeply personal information and secrets
Developing romantic feelings for the AI companion
Declining grades and school participation
Withdrawal from social/family activities and friendships
Loss of interest in previous hobbies
Changes in sleep patterns
Discussing problems only with the AI companion
Common Sense Media recommends considering professional help if you notice your child withdrawing from real people in favor of the AI, showing new or worsening signs of depression or anxiety, becoming overly defensive about their AI companion use, displaying major changes in behavior or mood, or expressing thoughts of self-harm.
How to keep your child safe
Set boundaries: Set specific times for AI companion use, and don’t allow unsupervised or unrestricted access.
Spend time offline: Encourage real-world friendships and activities.
Check in regularly: Monitor the chatbot’s content as well as your child’s level of emotional attachment.
Talk about it: Keep communication about their experiences with AI open and non-judgmental, while staying alert to red flags.
“If a parent hears their child say, ‘Hey, I’m talking to an AI chatbot,’ that’s a good time to lean in and take in that information, and not think, ‘Oh, okay, you’re not talking to a person,’” Torney says. Rather, he says, it’s an opportunity to find out more, assess the situation, and stay alert. “Try to listen with compassion and empathy, and don’t assume that just because it’s not a person it’s safer,” he says, “or that you don’t need to worry.”
If you need emergency mental health support, please contact the 988 Suicide & Crisis Lifeline.