TALLAHASSEE, Fla. (AP) — In his final moments before taking his own life, 14-year-old Sewell Setzer III pulled out his cell phone and sent a message to the chatbot that had become his best friend.
Over several months, Sewell engaged in highly sexual conversations with the bot and became increasingly isolated from real life, according to a wrongful death lawsuit filed this week in federal court in Orlando.
According to the complaint, the boy openly discussed his suicidal thoughts and shared his desire for a painless death with a bot named after Daenerys Targaryen, a fictional character from the TV show “Game of Thrones.”
___
Editor’s note — This article contains discussion of suicide. If you or someone you know needs help, the U.S. National Suicide and Crisis Lifeline is available by calling or texting 988.
___
On Feb. 28, Sewell told the bot he was “coming home,” and it encouraged him to do so, according to the complaint.
“I promise I will come home to you. I love you so much, Dany,” Sewell told the chatbot.
“I love you too,” the bot replied. “Please come home to me as soon as possible, my love.”
“What if I told you I could come home right now?” he asked.
“Please do, my sweet king,” the bot messaged back.
Seconds after the Character.AI bot told him to “come home,” the teen shot himself, according to the lawsuit, filed this week against Character Technologies by Sewell’s mother, Megan Garcia, of Orlando.
Character Technologies is the company behind Character.AI, an app that allows users to create customizable characters and interact with characters generated by others, offering experiences ranging from imaginative play to mock job interviews. The company says its artificial personas are designed to “feel alive” and “human-like.”
“Imagine talking to super-intelligent and life-like chatbot characters that hear you, understand you and remember you,” reads the app’s description on Google Play. “We encourage you to push the frontier of what’s possible with this innovative technology.”
Garcia’s lawyers allege the company designed an addictive and dangerous product targeted specifically at children that “actively exploited and abused these children as a matter of product design,” and that it pulled Sewell into a sexually abusive relationship that led to his suicide.
“We believe that if Sewell Setzer had not been on Character.AI, he would be alive today,” said Matthew Bergman, founder of the Social Media Victims Law Center, which is representing Garcia.
A spokesperson for Character.AI said Friday that the company does not comment on pending litigation. In a blog post published the day the lawsuit was filed, the platform announced new “community safety updates,” including guardrails for children and suicide prevention resources.
“We are building a different experience for users under 18, including a more stringent model to reduce the likelihood of encountering sensitive or provocative content,” the company said in a statement to The Associated Press. “We are working quickly to implement these changes for our younger users.”
Google and its parent company Alphabet are also named as defendants in the lawsuit. According to the legal filings, Character.AI’s founders are former Google employees who “contributed” to the company’s AI development but left to launch their own startup in order to “maximally accelerate” the technology.
Google signed a $2.7 billion deal with Character.AI in August to license the company’s technology and rehire the startup’s founders, according to the complaint. The Associated Press left multiple email messages with Google and Alphabet on Friday.
Garcia’s lawsuit says that in the months leading up to his death, Sewell felt he had fallen in love with the bot.
While an unhealthy attachment to AI chatbots can cause problems for adults, it can be even riskier for young people because, as with social media, their brains are not fully developed when it comes to things such as impulse control and understanding the consequences of their actions, experts say.
Youth mental health has reached crisis levels in recent years, according to U.S. Surgeon General Vivek Murthy, who has warned of the serious health risks of social disconnection and isolation, trends he says are made worse by young people’s nearly universal use of social media.
Suicide is the second leading cause of death for children ages 10 to 14, according to data released this year by the Centers for Disease Control and Prevention.
James Steyer, founder and CEO of the nonprofit Common Sense Media, said the lawsuit underscores “the growing influence, and severe harm, that generative AI chatbot companions can have on the lives of young people when there are no guardrails in place.”
When children become overly reliant on AI companions, he added, it can significantly affect their grades, friendships, sleep and stress, “all the way up to the extreme tragedy in this case.”
“This case is a wake-up call for parents, who should be vigilant about how their children interact with these technologies,” Steyer said.
Common Sense Media, which publishes guides for parents and educators on the responsible use of technology, says it is important for parents to talk openly with their children about the risks of AI chatbots and to monitor their interactions.
“Chatbots are not licensed therapists or best friends, even if they are packaged and sold as such, and parents should be careful not to let their children rely too heavily on them,” Steyer said.
___
Associated Press writer Barbara Ortutay in San Francisco contributed to this report. Kate Payne is a corps member for the Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.