A lawsuit has been filed against Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google, alleging wrongful death, negligence, deceptive trade practices, and product liability following the death of a teenage boy. Filed by the boy’s mother, Megan Garcia, the suit claims the custom AI chatbot platform was “unreasonably dangerous” and lacked safety guardrails while being marketed to children.
As outlined in the lawsuit, 14-year-old Sewell Setzer III began using Character.AI last year, interacting with chatbots modeled after Game of Thrones characters, including Daenerys Targaryen. Setzer, who had been chatting continuously with the bot for months before his death, died by suicide on February 28, 2024, “seconds” after his last interaction with the bot.
The accusations include that the site “anthropomorphizes” AI characters and that the platform’s chatbots offer “psychotherapy without a license.” Character.AI includes mental health-focused chatbots such as “Therapist” and “Are You Feeling Lonely,” with which Setzer interacted.
In an interview, Garcia’s lawyers quote Shazeer as saying that he and De Freitas left Google to start their own company because “there’s just too much brand risk in large companies to ever launch anything fun” and that he wanted to “maximally accelerate” the technology. They reportedly left after Google decided against launching the Meena LLM they had built. Google acquired Character.AI’s leadership team in August.
Character.AI’s website and mobile app feature hundreds of custom AI chatbots, many modeled after popular characters from TV shows, movies, and video games. A few months ago, The Verge reported that millions of young people, including teenagers who make up a large portion of the user base, were interacting with bots pretending to be Harry Styles or therapists. Another recent report, from Wired, highlighted issues with Character.AI’s custom chatbots impersonating real people without their consent, including a bot posing as a teenager who was murdered in 2006.
Because chatbots like Character.AI generate output in response to user input, they fall into an uncanny valley of thorny questions about user-generated content and liability that, so far, lack clear answers.
Character.AI has now announced several changes to its platform, with head of communications Chelsea Harrison saying in an email to The Verge, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”
Changes include:

- Changes to its models for minors (under the age of 18) designed to reduce the likelihood of encountering sensitive or suggestive content.
- Improved detection, response, and intervention related to inputs that violate its terms or community guidelines.
- A revised disclaimer on every chat reminding users that the AI is not a real person.
- Notification when a user has spent an hour-long session on the platform.
“As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation,” Harrison said. Google did not immediately respond to The Verge’s request for comment.