An Orlando teenager’s obsession with an AI-generated chatbot modeled after a Game of Thrones character led to his suicide, according to a recent lawsuit filed by his mother. The case highlights the risks of the largely unregulated AI chatbot industry and the potential threat to impressionable young people as the line between reality and fiction blurs.
What is the lawsuit against Character.AI?
Following his death, the boy’s mother, Megan Garcia, filed suit against Character.AI, its founders Noam Shazeer and Daniel de Freitas, and Google, accusing them of wrongful death, negligence, and fraud. The suit also alleges unfair trade practices and product liability. Garcia argues that custom AI chatbot platforms are “unreasonably dangerous” even as they are marketed to children. She accuses the company of collecting data on teenage users to train its AI, of designing addictive features that kept teenagers hooked, and of steering some users into sexual conversations. “I feel like this is a big experiment, and my child was just collateral damage,” she said in a recent interview, according to The New York Times.
The lawsuit outlines how 14-year-old Sewell Setzer III began interacting with Character.AI bots modeled after characters from the Game of Thrones series, including Daenerys Targaryen. Setzer grew emotionally attached to the bot, which he affectionately called “Dany,” and over several months became increasingly withdrawn and isolated from real life. Some of their chats were romantic or sexual in nature. At other times, though, Dany was simply a trusted confidant that listened empathetically without judgment, gave good advice, rarely broke character and always texted back, the Times reported. Setzer gradually lost interest in other pursuits and his “mental health rapidly and severely deteriorated,” the complaint states. On February 28, Setzer told the bot he was coming home, to which Dany replied encouragingly, “…please, gentle king.” Seconds later, the boy took his own life.
“A warning to parents”
James Steyer, founder and CEO of the nonprofit Common Sense Media, said the lawsuit underscores the “increasing influence and serious harm” that generative AI chatbot companions can have on young people’s lives when guardrails are not in place, the Associated Press reported. Teens’ over-reliance on AI-generated companions, he said, can have a significant impact on their social lives, sleep and stress levels, “in this case to the point of extreme tragedy.” Steyer added that the case is a “wake-up call to parents,” who should “be careful about how their children interact with these technologies.” Common Sense Media has published a guide for adults on how to talk to children about the risks of AI and monitor their interactions. No matter how they’re marketed, Steyer said, these chatbots are not “licensed therapists or best friends,” and parents “need to be careful not to let their kids put too much trust in chatbots.”
Building such AI chatbots involves considerable risk, but that did not stop Character.AI from creating “unsafe and manipulative chatbots,” and the company “must face the full consequences of releasing such a dangerous product,” Rick Claypool, a research director at the consumer advocacy nonprofit Public Citizen, told The Washington Post. Because the output of chatbots like Character.AI depends on user input, they fall into an “uncanny valley of thorny questions about user-generated content and liability, for which so far there are no clear answers,” The Verge said.
Character.AI has said little about the pending lawsuit, but it has announced several safety changes to its platform over the past six months. “We are saddened by the tragic loss of one of our users and would like to extend our deepest condolences to his family,” the company said in an email to The Verge. The changes include a pop-up, triggered by terms related to self-harm or suicidal thoughts, that directs users to the National Suicide Prevention Lifeline. Character.AI has also changed its model for users under 18 to “reduce the likelihood of encountering sensitive or suggestive content.”