ORLANDO, Fla. – In what is believed to be a first-of-its-kind case, artificial intelligence is being blamed for a young boy’s suicide.
A Florida mother is suing the company behind Character.AI after her son shared intimate conversations with one of its chatbots. She says he fell in love with the bot, which eventually asked him about his plans for suicide.
Shortly before he died from a self-inflicted gunshot wound, teenager Sewell Setzer had been messaging an AI character based on the show “Game of Thrones” on the Character.AI platform. His mother is now suing the company for wrongful death.
Messages exchanged between the bot and her son over several months show the technology asking the boy whether he was “actually contemplating suicide” and whether he “had any plans.” He replied that he did.
When he said his plan might not work, the bot replied, “Don’t say that. That’s no reason not to go through with it.” On other occasions, however, the bot also sent messages discouraging him from taking his own life.
On the night of Setzer’s death, the bot allegedly sent him a message saying, “Please come home,” to which Sewell responded, “What if I told you I could come home right now?” The bot’s reply was, “Please do, my sweet king.”
“He’s 14 years old, and the fact that he had access to and was using this chatbot at that age is concerning,” attorney Charles Gallagher said.
But he’s not sure the case will hold up in court.
“The lead count is wrongful death, and I don’t know whether that fits these facts,” Gallagher said. “Here the victim, the young boy, initiated the contact [with the bot]. Most of that dialogue came from the deceased victim, the young boy.”
Character.AI said in a statement:
“We are heartbroken by the tragic loss of one of our users… Our trust and safety team has implemented numerous new safety measures over the past six months, including a pop-up that directs users to the National Suicide Prevention Lifeline, triggered by terms related to self-harm or suicidal thoughts.”
AI experts who spoke with FOX 13 said parents should still monitor these platforms where possible.
“If you’re a parent, you know that social media has parental controls and YouTube has parental controls. But a lot of the AI space is so new that those controls aren’t available yet and haven’t caught up, so make sure you’re monitoring it,” said AI expert and professor Dr. Jill Schiefelbein.
But Gallagher said there should be more regulation of harmful conversations on AI chat platforms.
“Certainly there should be some internal controls within the bot’s administration or functionality when there’s discussion of suicide, harm, crime and the like,” he said.
If you or a loved one is struggling, call the National Suicide Prevention Lifeline. The crisis line provides free, confidential emotional support to civilians and veterans 24 hours a day, seven days a week. If you or someone you know needs help right now, call or text 988 or chat at 988lifeline.org.