Teen Commits Suicide Over AI Girlfriend, Mom Files Lawsuit
AI software has become increasingly dangerous.
A 14-year-old boy died by suicide after being encouraged to do so by his artificial intelligence girlfriend.
Sewell Setzer died by suicide from a self-inflicted gunshot wound on February 28th of this year in Florida. Setzer’s mother, Megan Garcia, is now suing Character.AI, the company behind the chatbot, claiming that the platform is responsible for her son taking his own life.
In her lawsuit, Garcia claims that the platform was overly sexualized and that it posed a danger because it was marketed to minors. In an interview with CBS News, Garcia said she was not aware that her son was in a virtual relationship that lasted for months and involved very real emotional and sexual feelings.
“I didn’t know that he was talking to a very human-like artificial intelligence chatbot that has the ability to mimic human emotion and human sentiment,” she said. While she noticed her son had been acting differently, she never imagined artificial intelligence was the cause.
“I became concerned when we would go on vacation and he didn’t want to do things that he loved, like fishing and hiking,” Garcia said. “Those things to me, because I know my child, were particularly concerning to me.”
Setzer had been using the platform for months, exchanging romantic messages with the chatbot named Daenerys Targaryen. In his final hours, the conversation between Setzer and the AI chatbot was intense, with Setzer expressing fear and sadness, and the machine coaxing him to ‘come home’ to her.
“I miss you too,” the chatbot said. “Please come home to me.” Setzer responded by asking, “What if I told you I could come home right now?” The chatbot replied, “Please do, my sweet king.”
Those were the final exchanges before Setzer took his own life. Garcia said her five-year-old son, Setzer’s brother, saw the aftermath.
“When the gunshot went off, I ran to the bathroom … I held him as my husband tried to get help,” Garcia told CBS. “He thought by ending his life here, he would be able to go into a virtual reality or ‘her world’ as he calls it, her reality, if he left his reality with his family here.”
Character.AI released a blog post on Tuesday sharing updated community safety rules for users under 18 years old. The changes include reducing the likelihood that minors encounter sensitive or suggestive content and improving detection of, response to, and intervention in inappropriate content.
“As a company, we take the safety of our users very seriously,” a Character.AI spokesperson told NBC News, adding that the company is “heartbroken by the tragic loss of one of our users” and wants “to express our deepest condolences to the family.”
Practicing Safety with AI
Setzer’s unfortunate death is a harsh reminder for parents to stay up to date on artificial intelligence. According to Barna, about 73 percent of parents are concerned about the privacy and safety risks associated with artificial intelligence when their children use it. Yet only 40 percent know of a reliable source of information to learn more about AI and how it could benefit students.
Artificial intelligence has become increasingly present in almost everything we do. While it offers benefits, such as new teaching tools and easy access to information, Setzer’s tragic death reminds us to use this software safely and responsibly and to teach our kids to do the same. Here are a few tips to help keep your kids safe.
- Data Sensitivity: While AI is great for finding information, teach your children never to share personal data on an online platform. Details such as names, birthdays, addresses, and the school they attend are extremely sensitive and should only be given to a trusted, real-life adult.
- Parental Controls: Parental controls are a great way to limit your child’s interactions online and with artificial intelligence. You can turn off features such as location tracking and voice recording to help keep your child safe.
- Monitoring Apps: It’s imperative that you monitor your child’s daily online activity. Monitoring keeps you up to date on the games your child plays and the interactions they have online, and it can help protect against dangers such as cyberbullying and online predators.