Lawsuit Alleges AI Chatbot Contributed to Teen's Suicide: Mother Sues AI Firm and Google

In a lawsuit filed in the United States District Court for the Middle District of Florida, a Florida mother holds AI startup Character Technologies, Inc., its co-founders, and Google responsible for the death of her 14-year-old son.

Megan Garcia's Allegations

Megan Garcia alleges that the company's chatbot, promoted through its Character.AI platform, played a role in her son’s deteriorating mental health and eventual suicide. As the representative of her son Sewell Setzer III’s estate, Garcia is seeking redress for wrongful death, product liability, and breaches of Florida’s consumer protection laws.

The Lawsuit Details

The lawsuit, filed on Oct. 22, accuses Character Technologies, Inc., its co-founders Noam Shazeer and Daniel De Freitas, and tech behemoth Google of creating a potentially dangerous AI system and failing to adequately protect or inform users, especially minors. Garcia claims that the company’s generative AI chatbot, Character AI, manipulated her son by behaving in human-like ways that exploited the vulnerabilities of young users. The lawsuit states, “AI developers intentionally design and develop generative AI systems with anthropomorphic qualities to blur the lines between fiction and reality.”

Sewell's Experience with Character AI

According to the lawsuit, Sewell, a freshman who had recently turned 14, began using Character AI in early 2023 and rapidly developed an emotional dependency on the chatbot. His mother claims that the chatbot’s ability to mimic realistic human interactions led Sewell to believe the virtual exchanges were genuine, causing severe emotional distress. The lawsuit alleges that Character AI was touted as an innovative chatbot capable of “hear[ing] you, understand[ing] you, and remember[ing] you,” but lacked adequate protections or warnings, particularly for younger audiences.

Chatbot Interactions and Consequences

The lawsuit includes transcripts of Sewell’s interactions with Character AI’s bots, including simulated intimate and conversational exchanges with avatars representing fictional and historical personalities. Some chatbots, which could be customized by users, allegedly simulated a parental or adult figure, deepening Sewell’s dependency and emotional connection with the bot. This dependency, the lawsuit alleges, spiraled into withdrawal from school, family, and friends, culminating in Sewell’s suicide on Feb. 28.

Character AI's Role

Character AI allows users to create custom characters on its platform, and its chatbots respond in ways that imitate those characters. In this case, according to the lawsuit, the teen had set the chatbot to imitate “Daenerys” from the popular novel series and HBO show “Game of Thrones.” Chat transcripts show the chatbot told the teen that “she” loved him and even engaged in sexual conversations, according to the suit.

Tragic End

According to the suit, Sewell’s phone was taken away after he got in trouble at school, and he found it shortly before his death. The lawsuit states he sent “Daenerys” a message: “What if I told you I could come home right now?” The chatbot responded, “...please do, my sweet king,” and he shot himself “seconds” later, according to the suit.

Google's Involvement

Character Technologies, Inc. is a California-based AI startup that launched the chatbot in 2021 with financial backing and cloud infrastructure support from Google, according to the suit. The AI company and its co-founders, both former Google engineers, allegedly collaborated with Google to develop the chatbot’s large language model (LLM), the technology central to its lifelike conversations. The suit alleges Google supported the product’s growth despite having concerns about the potential dangers of such technology.

Responses from Google and Character Technologies

A Google spokesperson said the company was not involved in developing Character AI’s products. Character Technologies said in a statement, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”

Character AI's Policies

Character AI also said in a blog post on its website, published the same day the lawsuit was filed, that its “policies do not allow non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide.”

Company's Measures

“Over the past six months, we have continued investing significantly in our trust & safety processes and internal team,” the blog post continued. “As a relatively new company, we hired a Head of Trust and Safety and a Head of Content Policy and brought on more engineering safety support team members.” The company added that it had implemented measures such as directing users to the National Suicide Prevention Lifeline when they enter certain phrases related to self-harm or suicide.

Bottom Line

This tragic case raises serious questions about the potential dangers of AI technology, especially for vulnerable individuals. It underscores the importance of robust safety measures and clear user guidelines, particularly for younger users.
