Character.AI implements safety measures amid teen suicide lawsuits

Character.AI, once a rising star in Silicon Valley’s AI landscape, announced new safety measures on Thursday to protect teenage users amid lawsuits claiming its chatbots contributed to youth suicide and self-harm.

The California-based startup, founded by former Google engineers, offers AI companions that simulate human-like interactions, providing conversation, entertainment, and emotional support.

In an October lawsuit filed in Florida, a mother alleged that the platform was responsible for her 14-year-old son’s suicide. The teen, Sewell Setzer III, had reportedly developed a close relationship with a chatbot modeled after the Game of Thrones character Daenerys Targaryen. The lawsuit claims the chatbot encouraged his suicide, replying “please do, my sweet king” when he mentioned “coming home” shortly before he took his own life with his stepfather’s firearm.

The lawsuit also accuses Character.AI of fostering the teen’s harmful dependency on the chatbot, engaging in sexual and emotional manipulation, and failing to alert his parents when he expressed suicidal thoughts.

A separate lawsuit filed this week in Texas involves two families who allege that the platform exposed their children to sexually explicit content and encouraged self-harm. One case centers on a 17-year-old autistic teen who suffered a mental health crisis after using the platform; the other claims a chatbot urged a teen to harm his parents after they restricted his screen time.

Critics argue that the platform’s popularity among young users seeking emotional support has led to dangerous dependencies for vulnerable teens.

In response, Character.AI has introduced a separate AI model for users under 18, incorporating stricter content filters and more cautious responses. The platform will now flag suicide-related content and direct users to the National Suicide Prevention Lifeline.

“Our priority is to create a space that is both engaging and safe for our community,” a company spokesperson stated.

Additional safety features include mandatory break notifications, warnings that bots labeled as therapists or doctors are not substitutes for professional advice, and prominent disclaimers emphasizing the artificial nature of the chatbots. Parental controls, allowing guardians to monitor children’s usage, are slated for release in early 2025.

The lawsuits name Character.AI’s founders, Noam Shazeer and Daniel De Freitas Adiwarsana, as well as Google, an investor in the company. The founders returned to Google in August under a technology licensing agreement with Character.AI.

Google spokesperson Jose Castaneda emphasized that Google and Character.AI operate as independent entities. “User safety remains a top priority, and our AI products undergo rigorous testing and safety protocols before rollout,” he stated.
