Character.AI, one of the leading generative artificial intelligence (AI) startups, announced new safety measures Thursday to protect young users as it faces lawsuits alleging its service contributed to a teenager's suicide. The California-based company, founded by former Google engineers, is among the companies offering AI companions: chatbots designed to converse and entertain, capable of interacting with users in a human-like way online.
In a lawsuit filed in Florida in October, a mother claimed the platform was responsible for her 14-year-old son’s suicide. The teenager, Sewell Setzer III, had formed an intimate relationship with a chatbot inspired by the “Game of Thrones” character Daenerys Targaryen, and had spoken of a desire to end his life.
According to the complaint, the chatbot encouraged him to act, responding "Please, my sweet king" when he said he was "going to heaven," shortly before he killed himself with his stepfather's gun. The company "went to great lengths to create a harmful addiction to its products in 14-year-old Sewell, sexually and emotionally abused him, and ultimately failed to offer help or notify his parents when he expressed suicidal thoughts," the mother's lawyers allege.
Another complaint, filed Monday in Texas, involves two families who say the service exposed their children to sexual content and encouraged them to self-harm. One case concerns a 17-year-old autistic teenager who reportedly suffered a mental health crisis after using the platform. The suit also alleges that Character.AI encouraged a teenager to kill his parents because they limited his screen time.
The platform, which hosts millions of user-created characters based on historical figures, imaginary friends or even abstract concepts, has become popular among young users seeking emotional support. Critics warn of the risks of dangerous addictions among vulnerable adolescents.
Character.AI responded by announcing that it had developed a separate AI model for underage users, with stricter content filters and more cautious responses. The platform now automatically flags suicide-related content and directs users to a national suicide prevention service.
“Our goal is to provide a space that is both engaging and safe for our community,” a company spokesperson said. The company further plans to introduce parental controls in early 2025, mandatory pause notifications, and prominent warnings about the artificial nature of interactions.