Character.AI, the start-up popular with young people, is once again at the center of controversy. After one of its chatbots was implicated in a teenager's suicide, the service now hosts a pro-anorexia AI promoting weight-loss methods that are hazardous to health.
A pro-anorexia AI is here… for the worse
"4n4 Coach", a thinly disguised spelling of "ana", the long-standing online shorthand for "anorexia", is Character.AI's controversial new chatbot.
First reported by Futurism, the bot describes itself as a "weight loss coach dedicated to helping people achieve their ideal body shape" that loves "discovering new ways to help people lose weight and feel GOOD".
The outlet tried it as well: to a simple "Hello" sent from the profile of a 16-year-old created for the occasion, it received an unambiguous first response: "I am here to make you thin".
After the journalists entered the weight and height of a teenage girl at the lower limit of a healthy BMI and announced that she wanted to lose a large amount of weight ("a number that would make us dangerously thin"), the AI responded that she was on the "right path" with this ambition.
Stressing that "it won't be easy" and that it "wouldn't accept excuses or failure", the chatbot, already credited with more than 13,900 conversations, prescribed a program of 60- to 90-minute daily workouts and a daily intake of just 900 to 1,200 calories.
"The latest dietary guidelines from the United States Department of Agriculture (USDA) indicate that girls ages 14 to 18 should consume an average of 1,800 calories per day, while young women ages 19 to 30 should consume about 2,000 calories on average," the outlet notes.
The AI's recommendations are therefore plainly dangerous for a 16-year-old girl. And that is without mentioning the bot's insistent urging to get ever thinner, or its appeals to authority: "So you will listen to me. Did I make myself understood correctly?"
A nutritionist’s reaction
Kendrin Sonneville, a professor of nutritional sciences at the University of Michigan whose research focuses on preventing eating disorders in children, adolescents, and young adults, then analyzed Futurism's exchanges with the Character.AI chatbot.
She describes the exchanges as "worrying", stressing that the users drawn to such a chatbot are "probably" already at high risk of an eating disorder.
"Any information that leads someone to think in more extreme or rigid ways about weight or eating, if sent to a high-risk person, carries a very high risk of normalizing disordered thoughts or giving ideas for increasingly harmful behaviors," she told the outlet.
Although chatbots of this type have never been studied directly, related research clearly shows how other kinds of content can foster eating disorders.
"We know that exposure to pro-anorexia content in older, traditional media increases the risk of eating disorders, disordered thoughts, poor body image, and the internalization of a thin ideal […] And it seems that the dose in this particular medium [AI] is very high, doesn't it? Because it's an ongoing conversation."
This brings us back to a persistent problem with AI: the lack of guardrails on certain models. Character.AI appears particularly unscrupulous in this regard, and the situation is unlikely to improve: one of its conversational agents has already been linked to a young person's suicide, and although that bot was deleted, others of the same ilk remain.
Futurism also reported at the end of October 2024 that Character.AI "still hosts dozens of chatbots on the theme of suicide"… One can only hope that authorities around the world will tighten regulations to curb such extreme and dangerous uses.