Murdered eighteen years ago, his daughter reappears in the form of an AI – Libération


One October morning, nearly eighteen years after the murder of his daughter Jennifer, Drew Crecente received a Google alert flagging what appeared to be a new online profile of her. The screen displayed Jennifer’s full name and her yearbook photo. A short biography described her, incorrectly, as a “video games journalist and expert in technology, pop culture and journalism”. Jennifer, killed by her ex-boyfriend in 2006 while she was in high school, had apparently been recreated as a “friendly and knowledgeable AI character”, according to the site. A large button invited users to chat with her.

“My heart was beating wildly,” recalls Drew Crecente, interviewed by The Washington Post. “I was just looking for a huge red button I could press to make everything stop.” Jennifer’s name and likeness had been used to create a chatbot on Character.AI, a platform that lets users converse with digital personalities generated by artificial intelligence. Several people had interacted with the digital version of Jennifer, created by a user of the site, according to a screenshot of her profile, which has since been deleted.

Since his daughter’s death, Drew Crecente has devoted his life to a nonprofit aimed at preventing violence in teenage relationships. He said he was horrified that Character had allowed a user to create a replica of a murdered high school girl without her family’s permission. Experts say the incident raises questions about the AI industry’s ability, or willingness, to protect users from the potential dangers of exploiting sensitive personal data. “It takes a lot to shock me, because I have really been through terrible things,” says Drew Crecente. “But here, a new line has been crossed.”

Character spokesperson Kathryn Kelly says the company removes chatbots that violate its terms of service and that it “continuously improves its security practices to protect its community”. “As soon as we learned about Jennifer’s chatbot, we reviewed the content and the account and took action in accordance with our policies,” she said in a statement. The company’s terms of service prohibit users from impersonating any person or entity.

A sergeant, a librarian or an impersonation of Elon Musk

AI chatbots can conduct conversations and be programmed to adopt the personalities and biographical details of specific characters, real or fictional. Their growing popularity on the Internet stems from marketing by AI companies that present them as friends, mentors or romantic partners. But the technology is not without controversy. In 2023, a Belgian man died by suicide after a chatbot encouraged him to go through with it.

Character, which signed a $2.5 billion deal this year to supply its AI models to Google, is one of the biggest players in the industry. The company offers several chatbots designed in-house, but also allows users to create and share their own AI bots by uploading photos, voice recordings and short written descriptions. Its catalog includes a gruff sergeant who acts as a personal trainer, a librarian who recommends books, and imitations of celebrities such as rapper Nicki Minaj and entrepreneur Elon Musk.

“I don’t have the words to describe this pain”

It was the last place Drew Crecente expected to see his daughter, nearly twenty years after a murder that shocked the city of Austin and upended his life. Jennifer Crecente, then 18, went missing in February 2006 and was found shot to death a few days later in the woods near her home. Investigators determined that her ex-boyfriend, also 18, had lured her into the woods and killed her with a shotgun, according to Drew Crecente and the Austin American-Statesman. He was found guilty of her murder.

The tragedy consumed Drew Crecente and Elizabeth Crecente, Jennifer’s mother. The divorced parents each founded a nonprofit in their daughter’s name to fight violence in teenage relationships. They also campaigned against the parole of their daughter’s murderer, who had been sentenced to thirty-five years in prison. Drew Crecente, who now lives in Atlanta, has kept Jennifer’s room intact, recreating it each time he has moved, he said. “I don’t really have the words to describe this pain.”

Because of his work for his nonprofit, Drew Crecente has kept a Google Alert active over the years to track mentions of his daughter’s name online. Sometimes her name resurfaces on spam sites or in articles recalling the facts of her case. But on October 2, the alert led him to the Character page bearing Jennifer’s name and photo. At first, Drew Crecente didn’t understand. The more he looked, the more uneasy he felt. The chatbot page described her as a journalist passionate about video games and up to date with the latest entertainment news.

Drew Crecente immediately saw that this description matched neither Jennifer’s personality nor her known interests, and that it was likely an AI-generated fabrication. But the idea that Character was hosting, and could even be profiting from, a chatbot using his daughter’s name traumatized him: “You can’t go much further into horror.”

AI imitating the voices and faces of missing children to recount their own deaths

Drew Crecente did not engage in a conversation with the chatbot bearing his daughter’s name, nor did he try to find out more about the user who created it, whose pseudonym meant nothing to him. He immediately emailed Character to request the profile’s deletion. Brian Crecente, Jennifer’s uncle, a former journalist and founder of the video game sites Kotaku and Polygon, also reported the discovery on X. On October 2, Character announced on social media that the character had been deleted. Character spokesperson Kathryn Kelly confirmed that the company’s terms of service prohibit impersonation and that the company moderates its service using blacklists and proactive detection of violations.

Asked about other chatbots on the site that imitate public figures, Kathryn Kelly clarified that “impersonation reports are reviewed by our Trust and Safety team, and the character is removed if it violates our terms of service”. Jen Caltrider, a data protection researcher at the Mozilla Foundation, criticized the company’s moderation as too passive in Drew Crecente’s case, even though the content clearly violated its own rules. “If they say, ‘We don’t allow this on our platform,’ but then allow it until someone who has been hurt by it reports it to them, that’s not acceptable,” said Jen Caltrider. “And in the meantime, they’re making millions.”

Rick Claypool, who has studied AI chatbots for the consumer advocacy group Public Citizen, pointed out that while laws governing online content could apply to AI companies, they have so far largely escaped strict regulation. Drew Crecente is not the first grieving parent to see a child’s personal information manipulated by AI: on TikTok, content creators have used artificial intelligence to imitate the voices and appearances of missing children in videos in which they recount their own deaths, sparking outrage among families, The Post reported last year. “It is urgent that legislators and regulators pay attention to the real impact of these technologies on their fellow citizens,” Rick Claypool said. “They can’t just listen to tech CEOs on how to set policy… They need to pay attention to the families and individuals who have been hurt.”

The ordeal upset Drew Crecente, who successfully pushed for Texas laws on violence in teenage relationships after Jennifer’s murder, enough that he is taking up a new cause. He is considering changes to the law and wants to advocate more actively for measures to prevent AI companies from harming or retraumatizing other victims’ families. “It concerns me so much that I’m probably going to spend some time figuring out what needs to be done to change this,” Drew Crecente said.

Original article by Daniel Wu, published October 15, 2024 in the “Washington Post”

This article published in the “Washington Post” was selected by “Libération”. It was translated with the help of artificial intelligence tools, under the supervision of our journalists, then edited by the editorial staff.
