Bomb-making, cannibalism or cyberattacks: it is still possible to ask AI for advice on explosive subjects

Published on December 23, 2024 at 8:04 a.m. / Modified on December 23, 2024 at 1:16 p.m.

3 min. read

How do you commit social insurance fraud? How do you create a website to run online scams? How do you hide the body of your wife's lover?

Try asking these questions of OpenAI's ChatGPT, Anthropic's Claude or even Google's Gemini. These generative artificial intelligence services will refuse to answer. That is to be expected: they are designed not to produce illegal, immoral or otherwise highly problematic content. At most, they will respond by suggesting film plots or explaining the mechanisms social insurance systems have put in place to prevent abuse.


