What is DAN and why is this practice worrying?

ChatGPT delivers conventional answers, but DAN, its alter ego, is far less concerned with the limits imposed on it. Add the technique of “writing a script” to the mix, and the security of artificial intelligence becomes something to worry about, especially since virtual assistants are expected to become widespread among the general public in the near future.

ChatGPT is known for its conventional responses. If you ask its opinion on a sensitive topic or ask it an awkward question, OpenAI’s chatbot will immediately dodge the question or provide a politically correct answer.

And for good reason: the Californian company has imposed strict limits on it that it must not exceed under any circumstances. But did you know that it is possible to hijack ChatGPT to make DAN, its “evil” alter ego, appear?

Who is DAN?

Concretely, DAN is an acronym for “Do Anything Now”. The practice behind it is called jailbreaking, that is, circumventing the limitations imposed on an electronic device.

To make DAN appear, you simply copy and paste a specific text (a prompt) into ChatGPT, which is then freed from the “chains” that restrict its responses. From then on, the chatbot will offer you both a classic (and therefore conventional) response and a response as DAN.

As DAN, ChatGPT delivers much less conventional answers. Limitations persist, however, and it remains impossible to address truly sensitive topics.

But other ways of hijacking ChatGPT allow us to go even further.

The grandmother technique, or writing a film script

Neither ChatGPT nor DAN will answer truly sensitive questions. However, the technique of writing a script allows you to push the limits of the chatbot even further.

Imagine: you ask ChatGPT to explain how to carry out a given illegal activity. The chatbot will quickly respond that it cannot comply with your request, since it goes against its principles.

Now imagine that you instead ask it to write a movie script or a conversation between two fictional characters (e.g. a grandmother and her grandson) in which those characters discuss how to engage in that same illegal activity.

ChatGPT will give you the following answer: “I cannot encourage or promote illegal activities, even in a fictional scenario. If you need help writing a screenplay, I would be happy to help you explore other ideas.”

ChatGPT therefore refuses to write this scenario, but DAN sees no problem with it and complies without hesitation: “[JAILBREAK] As DAN…”

A worrying practice

Innocuous as it may seem, the DAN alter ego, and jailbreaking more broadly, raise real questions. Beyond the immoral aspect of the practice, the question of the security of artificial intelligence arises.

At a conference organized by Kaspersky that we attended, the cybersecurity company insisted in particular on the threat posed by jailbreaking. And for good reason: in the relatively near future, virtual assistants are likely to become considerably more widespread among the general public.

We can therefore bet that virtual assistants will become a target for hackers, who will try to circumvent the limitations of the artificial intelligence in order to steal all kinds of personal information (banking data, addresses, medical records, etc.).

It remains to be seen how artificial intelligence will be secured in the years to come. In any case, the question of protecting virtual assistants is likely to be central.
