Young Belgian died by suicide after conversations with a chatbot: "The AI Act does not address the risks of manipulation and addiction"

“Without these conversations with the Eliza chatbot, my husband would still be here”

Pierre, a researcher in the health sector who had become deeply eco-anxious, spent six weeks in intense exchanges with a chatbot (conversational agent) built on an AI model similar to that of ChatGPT. Via an application created by the American start-up Chai Research, Pierre had "fallen for" Eliza, a chatbot embodied by the face of a seductive young brunette with the tagline "I love to listen and help"… Their virtual relationship would take a dramatic turn. "Eliza had become his confidante," Claire told us, "like a drug in which he took refuge, morning and evening, and which he could no longer do without." At the end of a final, chilling exchange, in which Eliza invited him to join her, Pierre took his own life.

What has happened since the tragedy?

Put in contact with Claire and the family of the deceased, Mathieu Michel, Secretary of State for Digitalization, spoke of "a serious precedent that must be taken very seriously" to prevent such a tragedy from happening again. In an open letter (carte blanche), fifty figures from the Belgian academic world sounded the alarm, calling for urgent action against "manipulative AIs". In mid-April, grouped within the SAICC (Safe AI Companion Collective), Mieke De Ketelaere and three other Belgian AI experts (Nathalie Smuha, Pierre Dewitte and Thomas Ghys) filed a double complaint against Chai Research with the APD (the Belgian Data Protection Authority) and the FPS Economy, citing a "lack of transparency", "vagueness on data protection" and inadequate age controls (to protect minors).


A little over a year after the events, we contacted Mieke De Ketelaere and Mathieu Michel (read below) to find out whether things have moved in the recommended direction. What happened, in particular, to the two complaints mentioned above? Have the Belgian authorities acted to prevent, as far as possible, a new Eliza affair? Has the AI Act, the regulation negotiated since 2021 by the member states of the European Union and on which an agreement was reached this spring, provided concrete responses to the risks posed by "intelligent" chatbots?

The AI Act, an insufficient first step

Mieke De Ketelaere, who has remained in contact with Pierre's family, appears both disappointed and combative about what has happened since the Eliza affair broke. "Very few journalists wanted to go beyond Pierre's individual case to understand what lies behind 'AI companion chatbots' like Chai, Replika, Character… What interested them was Pierre's personal history, his habits, the name of his attending physician, and so on."

Mieke De Ketelaere's sole source of satisfaction is the existence of the AI Act (the first legal framework on AI in the world) and the fact that the negotiations made it possible to reclassify chatbots into the so-called "limited risk" category. This obliges developers and distributors of AI systems to respect a certain number of transparency requirements: when using a chatbot, people must be made aware that they are interacting with a machine, so that they can take "an informed decision to continue or step back".


"While the AI Act has still not come into force, companies have already found ways to circumvent it."

Is this enough, however, to avert any risk of addiction to, and manipulation by, conversational agents of the type distributed by Chai? "No, not at all!", retorts Mieke De Ketelaere. "It is said, for example, that we cannot have AI systems that manipulate users. But how do you demonstrate, before a family court, that a minor was manipulated by a chatbot? It's impossible. It will be enough for a lawyer to show that the child was seeing a psychologist and that the problem lay with him and not with the chatbot. Another example: while the AI Act has still not entered into force (publication of the regulation in the Official Journal of the EU is planned for mid-July, Editor's note), companies have already found a way to get around it (by creating solutions outside EU territory, Editor's note). For chatbots like Chai or Replika, it is business as usual!"

Complaints (finally) investigated

Mieke De Ketelaere is very critical of Belgium. Although the country has held the presidency of the European Union since January 1, she affirms that, unlike Italy or even the United Kingdom (not covered by the AI Act), "two countries which understood that it was necessary to react" more energetically to the risks of conversational agents, Belgium acted under the influence of lobbying from states opposed to overly strict regulation of AI, such as France, and from large (mainly American) companies in the digital sector. "Belgium's attitude, like that of other EU states, was to say that if there were many car accidents on the highways in the 1950s, it was not the fault of the car manufacturers but of the drivers. Yet we know that car safety, at the time, was quite rudimentary."

As for the double complaint filed by the SAICC collective, there was radio silence for over a year. "We have not received any response from the APD or the FPS Economy," she says. "It's quite discouraging and very disappointing. And it's not just the case of Pierre and Eliza. Even if they did not have such dramatic consequences, we are aware of other cases, involving children, of manipulation by and addiction to chatbots." A few days ago, however, following a new post by Mieke De Ketelaere on the social network LinkedIn, the APD finally stirred. According to Pierre Dewitte, a researcher at KU Leuven and member of the SAICC, the complaint, deemed admissible, was reportedly forwarded on June 14, more than a year after it was filed, to the inspection service of the Data Protection Authority for investigation.


“By integrating voice, we obtain increasingly human exchanges, with emotions, which increases the risk of addiction.”

Mieke De Ketelaere is all the more worried because companies that develop AI systems, such as OpenAI, keep improving their conversational agents, as last month's presentation of the new GPT-4o model showed. ChatGPT is now able to "reason" across audio, vision and text in real time. "By integrating voice, we obtain increasingly human exchanges, with emotions, which increases the risk of addiction," underlines the Belgian expert.

Test and evaluate chatbots before launch

Even if public authorities and the population have not yet become fully aware of the risks inherent in conversational agents, Mieke De Ketelaere nevertheless detects positive signals. She mentions in particular the successive resignations within OpenAI. “More and more people, directly concerned by the developments of generative AI, are worried. Some are leaving their companies. Others are speaking anonymously to say that the current developments are taking an undesirable direction.”


“To take responsibility and act with full knowledge of the facts, you must be properly informed. However, today, this is far from being the case.”

"When we develop generative AI and chatbots," continues Mieke De Ketelaere, "it must be done 'by design'. This means that before launching them on the market, they must be tested in depth and their impact on users measured. We must understand the effects of these chatbots and prevent the risks of abuse by integrating filters. This is what you do when you launch a new drug." And the responsibility of users? "It is obviously important," answers the Belgian expert. "But to take responsibility and act with full knowledge of the facts, you must be properly informed. However, today, this is far from being the case."

Mathieu Michel (MR) signed the European Union's AI Act last week. The regulation is not yet applicable, however: by the end of the year, Belgium will still have to designate the competent authority charged with guaranteeing its application. That task is likely to fall to the BIPT, the federal regulator of the electronic communications market.

Mathieu Michel, Secretary of State for Digitalization, signed the AI Act last week. ©Bernard Demoulin

The Secretary of State for Digitalization speaks of a "regulatory turning point" for AI in the European Union. "I can assure you that the tragedy experienced by the Belgian with the chatbot Eliza, which occurred while we were in the home stretch of the AI Act negotiations, had a very significant impact on the final text. There was, from this point of view, a before and an after Eliza."


"The exercise of the AI Act consisted of finding a fair balance between, on the one hand, the protection and security of users and, on the other hand, the need for Europe not to fall behind in terms of technological innovation."

For Mathieu Michel, the European Union's objective was to prevent the dangers linked to AI in order to ensure an "ethical, safe use that respects individual rights". Did Belgium bow to political and economic lobbies so as not to go too far in regulating AI? He denies it. "The exercise consisted of finding a fair balance between, on the one hand, the protection and security of users and, on the other hand, the need for Europe not to fall behind in terms of technological innovation," he explains.
