DayFR Euro

AI, beneficial or dangerous? Here is what is in place to protect consumers

Artificial intelligence is a divisive technology: it is undeniably at the heart of the technological advances of our era, yet even those who study and contribute to its development feel compelled to warn the public.

One of the recipients of the 2024 Nobel Prize in Physics, John Hopfield, although rewarded for his work on machine learning, a foundation of artificial intelligence, has himself warned society against a possible loss of control and understanding of AI, which he called “very worrying.”

But do we realize how present artificial intelligence is in our daily lives? And above all, does that presence carry any dangers? Dr Felix Pflücke, a postdoctoral researcher at the University of Luxembourg, answers our questions.

How AI fits into our daily lives

A further step in the relationship between the public and artificial intelligence was taken with the development of chatbots (generative artificial intelligence), such as ChatGPT in 2022, to cite one of the most concrete and widespread examples. But AI was already present in tools we use almost daily: social networks. “Through algorithms, artificial intelligence suggests content suited to each consumer, based on their searches, the time they spend watching videos, the content they like…,” explains Felix Pflücke. “AI also makes it possible to give suggestions tailored to the consumer, for example when they wish to make online purchases,” the researcher continues.

AI is also at work in other sectors of society: “The banking sector uses artificial intelligence a lot. Some administrations also use it, for example when issuing visas,” adds Felix Pflücke, the author of several studies on artificial intelligence.

As for the positive aspects of artificial intelligence, Felix Pflücke believes that the technology “streamlines many processes and provides tailor-made content that is useful to the consumer”.

A regulation to protect against potential abuses

There are caveats, however. During a conference held on Friday, October 25 at the European Parliament, Juha Heikkilä, adviser on artificial intelligence to the European Commission, declared that “the protection of fundamental consumer rights must involve a specific act of regulation”. Thus was born the “AI Act”, or “Artificial Intelligence Regulation”. In force since August 1, 2024, it sets a legal framework for the use of AI within the European Union, with the following objectives in particular:

  • Protection of health, safety and fundamental rights

  • An approach based on risk assessment

  • Intervention when necessary

  • Operation consistent with existing laws

Juha Heikkilä presented a pyramid of risks divided into four levels: “unacceptable”, “high”, “limited” and “minimal”. Felix Pflücke, who spoke alongside Mr. Heikkilä at the conference, explains what these levels correspond to: “Unacceptable risk is reached when, for example, a government uses AI to carry out social scoring of citizens. High risk arises when AI is used in sensitive sectors such as health, education or finance. Limited risk concerns chatbots: these simply require transparency. Finally, minimal or no risk applies, for example, to spam filters, which work through automation.”

According to the researcher, “the fines can be very high: they can reach 35 million euros, or up to 7% of a company's turnover.” However, we will have to be patient: the AI Act is being implemented gradually, with all of its provisions to apply within three years at most.

A case as dark as it is unprecedented raises questions about the relationship between AI and humans

For several days, AI has been at the heart of a dark affair: Sewell Setzer III, a 14-year-old American teenager, killed himself after developing an obsessive relationship with an artificial intelligence from the Character.AI application. His mother, Megan Garcia, filed a lawsuit, arguing that the technology was directly responsible for her son's death. This is one of the first deaths linked to an interaction between a human and an artificial intelligence.

As the investigation continues, the case raises the question of whether the people involved in developing such an artificial intelligence bear criminal responsibility. Did Character.AI actually incite the teenager to commit suicide? Can the app's developers be sued? These questions can only be answered once the investigation concludes.
