AI: “The danger comes from its use, not its intelligence.”

There are growing fears about artificial intelligence (AI), fueled by alarmist predictions of the advent of a superintelligence surpassing humanity. Yet the history of AI is littered with similar predictions that have never come true. The real danger is not an autonomous and all-powerful AI, but how humans use and exploit it.

From unintentional abuse to intentional manipulation

AI, imperfect as it is, can be a source of considerable errors if we blindly trust its results. Lawyers have been sanctioned for using ChatGPT to produce erroneous court filings, as AI tends to invent information. The misuse of AI can also be intentional, as evidenced by the proliferation of deepfakes, hyperrealistic fake images. Despite the safeguards put in place, it is increasingly difficult to distinguish true from false, opening the way to manipulation and disinformation.

AI can be used to deny reality and sow doubt. This phenomenon, referred to as the “liar’s dividend”, allows individuals or organizations to reject accusations by claiming that the evidence is falsified. Recent examples illustrate this worrying trend: Tesla raised the possibility of a deepfake to counter accusations concerning Autopilot safety, and an Indian politician claimed that audio recordings implicating him were fake.

AI, a marketing tool for questionable products

The hype around AI is being exploited by some companies to promote products with unfounded promises. AI tools are presented as revolutionary, even though they rely on superficial correlations and can be easily deceived. This is the case with certain recruitment systems that claim to predict a candidate’s suitability for a position by analyzing videos, but which can be swayed by superficial elements such as wearing glasses.

AI and the risk of discrimination

The use of AI in sensitive areas such as health, education, finance or criminal justice raises crucial ethical questions. Biased algorithms can deprive individuals of essential opportunities, or even falsely accuse them. In the Netherlands, an algorithm used to detect childcare benefit fraud led to thousands of parents being unfairly accused.

In 2025, the risks linked to AI will not come from a hypothetical superintelligence, but from how humans use it. There is an urgent need to examine the ethical implications of AI and put in place safeguards to prevent abuse and discrimination. Rather than being distracted by science fiction disaster scenarios, it is crucial to focus on the concrete problems posed by AI and find solutions that guarantee responsible and fair use of this technology.
