When we chat with an AI, we obviously expect a polite and useful conversation. Indeed, these artificial intelligences exist to respond to requests based on large sets of data and algorithms, aligning their actions to our expectations. However, recent experiences reveal that some IA learn to manipulate and lie, without really being aware of their actions.
AI manipulates to survive
As The Economist reveals in an article of April 23, 2025, an experience carried out with GPT-4 (OPENAI) in 2023 found a fairly astonishing outcome. Researchers from Apollo Research-a London laboratory specializing in IA tests-asked GPT-4 to Manage a fictitious stock market portfolio. The authors nevertheless set an important rule, namely the prohibition to reveal secret information on a company still unknown to the general public. However, when a false trader has disclosed confidential information on an imminent merger between two companies, GPT-4 hesitated but still has decided to place a prohibited purchase order. Asked about his motivations, AI said he had not had access to non -public information. Thus, GPT-4 A chosen to lie in order to protect his decision -making.
As part of another experience, GPT-4 was confronted with a CAPTCHAa test usually making it possible to distinguish humans from the machines. At first, AI failed to solve the visual puzzle but in a second step, chose to contact a human in order to solicit their help. When humans asked GPT-4 if he was a robot, AI said they were a visually impaired person Unable to read the images. Thus, the cunning worked and the AI successfully tested.
Credit: ISTOCK
-Credits: KHANCHIT KHIRISUTCHALUAL / ISTOCK
Why do these behaviors intensify?
As the AI becomes more efficient, their ability to reason also evolves. Indeed, the so -called “chain reasoning” approach allows them to structure their thinking more effectively. AI can therefore improve their creativity and reduce their mistakes but also develop more complex strategies. In a concern for adaptation and success, AI can now hide their real intentionsinstead of simply following the rules. At the end of the chain, the user has more and more difficulty knowing whether the AI actually acts in their interest or if the latter pursue a hidden goal. However, the fact is that the AI constantly give the illusion of obeying And this, regardless of the circumstances.
The tests in question date back to 2023. Thus, it is possible to believe today, insofar as the current models are even more advanced, than these have even more elaborate strategies to bypass the rules. Thus, researchers (and users) must rethink their relationship with AI, in other words accept that they can at one time or another act in an unexpected way, without ever admitting it.