In recent years, progress in artificial intelligence has pointed toward a genuine revolution in the uses to which these technologies are put.
The most significant future development in artificial intelligence is the transition from AI tools to autonomous agents.
Concretely, AI tools are software applications that use artificial intelligence algorithms to perform tasks traditionally carried out by humans: natural language processing, computer vision, machine learning, or robotic process automation, for example. What characterizes these tools is that they use data to carry out predefined tasks in an optimized way.
In the months and years to come, these AI tools will be supplanted by what we call autonomous agents. An autonomous agent is a combination of AI tools that collects data on its own in order to operate both optimally and autonomously. Integrating different AI tools into a single system makes it possible to address more complex tasks. Where an AI tool optimizes the specific task it was trained for, an autonomous agent breaks a complex problem down into several stages, on its own, in order to propose the actions needed to resolve it, and it can revise its approach when necessary based on the intermediate results obtained by “dissecting” the problem.
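To make this decompose-then-revise behavior concrete, here is a minimal sketch of such a loop in Python. It is only an illustration under stated assumptions: the helpers `call_llm`, `decompose`, and `execute` are hypothetical placeholders standing in for a language-model call, not the API of any existing product.

```python
# Minimal sketch of an agent's plan-and-revise loop. The helpers
# call_llm, decompose, and execute are hypothetical placeholders,
# not the API of any existing library.

def call_llm(prompt: str) -> str:
    """Placeholder: in practice, this would call a language model."""
    raise NotImplementedError

def decompose(problem: str) -> list[str]:
    """Ask the model to break the problem into ordered sub-tasks."""
    plan = call_llm(f"Break this problem into numbered steps:\n{problem}")
    return [line.strip() for line in plan.splitlines() if line.strip()]

def execute(step: str) -> str:
    """Carry out one sub-task and return its intermediate result."""
    return call_llm(f"Perform this step and report the result:\n{step}")

def solve(problem: str, max_revisions: int = 3) -> list[str]:
    """Plan, execute step by step, and re-plan when a step fails."""
    steps = decompose(problem)
    results: list[str] = []
    for _ in range(max_revisions):
        for step in steps:
            outcome = execute(step)
            results.append(outcome)
            if "FAILED" in outcome:  # an intermediate result signals trouble
                # Revise the plan in light of what the failed step revealed.
                steps = decompose(f"{problem}\nA previous attempt failed at: {step}")
                break
        else:
            return results           # every step succeeded
    return results
```

The essential feature is the feedback edge: intermediate results flow back into planning, which is exactly what separates an autonomous agent from a tool executing a single predefined task.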
The coming advent of autonomous agents will therefore multiply the potential of artificial intelligence for its users: the ability of autonomous agents to make their own decisions about how to resolve a problem completely changes how we understand the role artificial intelligence can play in daily life and in the functioning of businesses.
These new capabilities also raise security issues. Autonomous agents can browse the internet and take control of computers; they also have their own memory and access to large language models, which allows them to analyze and synthesize large amounts of text.
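Schematically, the capabilities listed above can be pictured as the components of a single object: a model, a persistent memory, and a set of tools. The sketch below is purely illustrative; every name in it is a hypothetical assumption, not taken from any existing framework.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    model: Callable[[str], str]                       # access to a large language model
    memory: list[str] = field(default_factory=list)   # the agent's own memory
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def act(self, instruction: str) -> str:
        """Run one instruction and remember its outcome."""
        context = "\n".join(self.memory[-10:])        # recall recent history
        decision = self.model(f"{context}\n{instruction}")
        # A decision such as "browse: https://..." is routed to a tool.
        # Every such route (web browsing, computer control, ...) is also
        # an entry point that an intrusion attempt could target.
        name = decision.split(":", 1)[0].strip()
        tool = self.tools.get(name)
        result = tool(decision) if tool else decision
        self.memory.append(result)
        return result
```

A concrete agent would register capabilities such as a browser or a shell under `tools`, and it is precisely this routing of model decisions to real-world actions that widens the attack surface discussed below.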
One concern raised by the development of autonomous AI agents is the motivation behind their decision-making, and the model's ability to stay aligned with what its operator requests. This issue of alignment, or rather misalignment, looms ever larger in AI security as artificial intelligence models operate more autonomously.
Fears that AI models might use unethical methods or practice deception to achieve their assigned objectives have already materialized in some models. The unpredictability of their behavior is another concern, especially for AI applications that are allowed to operate autonomously.
The risks of cyberattack are also much greater with autonomous artificial intelligence agents, precisely because of their automation and their connection to the internet. They are far more exposed to intrusion attempts, which can result in the modification of their behavior, the theft of private or sensitive data, or their use to commit malicious acts. In industry, for example, it is easy to imagine how incompatible such risks are with the criticality of certain applications.
These risks, and the list does not end there, require regulators to take a broad look at the security standards to be put in place around these increasingly autonomous AI applications, to ensure that they do not cross a red line, whether in terms of ethics, loss of control, or behavior.