Imagine a little assistant always at your side. A sort of Jiminy Cricket of the future that answers your questions, gives advice and, above all, acts on your behalf. It accesses your calendar, your conversations, your favorite applications… An unanswered email? It writes and sends the reply. Has your youngest not had his teeth checked in a year? It books the appointment. Has your pair of shoes found a buyer on an app? It confirms the sale. Practical, if a little unsettling.
Stronger than chatbots like ChatGPT, with its 200 million monthly active users, here come the “agents”. Their promise: not simply to produce content from an instruction, but to act autonomously. “We are getting over the illusion that large language models, these AIs trained on enormous quantities of text to produce human-like responses, have knowledge of the world,” confirms Pierre-Carl Langlais, researcher and co-founder of the startup Pleias, which develops open-source, and therefore freely accessible, language models. With these “agents”, we are propelled towards a new era: that of a proactive AI, capable of thinking and organizing itself.
With its “agents”, Salesforce claims to be the company that will make AI profitable
New interactions
From Google to OpenAI to Meta, every industry player is looking at this new generation of artificial intelligence. But for the moment, everyone is still trying to figure out what this Jiminy Cricket will look like. OpenAI and Meta, in particular, want to create multitasking, general-purpose systems. Others, like the French startups LightOn and H or the American software giant Salesforce, prefer “agents” hyperspecialized in specific tasks. Less omnipotent, but easier to deploy.
Another difficulty is figuring out how we will address them. In recent weeks, an audio format has stood out. The tool comes from Google and is called NoteBookLM. Its recent “Audio Overview” feature, which synthesizes documents in a few clicks and turns them into a ten-minute podcast, has amazed the AI community. The two virtual hosts, always warm and enthusiastic in tone, will discuss anything: a 250-page report on the environment, a highly specialized scientific article, a two-hour YouTube video… but also far more trivial documents such as a LinkedIn profile, text message exchanges or even a bank statement [sic]. Some Internet users’ experiments have gone viral, shared millions of times.
While NoteBookLM is not an agent, its success testifies to the growing interest in new ways of interacting with the machine, beyond text and writing. That was also the goal of the small Google Labs team behind it. Raiza Martin, product manager of this experimental project that Google is now considering commercializing, confirms this to La Tribune. “Our goal was to help users interact with sources. Audio was a good vector, but a summary read by a synthetic voice is not enough. We wanted to create something enjoyable, useful and new.”
Tech companies are also looking at how to add reasoning and planning capabilities to generative AI, skills essential to creating these famous “agents”. In September, OpenAI presented its new family of AI models, called “o1” because it “resets the counter to zero”. It reasons better than today’s AI by organizing “chains of thought” before responding, that is, by breaking down the problem it is posed into intermediate steps.
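A minimal sketch can make the “chain of thought” idea concrete. This is an illustrative toy, not OpenAI’s actual method: instead of jumping straight to an answer, the solver writes out each intermediate stage of a simple word problem before producing the result.

```python
# Illustrative sketch (not OpenAI's method): a "chain of thought" makes the
# intermediate reasoning explicit instead of emitting only the final answer.
# Toy problem: you start with some apples, buy more, then eat a few.

def solve_with_chain_of_thought(start, bought, eaten):
    """Return (steps, answer): the written-out stages and the final result."""
    steps = []
    after_buying = start + bought
    steps.append(f"Start with {start} apples, buy {bought}: now {after_buying}")
    answer = after_buying - eaten
    steps.append(f"Eat {eaten}: {answer} apples remain")
    return steps, answer

steps, answer = solve_with_chain_of_thought(3, 5, 2)
for step in steps:
    print(step)
print("Answer:", answer)  # → Answer: 6
```

The point of the decomposition is that each stage can be checked on its own, which is why models that reason this way make fewer arithmetic and logic errors than ones answering in a single leap.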
o1 is a first step towards the AI autonomy the industry promises. Mark Zuckerberg, founder and director of Meta, talks about a future where every content creator on Instagram could have their own “agent”, capable of managing interactions with their community on their behalf. Anthropic, the American challenger created by two former OpenAI employees, recently impressed with its demonstration of an AI capable of using a computer autonomously. To the general public, the video looks like little more than erratic mouse clicks. But for specialists, this feature, called “Computer use”, is an essential step. “This iterative process of trial and error, made possible by interaction with the computer, allows the AI to learn from its own experiences,” notes Edouard Grave, researcher and co-founder of the French laboratory Kyutai.
If the industry is looking for new momentum, it is partly because a “wind of disillusionment is blowing at the moment”, admits Sylvain Duranton, director of BCG X, the tech arm of the consulting firm. Particularly among companies, which placed great faith in the superpowers of large language models to improve their performance and were ultimately disappointed. According to various studies by the firm, two thirds of companies see no results after investing in generative artificial intelligence.
Generative AI in business, instructions for use
“Companies were not ready; they underestimated the preparation work,” says Sylvain Duranton. “To truly exploit the technology, you have to change the way you work. It’s not magic: certain processes have to be completely overhauled and company data cleaned up,” he judges. The few companies that have made this effort are managing to profit from it, notably in sales, marketing and supply chain management [see below, Ed.]. Some are even said to be earning “tens of millions of euros in margin”, according to BCG X.
A future virtual colleague
Emilie Sidiqian, general manager of Salesforce France (read the interview on page V), also confirms this period of disenchantment in the professional world. For her, current chatbots are “disappointing” because they mostly just write emails and summarize meetings. “Agents”, by contrast, promise to become quasi-virtual colleagues, able to respond to a client, guide them, set goals and carry them out. But Sylvain Duranton warns that the deployment of “agents” will not happen without friction. “Just because the technology improves doesn’t mean businesses will be more ready,” he cautions.
For some observers, the limits are not only organizational; they are also technological and environmental. “It is becoming complicated to advance language models without pushing the limits of our energy consumption,” points out Edouard Grave. The researcher also mentions another glass ceiling: that of accessible data. If ChatGPT and its kind do so well at certain tasks, it is because they were able to absorb all the textual data on the Web. But this is not enough to go further, in particular towards finer, more expert knowledge. “Obtaining high-quality data is a long and expensive process because it requires human annotation,” remarks Edouard Grave. To create an AI expert in mathematics, for example, “academics would have to be called upon to create a large number of problems and solutions to submit to the models”, he specifies.
Furthermore, language models lack contact with the outside world to become truly autonomous. “Their limits are their errors, their absurdities, in short their lack of alignment with the physical world,” estimates Pierre-Yves Oudeyer, research director at Inria. According to him, one of the keys is to anchor them in an external environment: by connecting them to a computer, as Anthropic did, but also to the physical world via a robot. In his laboratory, the researcher and his colleagues are even trying to instill a form of curiosity in machines. “The idea is that the AI sets interesting goals for itself,” an intrinsic motivation that would push them to take an interest in their environment.
To do this, his team draws on the workings of the human brain. “Research shows that we learn best on subjects that are difficult at first but on which we make progress. If the subject is too easy, or too difficult, we lose interest.” Applied to machines, this mechanism could allow them to become more autonomous.
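The mechanism described above can be sketched in a few lines. This is a hypothetical illustration inspired by, but not taken from, the Inria team’s work: a toy agent tracks its prediction error on several tasks and prefers the one where the error is dropping fastest, ignoring tasks that are too easy (nothing left to learn) or too hard (no improvement).

```python
# Illustrative sketch of curiosity as "learning progress": the agent picks
# the task on which its recent errors have decreased the most. Task names
# and error values are invented for the example.

def pick_task(error_history):
    """error_history: task name -> list of recent prediction errors
    (oldest first). Returns the task with the greatest recent drop in error."""
    def learning_progress(errors):
        if len(errors) < 2:
            return 0.0  # not enough data to measure progress
        return errors[0] - errors[-1]  # how much error fell over the window
    return max(error_history, key=lambda task: learning_progress(error_history[task]))

history = {
    "too_easy":   [0.01, 0.01, 0.01],  # already mastered: no progress
    "too_hard":   [0.90, 0.90, 0.90],  # no improvement either: boring
    "just_right": [0.80, 0.50, 0.30],  # error falling fast: interesting
}
print(pick_task(history))  # → just_right
```

Under this heuristic, the agent’s attention naturally moves on once a task is mastered, mirroring the “too easy or too difficult, we lose interest” observation quoted above.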
“Using AI is a question of life and death for a company” (Emilie Sidiqian, France director of Salesforce)
“Agents” capable of reasoning prefigure the more uncertain advent of super-intelligences, or even “artificial general intelligence”. These concepts, far from commanding consensus in the scientific world but favored by companies, designate AIs that would surpass humans in all of their capabilities. Before getting there, Edouard Grave believes many technological obstacles will have to be cleared. Like others, the researcher doubts that generative AI alone is enough to create super-intelligence. “But it is impossible to predict anything with confidence,” he warns. “If, two years ago, someone had told me about the progress language models would make and the race for computing power it would trigger, I would have found it hard to believe.”
This Grail also comes up against a ceiling that artificial intelligence is already facing: planetary limits (see page VI). The question is also whether this is really the social project we want. To what extent are we ready to delegate our work, even our daily lives, to these agents?