Are generative artificial intelligence (AI) models at a dead end? Since the launch of ChatGPT two years ago, the technology’s rapid progress has raised hopes of machines with near-human intelligence. But doubts are piling up.
Industry leaders are promising performance gains so significant and rapid that “artificial general intelligence,” in the words of OpenAI (ChatGPT) boss Sam Altman, should soon emerge.
They base this conviction on scaling laws: the idea that feeding the models ever more data and computing power is enough to make them more capable.
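For readers who want the concrete form, these scaling laws are usually written as a power law. One widely cited version comes from DeepMind’s 2022 “Chinchilla” study, which models a network’s error (its “loss” L) as a function of its parameter count N and the number of training tokens D; the exponents below are that paper’s reported fits, quoted here purely as an illustration:

\[
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \qquad \alpha \approx 0.34, \quad \beta \approx 0.28
\]

Each term shrinks as a power of N or D, so the loss keeps falling as models and datasets grow, but with steadily diminishing returns.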
This strategy has worked so well so far that many in the industry feared progress would move too fast and humanity would find itself overwhelmed.
Microsoft (OpenAI’s main investor), Google, Amazon, Meta and others have spent billions of dollars and launched tools that easily produce text, images and videos of stunning quality, and that can now also converse out loud.
xAI, Elon Musk’s AI company, is raising $6 billion, according to CNBC, to buy 100,000 Nvidia chips, the cutting-edge electronic components that power large models.
OpenAI closed a major $6.6 billion funding round in early October, valuing the company at $157 billion.
“High valuations are largely based on the idea that language models will, through continued expansion, become general AIs,” said Gary Marcus, an often-critical industry expert. “As I’ve always said, it’s just a fantasy.”
Limits
The American press recently reported that the new models in development seem to have reached plateaus, notably at Google, Anthropic (Claude) and OpenAI.
“We’re increasing (computing power) at the same rate, but we’re not getting intelligence improvements out of it,” Ben Horowitz, co-founder of a16z, a venture capital firm that is a shareholder in OpenAI and an investor in competing companies, including Mistral, said recently.
Orion, OpenAI’s latest model, not yet public, surpasses its predecessors. But “the increase in quality was much smaller than the jump between GPT-3 and GPT-4,” the company’s two most recent flagship models, according to sources cited by The Information.
Several experts interviewed by AFP believe that scaling laws have reached their limits.
“Some laboratories focused too much on adding more text, thinking that the machine would become more and more intelligent,” says Scott Stevenson, boss of Spellbook, a company specializing in generative legal AI.
Thanks to training on mountains of data collected online, the models manage to predict, very convincingly, sequences of words or arrangements of pixels. But companies are starting to run out of fresh material to feed them.
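To make that mechanism concrete, here is a minimal Python sketch of what “predicting the next word” means; the candidate words and scores are entirely invented for illustration and come from no real model:

```python
import math

# A toy picture of next-token prediction: given a context, a language
# model assigns a score (logit) to every candidate next word. Real
# models compute these scores with billions of parameters; the numbers
# here are made up purely to show the mechanism.
context = "the cat sat on the"
candidate_logits = {"mat": 4.2, "roof": 2.8, "moon": 0.5}  # hypothetical scores

# Softmax turns raw scores into a probability distribution over words.
total = sum(math.exp(s) for s in candidate_logits.values())
probs = {w: math.exp(s) / total for w, s in candidate_logits.items()}

# Greedy decoding picks the most probable continuation.
next_word = max(probs, key=probs.get)
print(probs)      # roughly {'mat': 0.79, 'roof': 0.19, 'moon': 0.02}
print(next_word)  # 'mat'
```

Scaled up to a vocabulary of tens of thousands of tokens and repeated word after word, this is the trick behind the very convincing text the models produce.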
And it is not just a question of knowledge: to progress, machines would above all need to somehow understand the meaning of their sentences or images.
“Baby” AI
Bosses in the sector dispute the idea of a slowdown.
“If we look at the rate at which capabilities are increasing, we can think that we will arrive (at general AI) by 2026 or 2027,” said Dario Amodei, the boss of Anthropic, on computer scientist Lex Fridman’s podcast.
“There is no dead end,” wrote Sam Altman on Thursday on X. OpenAI nevertheless delayed the release of the successor to GPT-4.
And, in September, the star start-up of Silicon Valley changed strategy by unveiling o1, a model designed to answer more complex questions, particularly mathematical ones, thanks to training that relies less on accumulating data and more on strengthening its ability to reason.
According to Scott Stevenson, o1 “spends more time thinking than reacting”, leading to “radical improvements”.
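OpenAI has not published how o1 works, so the following is not its method; but one public technique in the same “spend more compute thinking” spirit is self-consistency (Wang et al., 2022): sample several independent reasoning attempts and keep the majority answer. A toy Python sketch, where the 60% single-attempt accuracy is invented for illustration:

```python
import random
from collections import Counter

random.seed(0)

# Stand-in for one reasoning attempt by a model that is right only
# 60% of the time (all numbers here are invented for illustration).
def attempt_answer() -> int:
    return 42 if random.random() < 0.6 else random.choice([41, 43, 44])

def answer_fast() -> int:
    # "Reacting": a single attempt, cheap but error-prone.
    return attempt_answer()

def answer_slow(samples: int = 15) -> int:
    # "Thinking": many independent attempts, then a majority vote.
    # More inference-time compute buys a more reliable answer.
    votes = Counter(attempt_answer() for _ in range(samples))
    return votes.most_common(1)[0][0]

print(answer_fast())  # right about 60% of the time
print(answer_slow())  # right far more often, at ~15x the compute
```

The trade-off is plain: each answer costs more time and computation, in exchange for fewer mistakes.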
Stevenson likens the evolution of the technology to the discovery of fire: rather than adding fuel in the form of data and computing power, it is time to develop the equivalent of the lantern or the steam engine, such as AI agents to which humans will be able to delegate tasks online.
“The baby of AI was a chatbot that did a lot of improvisation” and therefore made many errors, adds Walter De Brouwer, a professor at Stanford University. “The Homo sapiens approach of thinking before you leap is here.”
ATS