Behind GPT-Next, the long-awaited second wave of generative AI is about to break

“GPT-Next”: that is what OpenAI is calling its next language model, awaited for over a year. The darling of the AI world has been staging demonstrations of its latest innovations around the globe, including in Paris. The goal: to offer a taste of its upcoming tools, from the Sora video generator to 15-second voice cloning, along with the real-time image recognition capabilities of its voice assistant.

OpenAI is preparing the ground for the second wave of generative AI, expected to break in the second half of the year, a wave it intends to spearhead even though competition will be fiercer than it was at the release of ChatGPT. At stake: the future of a sector that depends on significant, regular progress to deliver on its promises.

A year of technological status quo

In March 2023, OpenAI released GPT-4, the new version of its most powerful artificial intelligence model. It marked a real technological leap over its predecessor, notably through its ability to read and generate images. The problem? A year later, no model has really done better. That is an eternity on the timescale of generative AI, a technology the most optimistic describe as a revolution on the scale of the Internet.

However, while GPT-4 brought out the first use cases of generative AI, its limitations quickly became apparent. Companies are struggling to deploy it at industrial scale, for various reasons: sometimes the AI is not effective enough, sometimes it costs too much. Most often, managing hallucinations [situations where the AI presents invented facts as truth – Ed.] proves too laborious for generative AI to offer an obvious return on investment, despite the many productivity-gain figures put forward by the tech giants.

Granted, GPT-4 received some marginal improvements throughout the year, including a facelift under the name “GPT-4 Turbo.” But the next step, the new technological leap, has been slow to arrive. The inability of the number one in the AI race to release its next model has allowed the competition to catch up. Models from Google, Meta, Anthropic, Mistral and Cohere have approached or equaled the performance of GPT-4. Yet none has surpassed it. Enough to raise initial concerns: what if the AI power curve is not exponential, as the market anticipated? And what if performance has already plateaued (https://twitter.com/bindureddy/status/1793967098412388770)?

GPT-Next to the rescue

In this context, OpenAI’s communication around GPT-Next is timely. Sam Altman, the iconic co-founder and CEO of OpenAI, has mentioned the development of GPT-5 at regular intervals since November 2023, but has remained stingy with details. Now, the darling of AI officially promises a new “technological leap” between GPT-4 and its next model, and above all, it has committed to releasing it “later in the year.”

For his part, Microsoft CTO Kevin Scott compared the supercomputer [built by Microsoft] on which GPT-Next is being trained to a “whale,” whereas GPT-3 had been trained on “a shark” and GPT-4 on “an orca.” Behind this comparison of marine animals lies a message: GPT-Next requires far more computing power than its predecessor because it will likely be much larger and more capable. Enough to open up new use cases?

On the other hand, unlike in 2023, OpenAI is no longer in a quasi-monopoly situation. Meta plans to release the most powerful version of its Llama 3 model this summer, and Anthropic also promises a technological leap before the end of the year for its Claude model.

As for Google, it now manages to field a competitor to each OpenAI model, and there is no doubt it is working on the next version of its Gemini model. In other words, the performance race has never been so open, and for the first time, OpenAI’s role as spearhead is back in play. The second wave of generative AI should be far more condensed than the first.

While waiting for the next generation of models to be unveiled, OpenAI and Google have focused on optimizing the current generation. Announced last week, the GPT-4o and Gemini Flash models cost significantly less and generate their results much faster than the original versions of GPT-4 and Gemini. This bodes well for industrial uses, where the cost of operation remains one of the main barriers to adoption.

An entire ecosystem embedded in the race for performance

Although the race for performance is only one of the many challenges of the generative AI revolution, it weighs heavily on the sector’s prospects. The exceptional financial health of Nvidia, which beats already very favorable forecasts every quarter, is the best example of this.

Tech giants, cutting-edge startups and even some states continue to buy colossal volumes of Nvidia processors – essential today for training AI – because they believe they can keep doing better with ever more compute. Another example: earlier this week, Scale AI, a startup central to the labeling of AI training data, raised another $1 billion at a valuation close to $14 billion.

If investment continues to flow freely, it is also because AI developers do not intend to stop at this second stage. OpenAI, Google and Anthropic clearly state that they aim to develop an AGI [artificial general intelligence – Ed.], an AI capable of matching or even surpassing the capabilities of the human brain. An ideal that even the most optimistic do not see materializing for another five to ten years.

But before getting there, many adjustments remain to be made: AI must become more powerful, but also more precise and more reliable. The first wave of generative AI features suffers from too many shortcomings, and every month brings its share of hiccups.

Latest example: Google’s new AI feature, which answers users’ searches with a generated paragraph, suggested putting glue on pizza to keep the cheese from sliding off. In other words, if the first wave of generative AI promised a bright future, it will be up to the next waves to make it a reality. And that, starting in the second half of 2024.
