OpenAI launches o1, an AI model that can “reason”

OpenAI, the creator of ChatGPT, launched o1 on Thursday, a new kind of generative artificial intelligence (AI) model capable of reasoning and answering more complex questions, particularly mathematical ones, hoping to reduce the risk of hallucinations.

“o1 thinks before it responds,” OpenAI said in a statement published online. The company is thus progressing towards its goal of developing a “general” AI, that is, an artificial intelligence similar to that of humans.

Sam Altman, the head of OpenAI, congratulated his teams on X for this “new paradigm: an AI capable of reasoning in a complex and general way”.

He said, however, that the technology “is still imperfect, still limited, and it seems more impressive the first time you use it than after spending more time with it.”

The beta version of o1 was made available on Thursday, initially for paying ChatGPT users.

Tested by AFP on simple logic questions, o1 achieved the same results as GPT-4o, but took more time and detailed its reasoning, instead of generating an answer almost instantly.

Another difference is that the new model is currently not capable of processing or generating content other than text.

– Investments –

The launch comes as OpenAI seeks to raise funds that could value it at around $150 billion, making it one of the most valuable private companies in the world, according to US media reports.

Investors include Microsoft and chip giant Nvidia. Other names that have been circulating in the press include Apple, which is already using the startup’s technology in its new generative AI system, private equity firm Thrive Capital, and MGX, an investment fund backed by the United Arab Emirates.

With ChatGPT, OpenAI launched the wave of generative AI (production of content based on a simple query in everyday language) at the end of 2022 and became the star of Silicon Valley.

Since then, from its main investor Microsoft to Google and Meta (Facebook, Instagram), all the major technology groups have been competing with tools that are supposed to help humans in their daily lives, from writing messages to education and artistic creation.

But these “AI assistants” remain machines that predict – very convincingly – sequences of words or arrangements of pixels, without consciousness and therefore without understanding their sentences or their images.

Trained on mountains of data collected online, they can hallucinate (invent facts) and struggle to solve simple mathematical problems.

– “Correct your mistakes” –

OpenAI tested its new model on solving mathematical problems or producing lines of code, and “in many benchmark tests that require strong reasoning ability, o1 rivals the performance of human experts,” the company said.

In a math competition for American high school students, o1 placed “among the top 500 students,” the company added.

“Like a human being who might think long and hard before answering a difficult question, o1 uses a chain of thought,” OpenAI explains.

“It learns to recognize and correct its mistakes. It learns to break down tricky steps into simpler ones. It learns to try a different approach when the current approach doesn’t work.”

While the new model “hallucinates less,” “we can’t say we’ve solved hallucinations,” admitted Jerry Tworek, an OpenAI researcher interviewed by The Verge.

Another problem with existing models is that they function as black boxes for users.

The startup believes that the new model represents an improvement in terms of safety and alignment with human values, because its reasoning becomes “readable” and it better applies safety rules.
