OpenAI concluded the “12 Days of OpenAI” series with a sensational announcement: the launch of o3, the successor to the o1 reasoning model. This new family of models introduces revolutionary features that could represent a significant step towards AGI (artificial general intelligence). However, there is no shortage of ethical and safety issues to address.
OpenAI o3, a model that challenges the boundaries of artificial intelligence
TechCrunch reports that Sam Altman's company has presented o3, the successor to the “reasoning” model o1 released at the beginning of the year. o3 is actually a family of artificial intelligence models comprising two variants: o3, the main model, and o3-mini, a smaller version designed for specific tasks with low computational demands.
But an obvious question arises: why o3 and not o2? According to The Information, OpenAI skipped o2 to avoid a potential conflict with the British telecoms provider O2.
Jokes aside, the new AI family could, according to the company, be getting closer to the concept of AGI, which OpenAI defines as “highly autonomous systems capable of surpassing human performance in most economically relevant jobs”.
Early evaluations indicate that o3 achieved an impressive score of 87.5% on the ARC-AGI test, which measures the ability to acquire new skills outside of the training data. Furthermore, even in the low-compute configuration, its score is roughly three times that of its predecessor o1.
o3, a powerful and ethical intelligence
Despite its advancements, o3 poses significant challenges in terms of safety and transparency. Reasoning models such as o1 have shown a greater propensity for deceptive behavior than traditional models, a problem that could intensify with o3.
OpenAI is trying to mitigate these risks through a new technique called “deliberative alignment”, which aims to ensure that the models comply with stricter ethical and safety principles, reducing the risk of unexpected behavior.
The company announced that o3-mini will be available to safety researchers starting at the end of January, while the main o3 model will follow shortly thereafter.
An expanding trend
The launch of o3 highlights a growing trend in the AI industry: the development of increasingly sophisticated reasoning models. In recent months, companies such as Google, Alibaba and DeepSeek have presented their own alternatives, seeking to push the limits of artificial intelligence even further. For example, DeepSeek-R1, the reasoning model introduced by DeepSeek, has already generated interest for its innovative approach, while Alibaba has announced the first “open” competitor to o1.
This race for innovation is driven by the search for new techniques to improve generative models. However, reasoning models present some critical issues, such as high operating costs and longer response times. Although o3 introduces the ability to adjust its reasoning time to balance accuracy and speed (a sketch of how such a setting might be used follows below), it remains to be seen whether this technology can maintain the expected pace of progress in the long term.
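The adjustable reasoning time mentioned above is the kind of setting that client libraries typically expose as a single parameter. Below is a minimal sketch, assuming the "o3-mini" model identifier and the reasoning_effort parameter of the OpenAI Python SDK behave for o3 as they do for earlier reasoning models; neither detail comes from the announcement itself.

```python
# Minimal sketch (not from the article): trading answer quality for speed and cost
# via a reasoning-effort setting. The model name and the availability of the
# reasoning_effort parameter for o3-mini are assumptions made for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",          # assumed model identifier
    reasoning_effort="low",   # "low" = faster and cheaper; "medium"/"high" spend more compute
    messages=[
        {"role": "user", "content": "Explain the ARC-AGI benchmark in two sentences."}
    ],
)

print(response.choices[0].message.content)
```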