OpenAI's efforts to develop its GPT-5 model are encountering difficulties and delays.
TL;DR
- OpenAI's GPT-5 is behind schedule and expensive to train, and has not yet met expectations.
- Despite improvements over its predecessors, the gains so far do not justify the investment required.
- OpenAI is compensating with new data strategies, such as hiring experts to write training data and generating synthetic data with its o1 model.
Delays in development
Development of GPT-5, code-named Orion, is running well behind OpenAI's initial forecasts. The company intended the model to be a major breakthrough, but after 18 months of work the results achieved so far do not justify the high costs of its creation, raising questions about the project's future.
High training costs
OpenAI has run several rounds of training for GPT-5, two of which proved particularly expensive and slow. The first round took longer than expected, suggesting that subsequent iterations would be even more time-consuming and costly. These drawn-out training runs underline the scale of the technical challenge GPT-5 represents and cast doubt on the model's viability at such cost.
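To make the scale of such a run concrete, here is a purely hypothetical back-of-envelope calculation. None of these figures come from the article or from OpenAI; they only illustrate how accelerator count and run length multiply into very large costs.

```python
# Purely illustrative arithmetic for why a months-long frontier training run
# is so expensive. Every number below is an assumption made for this sketch;
# the article reports no actual figures.
GPUS = 25_000                 # assumed accelerator count for one run
COST_PER_GPU_HOUR = 2.50      # assumed all-in $/GPU-hour (hardware + power)
RUN_MONTHS = 6                # assumed duration of one training round
HOURS_PER_MONTH = 730         # average hours in a calendar month

run_cost = GPUS * COST_PER_GPU_HOUR * RUN_MONTHS * HOURS_PER_MONTH
print(f"One run: ${run_cost:,.0f}")  # ~$273,750,000 under these assumptions
```

Under these assumed inputs, a single six-month run lands in the hundreds of millions of dollars, which is why a training round that merely runs longer than planned is itself a major financial event.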
Improved but limited capabilities
Although GPT-5 outperforms its predecessors, the gains remain too small to justify the enormous financial investment required to build and run it. Compared with earlier versions it shows only a modest improvement, far short of the leap that was expected, which raises doubts that GPT-5 will transform artificial intelligence the way earlier generations did.
Innovative strategies to fill the gaps
To improve GPT-5's performance, OpenAI has adopted several innovative strategies, chief among them creating new training data from scratch. Rather than relying only on public data and licensing agreements, the company has recruited experts to produce targeted examples, such as writing code or working through complex math problems. It also uses synthetic data generated by another of its models, o1, to diversify training sources and speed up the process; a rough sketch of that kind of pipeline follows.
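The sketch below is not OpenAI's actual pipeline, which has not been made public. It is a minimal illustration of the general technique the article describes, assuming the public chat-completions API and the o1 model name: prompt a strong model for worked examples, then store them as training records.

```python
# Minimal sketch of LLM-driven synthetic data generation, in the spirit of
# what the article describes. Assumptions: the public OpenAI Python client,
# the "o1" model as generator, and a JSONL output format of our choosing.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TOPICS = ["modular arithmetic", "combinatorics", "recursion in Python"]

def generate_example(topic: str) -> dict:
    """Ask the model for one problem/solution pair on the given topic."""
    response = client.chat.completions.create(
        model="o1",  # assumption: the generator model named in the article
        messages=[{
            "role": "user",
            "content": (
                f"Write one original practice problem about {topic}, "
                "then a fully worked solution. Label the parts "
                "'Problem:' and 'Solution:'."
            ),
        }],
    )
    return {"topic": topic, "text": response.choices[0].message.content}

if __name__ == "__main__":
    # Each line of the output file is one synthetic training record (JSONL).
    with open("synthetic_math.jsonl", "w", encoding="utf-8") as f:
        for topic in TOPICS:
            f.write(json.dumps(generate_example(topic)) + "\n")
```

Running this would yield a small JSONL file of problem/solution pairs; a real pipeline would add quality filtering and deduplication on top, details the article does not cover.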