As the holiday season arrives, many businesses are looking for ways to take advantage of it through deals, promotions or other campaigns. OpenAI found a way to participate with its event called “12 Days of OpenAI”.
On Wednesday, OpenAI announced the event on its X account.
Also read: OpenAI’s Sora video generator is here – how to try it
Here’s everything you need to know about the campaign, plus a recap of what’s new each day.
What is “12 Days of OpenAI”?
OpenAI CEO Sam Altman shared more details about the event, which began at 10 a.m. PT on December 5 and will run every weekday for 12 days. Each livestream will feature a launch or demo, with “major” new features or “little surprises,” according to Altman.
What new features have been announced so far?
Wednesday December 11
Apple released iOS 18.2 today. The update includes ChatGPT integrations across Siri, Writing Tools, and Visual Intelligence, so today’s livestream focused on those integrations.
- Siri can now recognize when a question is beyond its scope and could be better answered by ChatGPT. In these cases, it asks whether you want to process the request via ChatGPT; a message informing you and requesting permission always appears before anything is sent, keeping control in the user’s hands.
- Visual Intelligence is a new feature for the iPhone 16 line: by pressing the Camera Control button, users can point their camera at something to search the web with Google, use ChatGPT to learn more about what they’re looking at, or accomplish other tasks such as translating or summarizing text.
- Writing Tools now has a new “Compose” tool, allowing users to create text from scratch using ChatGPT. With this feature, it is even possible to generate images using DALL-E.
All of these features count against ChatGPT’s daily usage limits, the same limits that apply to the free version of the model. Users can choose whether to enable the ChatGPT integration in Settings.
Tuesday December 10
- Canvas is now open to all web users in GPT-4o, regardless of plan; it is no longer a beta limited to ChatGPT Plus users.
- Canvas has been natively integrated into GPT-4o, allowing direct access to it without having to go through the model selector.
- The Canvas interface is the same as the ChatGPT Plus beta: a panel on the left shows the back-and-forth of questions and answers, while a tab on the right displays your project, showing modifications as they are made, along with shortcuts.
- Canvas can also be used with custom GPTs: it is enabled by default when creating a new GPT, and it can be added to existing GPTs.
- Canvas also has the ability to run Python code directly, allowing ChatGPT to perform coding tasks such as bug fixing.
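Neither the announcement nor the livestream included sample code, but a minimal, hypothetical sketch of the kind of bug-fixing task Canvas’s built-in Python runner is designed for might look like this (the function and values are illustrative, not from the demo):

```python
# Hypothetical snippet a user might paste into Canvas and ask ChatGPT to debug.
def mean(values):
    return sum(values) / (len(values) - 1)  # bug: off-by-one denominator

print(mean([2, 4, 6]))  # prints 6.0 instead of the expected 4.0

# Because Canvas can execute the code directly, the wrong output is visible
# immediately, and the fix is to divide by the actual count:
def mean_fixed(values):
    return sum(values) / len(values)

print(mean_fixed([2, 4, 6]))  # prints 4.0
```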
Monday December 9
OpenAI teased the day-three announcement by describing it as “something you’ve been waiting for,” followed by the highly anticipated reveal of its video model, Sora. Here’s what you need to know:
- Nicknamed Sora Turbo, the video model is more efficient than the model presented last February.
- Access rolls out in the United States later today; users only need a ChatGPT Plus or Pro subscription.
- Sora can generate videos from text prompts, from images, and from other videos.
- ChatGPT Plus users can generate up to 50 videos per month at 480p resolution, or fewer videos at 720p. The Pro plan offers 10x more usage.
- The new model is smarter and less expensive than the model presented in February.
- Sora has an explore page where users can view the creations of others. Users can click on any video to see how it was created.
- A live demonstration showed the model in action: the presenters entered a prompt and chose the aspect ratio, duration, and even presets. I found the resulting demo video realistic and impressive.
- OpenAI also unveiled Storyboard, a tool that lets users specify an input for each frame in a sequence.
Friday December 6
During the second day of “shipmas,” OpenAI expanded access to its Reinforcement Fine-Tuning Research Program:
- The Reinforcement Fine-Tuning program allows developers and machine-learning engineers to fine-tune OpenAI’s models to “excel at specific sets of complex, domain-specific tasks,” according to OpenAI.
- Reinforcement Fine-Tuning is a customization technique in which developers shape a model’s behavior by giving it tasks and grading its outputs. The model then uses those grades as a guide to improve, becoming better at reasoning through similar problems and increasing its overall accuracy (see the sketch after this list).
- OpenAI encourages research institutes, universities, and companies to apply for the program, particularly those that perform narrow sets of complex tasks that could benefit from AI assistance and whose tasks have objectively correct answers.
- Spots are limited; interested candidates can apply by filling out this form.
- OpenAI plans to make Reinforcement Fine-Tuning publicly available in early 2025.
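OpenAI has not published the program’s data format or API, but conceptually each training example pairs a prompt with an objectively correct answer, and a grader scores the model’s output to produce the reinforcement signal. A minimal Python sketch of that loop follows; every name in it is illustrative, not part of OpenAI’s API:

```python
# Illustrative sketch of the reinforcement fine-tuning idea described above.
# None of these names come from OpenAI's API; this only demonstrates the
# concept: tasks with objectively correct answers, a grader, and a reward.

from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    reference_answer: str  # the objectively correct answer

def grade(model_answer: str, task: Task) -> float:
    """Grader: 1.0 for a correct answer, 0.0 otherwise."""
    return 1.0 if model_answer.strip() == task.reference_answer else 0.0

def toy_model(prompt: str) -> str:
    # Stand-in for the model being tuned; it always answers "42".
    return "42"

tasks = [
    Task(prompt="What is 6 * 7?", reference_answer="42"),
    Task(prompt="What is 9 + 10?", reference_answer="19"),
]

# In real reinforcement fine-tuning, these grades would drive parameter
# updates so the model improves on similar domain-specific problems.
for task in tasks:
    reward = grade(toy_model(task.prompt), task)
    print(f"{task.prompt!r} -> reward {reward}")
```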
Thursday December 5
OpenAI is off to a strong start, rolling out two major updates to its chatbot: a new ChatGPT subscription tier, ChatGPT Pro, and the full version of the company’s o1 model.
The full version of o1:
- Performs better across all types of queries, not just math and science
- Reduces major errors by approximately 34% compared to o1-preview, while thinking 50% faster
- Is rolling out today, replacing o1-preview for all ChatGPT Plus and Pro users
- Allows users to upload images, as demonstrated, for multimodal reasoning (reasoning over both text and images)
ChatGPT Pro:
- Is aimed at ChatGPT Plus super users, giving them unlimited access to the best of OpenAI, including unlimited use of OpenAI o1, o1-mini, GPT-4o, and Advanced Voice Mode
- Contains an o1 pro mode, which uses more computing resources to solve the most difficult science and math problems
- Costs $200 per month
Where can you watch the livestreams?
Livestreams are held on OpenAI’s website and published on its YouTube channel immediately afterward. For easy access, OpenAI also posts a link to the livestream on its X account about ten minutes before the start time, roughly 10 a.m. PT / 1 p.m. ET each day.
What can you expect?
The announcements remain a surprise, but many anticipate that Sora, OpenAI’s video model initially announced in February, will launch as part of one of the big reveals. Since that initial announcement, the model has been made available to a select group of testers and was leaked last week by some of them due to grievances over “unpaid work,” according to reports.
Also read: OpenAI’s o1 model makes errors more often than any other major AI model. Why that matters
Other rumored launches include a more comprehensive version of the company’s o1 model with advanced reasoning capabilities, as well as a Santa voice for OpenAI’s Advanced Voice Mode, according to code users spotted a few weeks ago under the codename “Straw.”
We are witnessing an important turning point in artificial intelligence, with OpenAI positioning itself as a key player. Recent developments, notably the integration of ChatGPT into consumer products like iOS, mark a significant step toward widespread, democratized use of AI. That shift raises ethical questions about how data and user preferences are handled, which will need to be managed carefully; a balance between innovation and responsibility will be essential to guide this technological transformation.