Web search in ChatGPT moves into high gear

From December 5 to 18, OpenAI is hosting 12 days of announcements and demos. New AI models and innovative features are expected.

Following the principle of an advent calendar, OpenAI has planned a slew of new features and demonstrations from December 5 to 18. JDN brings you, day by day, all the announcements worth remembering.

D8: ChatGPT Search improves again and arrives in Voice Mode

Launched two months ago for paid subscribers, ChatGPT Search lets the AI search the internet to provide up-to-date answers. Three major improvements are announced: faster, mobile-optimized search with a new maps experience, integration of search into Advanced Voice Mode, and global deployment across all platforms.

Paid and free users can now perform natural conversational searches, access multimedia content natively (images, videos), and use ChatGPT as the default search engine in their browser. OpenAI has already announced that D9 will be dedicated to developers with a “mini dev day” which promises its share of exciting new features.

D7: OpenAI introduces Projects in ChatGPT

On D7 of its Shipmas, OpenAI introduces a long-awaited feature to ChatGPT: Projects. It lets users create spaces dedicated to their different projects, where they can upload files, define personalized prompts and group together all conversations related to the same subject. Projects integrates fully with existing features like search and Canvas mode, and users who prefer a simpler approach can treat it as a folder system for organizing their discussions.

The tool has already proven itself internally, particularly for managing personal projects such as home maintenance or web development. Rolled out today to Plus, Pro and Teams subscribers outside Europe, the feature will gradually be extended to free users before reaching the Enterprise and Education versions in early 2025. An update that should considerably improve the user experience by finally providing a solution to the sometimes chaotic management of conversations in ChatGPT.

D6: ChatGPT switches to North Pole time

On the sixth day of its announcements, OpenAI adopts a decidedly festive tone by unveiling a unique voice experience with Santa Claus. All ChatGPT users can now converse directly with the famous character using Voice Mode. Accessible via a simple snowflake icon in the interface, this new voice allows for varied exchanges, from discussing life at the North Pole to sharing gift lists, including heartwarming stories about reindeer. Users can even create personalized voice messages from Santa.

A temporary feature, whose conversations will not be saved in history. It comes with a complete guide to using OpenAI's tools for the holidays: from help planning gifts with DALL-E to cooking advice in Voice Mode, including creating personalized family games. A nice communication move. Finally, OpenAI also announces the arrival of live vision in Voice Mode: users can choose whether to share their screen or their camera feed live with the AI.

D5: Apple Intelligence is arriving gradually

For its fifth day, OpenAI takes stock of its previously announced partnership with Apple Intelligence (generative AI on Apple devices), which is starting to materialize. The integration of ChatGPT into the heart of the Apple ecosystem is being rolled out gradually, with a first phase on macOS for users running the latest version of the operating system. The assistant can now remember the context of ongoing tasks, providing a more consistent experience.

iOS users within the EU will have to wait until April 2025 to benefit from these features. The demonstration notably highlighted the creation of personalized playlists (although direct opening in the application is not yet operational), as well as future integration with the camera button of the iPhone 16 for direct access to ChatGPT Vision.

D4: Canvas mode expands and becomes smarter

For the fourth day of its “Shipmas”, OpenAI radically transforms its Canvas mode into a true automated development assistant. The feature, now available to paid and free users, is no longer just a visual interface: it becomes a complete development environment where the AI can search for data, generate code, execute it and visualize the results directly in the console.
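To give an idea of the workflow, here is the kind of short, self-contained Python snippet such an environment could generate and then run end to end, with the results printed to the console. The data and the analysis are purely illustrative and do not come from OpenAI's demonstration.

```python
# Illustrative example of a snippet a Canvas-style assistant could generate
# and execute: compute simple statistics on a small in-memory dataset and
# print them to the console. Purely hypothetical, not from OpenAI's demo.
from statistics import mean, median

monthly_visits = {
    "September": 12_400,
    "October": 15_100,
    "November": 18_750,
    "December": 22_300,
}

values = list(monthly_visits.values())
print(f"Mean visits:   {mean(values):,.0f}")
print(f"Median visits: {median(values):,.0f}")
print(f"Growth Sep-Dec: {(values[-1] - values[0]) / values[0]:.0%}")
```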

Canvas is now positioned as a development co-pilot capable of supporting developers and data scientists throughout their workflow. OpenAI also extends this capability to custom GPTs. An update that moves even further towards intelligent automation of development and data analysis tasks.

D3: Sora available for ChatGPT Plus and Pro users

OpenAI rolls out its much-anticipated Sora AI video generator, ten months after its initial announcement (read our article Video generation: 10 months after its announcement, OpenAI’s Sora is (finally) here). The service is available to ChatGPT Plus and Pro subscribers in the United States, with a new dedicated interface and a faster Turbo model. Users can generate videos of up to 20 seconds in different resolutions (480p for Plus, 1080p for Pro) from text, images or existing videos.

The technology excels in camera movements and scene coherence, despite some limitations on the physics of objects. OpenAI emphasizes security with anti-deepfake filters, systematic C2PA watermarking and restrictions on imitating artist styles. Faced with massive demand, OpenAI is currently limiting access to the model.

D2: the arrival of reinforcement fine-tuning for developers

For its second day of announcements, OpenAI is targeting businesses with a major innovation: reinforcement fine-tuning (RFT). The technique allows developers to adapt OpenAI models to complex tasks using their own datasets and evaluation criteria. The technology, which will be accessible via an API in alpha version in early 2025, particularly targets the fields of law, health, finance and engineering. The goal is to obtain expert models capable of providing highly accurate answers in specialized domains where objectively correct answers exist.
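As a rough illustration only, the sketch below shows what submitting such a job could look like from Python, built on the shape of OpenAI's existing fine-tuning endpoint. The "reinforcement" method name, the grader configuration and the model identifier are assumptions made for the example; the actual alpha API may expose a different interface.

```python
# Hypothetical sketch of a reinforcement fine-tuning (RFT) job, built on the
# shape of OpenAI's existing fine-tuning API. The method name, grader fields
# and model identifier are assumptions for illustration, not the confirmed
# alpha interface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a domain-specific dataset of tasks with objectively correct answers
# (e.g. legal or medical questions paired with reference answers).
training_file = client.files.create(
    file=open("expert_tasks.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the job: each model answer is scored against the reference answer,
# and that score serves as the reinforcement signal.
job = client.fine_tuning.jobs.create(
    model="o1-mini-2024-09-12",        # hypothetical target model
    training_file=training_file.id,
    method={
        "type": "reinforcement",        # assumed method name for RFT
        "reinforcement": {
            "grader": {                 # assumed grader: exact string match
                "name": "exact_match",
                "type": "string_check",
                "input": "{{sample.output_text}}",
                "reference": "{{item.correct_answer}}",
                "operation": "eq",
            },
        },
    },
)

print(job.id, job.status)
```

The key idea of the evaluation criteria (the grader) is that the developer, not OpenAI, defines what counts as a correct answer in their domain.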

D1: a new subscription and the final version of o1

For the first day, OpenAI hits hard and presents the final version of o1, its reasoning model. This final version, which succeeds the o1-preview launched last September, marks a significant advance in benchmark performance. The model, available in the paid version of ChatGPT, now achieves a 78% success rate on AIME 2024 competitive math problems and rises to the 89th percentile on Codeforces programming challenges.

In the process, the company unveils ChatGPT Pro, a new $200 monthly offer aimed at professionals and researchers needing increased computing power. The subscription provides access to the entire OpenAI ecosystem (o1, o1-mini, GPT-4o, Advanced Voice) and introduces o1’s “pro mode”, an optimized version that mobilizes more computational resources for the most complex tasks.
