Meta will use Reuters news content in its Meta AI virtual assistant

Meta, the champion of social platforms (Facebook, Instagram, WhatsApp, Messenger), will use content from the Reuters news agency so that its artificial intelligence chatbot, Meta AI, can answer users' questions about current events in real time.


Meta AI is strategic for Meta

Meta AI is strategic to the development of Meta's social networks, and Meta is pushing for it to be used more and more widely. The whole approach rests on Meta's decision to release its Llama large language models (LLMs) as open source in order to win the support of the developer community.

"We can confirm that Reuters has partnered with technology providers to license our news content"

Meta AI, the company's chatbot, is available across all its services, including Facebook, WhatsApp and Instagram. The social media giant has not revealed whether it plans to use Reuters content to train its large language model (LLM). "We can confirm that Reuters has partnered with technology providers to license our trusted, fact-based news content to power their AI platforms. The terms of these agreements remain confidential," a Reuters spokesperson said in a statement. Reuters will be remunerated for this access to its content.

Thanks to this partnership with Reuters, "Meta AI can answer news-related questions with summaries and links to Reuters content," a Meta spokesperson says in a press release sent by email. Reuters already has a fact-checking partnership with Meta, which began in 2020.

Agreements between tech companies and media

Other companies, including OpenAI, the creator of ChatGPT, and Perplexity, a startup backed by Amazon founder Jeff Bezos that is developing a next-generation search engine, have entered into similar AI partnerships with news organizations. OpenAI has signed an agreement with Le Monde to access its content, while Radio France has barred OpenAI from accessing its own.

Meta estimates that there are now more than 3.2 billion people worldwide using at least one of its platforms

Susan Li, Meta's chief financial officer (CFO), underlined the importance of the Meta AI virtual assistant during Meta's earnings presentation on October 30, 2024, alongside Mark Zuckerberg, Meta's CEO. The agreement with Reuters comes as Meta looks particularly powerful as a communications group: the company estimates that 3.2 billion people around the world now use at least one of its platforms (Facebook, Instagram, WhatsApp, Messenger) every day.

Susan Li, Meta CFO

Meta's CFO shows detailed knowledge of Meta's products and AI. "Usage of Meta AI continues to grow as we make it available in more countries and languages. We're seeing an increase in its usage as we improve our models, and we have introduced a number of improvements over the last six months to make Meta AI more useful and engaging," she says.

Meta AI evolves towards multimodality

Meta AI is evolving towards multimodal interaction. "Last month we started introducing Voice, so you can speak with Meta AI more naturally, and it's now fully available in English for people in the US, Australia, Canada, and New Zealand," she says.

"Users can now upload photos to Meta AI to learn more about them"

"In the US, users can now also upload photos to Meta AI to learn more about them, write captions for posts, and add, remove, or edit elements on their images with a simple text prompt. All of this is built with our first multimodal base model, Llama 3.2," she explains.

"We are excited about the progress of Meta AI. It's obviously very early in its journey, but it continues to be on track to become the most widely used AI assistant in the world by the end of the year, and it now has over 500 million monthly active users," she says.

Gathering information and helping with practical tasks

"People use it for many things. Common use cases we see include information gathering and help with practical tasks, which is the biggest use case. But we also see people using it to dig deeper into interests, to search for content on our services, and to generate images, which is another pretty popular use case so far," she says.

"In the short term, our goal is really to make Meta AI more and more useful for people"

"In the short term, our goal is really to make Meta AI more and more useful to people, and if we succeed, we think there will be a broader and broader set of queries that people will use it for, including more monetizable queries over time," she says.

Meta AI draws on web content to answer user questions and provides sources for these results from partner search engines. "We've integrated Bing and Google, both of which offer great search experiences. Like other companies, we also train our generative AI models on publicly available content online and crawl the web for a variety of purposes," she continues.

Internal use of AI to accelerate computer coding

Does Meta use its AI internally? "For example, it's still early, but we're seeing massive internal adoption of our internal assistant and coding agent, and we're continuing to make Llama more efficient for coding, which should also make this use case increasingly valuable to developers over time," comments the CFO.

"We're focused on making Meta AI as engaging and helpful as possible"

Meta also hopes to deploy these AI tools across a large part of its content moderation efforts in order to be more effective at that task. "Right now, we're really focused on making Meta AI as engaging and useful a customer experience as possible," she explains.

"Over time, we think there will be an increasingly broad set of queries that people will use it for, and I think monetization opportunities will exist over time as we get there," she says. "But for now, I would say we're really focused on the customer experience first and foremost, and that's kind of a playbook for us with the products that we bring to market: we really define the customer experience before we focus on what monetization could look like," she concludes.

Open Source distribution of Meta’s Llama models

The Meta AI approach rests on Meta's artificial intelligence strategy, which calls for the open-source distribution of its Llama large language models so that they can be used and improved by the developer community. The Llama range of LLMs (large language models) powers Meta's tools, namely Meta AI, AI Studio and Business AIs.

"Many independent researchers and developers are working on Llama and making improvements"

Open source helps Meta move forward. "There are a lot of independent researchers and developers working on Llama, and they make improvements and then release them, and it becomes very easy for us to integrate that both into Llama and into our Meta products like Meta AI, AI Studio or Business AIs," says Mark Zuckerberg, Meta's CEO, during the presentation of Meta's quarterly results on October 30, 2024.

The executive sees this as an opportunity to improve the efficiency of Meta's AI and lower its cost. "This hardware is obviously very expensive. When someone finds a way to make this work better, if they can make it work 20% more efficiently, then that will save us a huge amount of money," he adds.

Exponential growth in the use of Llama

Mark Zuckerberg sees strong momentum behind Llama, with Llama token usage growing exponentially in 2024. "And the more widely Llama is adopted and becomes the industry standard, the more the improvements in its quality and efficiency will trickle down to all of our products," he says.

Meta released Llama 3.2, including small models that run on devices and open-source multimodal models

In the third quarter of 2024, Meta released Llama 3.2, including small models that run on devices and open-source multimodal models (which combine voice, image and text). "We're working with businesses to make it easier to use, and we're also now working with the public sector to adopt Llama across the U.S. government," he adds.
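The article does not go into implementation details, but as an illustration of what these openly distributed models mean for developers, here is a minimal sketch of loading one of the small Llama 3.2 models locally with Hugging Face Transformers. The model ID, prompt and generation settings are illustrative assumptions rather than details from the article, and access to the weights requires accepting Meta's license on Hugging Face.

```python
# Minimal sketch (illustrative, not from the article): running a small Llama 3.2
# model locally with Hugging Face Transformers. Model ID, prompt and settings
# are assumptions; the gated weights require accepting Meta's license.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # one of the small, device-friendly models
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat-style prompt and generate a short answer.
messages = [{"role": "user", "content": "Why do open-source LLMs matter to developers?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=128)

# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```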

Meta has invested heavily in training its Llama 4 LLM. "We are training the Llama 4 models on a cluster bigger than 100,000 H100s [H100s are AI-dedicated GPU processors designed by Nvidia], or bigger than anything that I've seen reported for what others are doing," says Mark Zuckerberg.

Progress on small language models

These new LLMs will be available in 2025. "I expect the smaller Llama 4 models to be ready first, and they will hopefully be ready early next year [2025], and I think they will be something important, on several fronts: new modalities, new capabilities, stronger and much faster reasoning," the executive enthuses.

"Open source will be the most cost-effective, most customizable, most reliable, most efficient and easiest-to-use option"

He defends his choice of open source for his AI models. "It's clear to me that open source will be the most cost-effective, customizable, reliable, high-performance, and easiest-to-use option available to developers, and I'm proud that Llama is leading the way in this area," he affirms.

2024 will have been the year of Meta AI, Meta's virtual assistant. "This year we've really focused on deploying Meta AI as kind of a one-stop assistant that people can ask all their questions to," says the executive. Next year should bring plenty of opportunities. "In my opinion, they will develop further over the next year [2025] in terms of consumer and professional use cases, with people interacting with a wide variety of different AI agents," he announces.

Every company must be able to implement an AI agent

On the business side, Meta wants any small business or large enterprise to be able, over time and in just a few clicks, to set up an AI agent that can help provide customer service and sales to all of their customers around the world.

Meta ended the third quarter of 2024 with 72,400 employees, up 9% from the prior year, with growth primarily driven by hiring in its focus areas of Monetization, Infrastructure, Reality Labs, generative AI, as well as regulation and compliance.
