“AI will help accelerate scientific research, much more than we can imagine”

Joëlle Barral is director of fundamental research in artificial intelligence (AI) at Google DeepMind, which brings together, from London to Silicon Valley, the research teams of the digital giant and the British AI start-up it acquired in 2014. A graduate of Polytechnique and of Stanford University, this engineer has worked at Google since 2004, notably on its health-related projects, including Verily. She answers questions about the limits and the future of the large language models on which AI assistants like Gemini rely, which Google will integrate into its search engine and its Android environment. And she expresses her faith in the potential of this technology for research, as DeepMind’s chief executive, Demis Hassabis, and the researcher John Jumper have just been awarded the Nobel Prize in Chemistry.

Where does the development of AI stand? Some speak of a financial bubble or believe that research on large language models is reaching a plateau…

AI is not new. The neural networks underlying today’s generative AI have existed for more than fifty years. The recent acceleration is due to the amount of data and computing power, which have allowed these algorithms to achieve capabilities that were not at all expected.

It’s quite rare in the world of research to see such enthusiasm across society for research results. We are truly fortunate to be at the start of a new industrial era, of a revolution. I wouldn’t say at all that we’re reaching a plateau. On the contrary, we are at the beginning of mastering a technology that will have implications across a very large number of human activities.

Despite this progress, AI assistants continue to make factual errors. Isn’t this an obstacle to their use in search engines? Is this problem solvable?

Indeed, initially we had “narrow” AI that we trained on a certain number of examples, for instance to classify images of cats and dogs. With today’s generative AI, we no longer show examples: the models learn from large corpora of data and then, given a sequence of words, predict the next word, based on how frequently it appears in their training data.
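
As a minimal sketch of the frequency-based next-word prediction she describes, here is a toy bigram model in Python. This is an illustration of the principle only, not how Gemini or any production model actually works; the corpus and function names are invented for the example.

```python
from collections import Counter, defaultdict

# Toy bigram model: count, for each word in a corpus, which word follows it,
# then predict the most frequent follower. Real models learn far richer
# statistical patterns over huge corpora, but the principle is the same.
corpus = "the cat sat on the mat and the cat saw the dog".split()

follower_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    follower_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("the"))  # -> "cat" (it follows "the" twice, others once)
```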

To avoid errors, we can first “augment” these models by teaching them to use tools: if we give an AI model a calculator, it will no longer make mistakes when doing multiplication…
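
As a minimal sketch of that tool-use idea: route anything that parses as plain arithmetic to an exact calculator, and let the model handle the rest. The routing logic and the `model_generate` stub below are assumptions for illustration, not Google’s implementation.

```python
import ast
import operator

# The "calculator" tool: an exact evaluator for plain arithmetic, built by
# walking Python's expression AST and allowing only numbers and + - * /.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate(expression: str):
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expression, mode="eval"))

def model_generate(prompt: str) -> str:
    # Hypothetical stand-in for a real language model, which might get
    # large multiplications wrong if asked to predict the digits itself.
    return f"(model's best guess for: {prompt})"

def answer(prompt: str) -> str:
    try:
        return str(calculate(prompt))   # tool path: exact, never guessed
    except (ValueError, SyntaxError):
        return model_generate(prompt)   # everything else goes to the model

print(answer("1234 * 5678"))            # -> 7006652, computed exactly
print(answer("Who acquired DeepMind?")) # falls back to the model
```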
