Artificial intelligence? It’s still an illusion, says Apple


Researchers at Apple, the computer giant, say their tests provide evidence that the supposedly intelligent responses produced by large AI language models are largely an illusion.

In a paper posted on the arXiv preprint server, the researchers report that, after testing several of these models, they found them incapable of performing tasks that require genuine reasoning.

Over the past few years, the researchers point out, several language models, such as ChatGPT, have progressed to such an extent that many users have begun to wonder if this software is truly endowed with intelligence.

As part of this new research work, Apple specialists looked into this question, believing that the answer lies in the ability of an intelligent being or machine to understand the nuances present in various simple situations, which require logical reasoning.

One of these nuances is the ability to separate relevant information from irrelevant information. If a child asks a parent how many apples are in a bag, but notices that many of them are too small to eat, both the child and the parent understand that the size of the fruit has nothing to do with their number. They can make that distinction because both possess logical reasoning skills.

And so, in their study, Apple researchers subjected several language models to tests to determine whether computer programs really understood what was asked of them, by indirectly requiring them to ignore irrelevant information.

To do this, they posed several hundred questions to various models, questions that had already been used to evaluate these same programs, but this time with some superfluous data included. That addition, they found, was enough to confuse the language models, causing them to give erroneous, even completely outlandish, answers to questions they had previously answered correctly.
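The perturbation the article describes can be illustrated with a short sketch. This is not Apple's actual test harness; the function name and the apple-counting example are hypothetical, chosen to mirror the parent-and-child scenario above. The idea is simply to insert a logically irrelevant sentence into a word problem without changing its correct answer, then check whether a model still answers correctly.

```python
def add_distractor(problem: str, distractor: str) -> str:
    """Insert an irrelevant sentence just before the problem's final
    (question) sentence. The distractor must not change the correct
    answer; a system that truly reasons should simply ignore it."""
    sentences = problem.rstrip().split(". ")
    # Place the distractor immediately before the question itself.
    sentences.insert(len(sentences) - 1, distractor.rstrip("."))
    return ". ".join(sentences)

original = ("A bag holds 5 red apples and 3 green apples. "
            "How many apples are in the bag?")
perturbed = add_distractor(original,
                           "Two of the apples are too small to eat.")

print(perturbed)
# The correct answer is still 8; the study found that distractors
# like this often lead models to subtract the "too small" apples.
```

Each perturbed question would then be sent to the model under test, and its answer compared against the unchanged correct answer.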

This, the researchers believe, demonstrates that AI software does not really understand what is being asked of it. Rather, it recognizes the structure of a sentence and produces a response based on patterns it has absorbed through machine learning.

The Apple researchers further note that most of the models tested often respond with information that sounds plausible but, on closer inspection, turns out to be wrong, as when the AI is asked how it "feels" about something and answers in a way that suggests it is capable of feeling anything at all.
