ChatGPT and others: faster and less reliable

Those who expect that, over time, ChatGPT and its ilk will produce fewer falsehoods had better be patient: the most recent tests show that with two different "learning" methods, these conversational agents make even more mistakes than before when asked simple questions.

The two methods in question, in the world of AI developers, are "training" these systems with more data and more computing power, and "fine-tuning" them in response to human feedback.

A team from the Polytechnic University of Valencia, in Spain, tested the two methods on ChatGPT from OpenAI, LLaMA from Meta, and BLOOM from BigScience. The result: these "large language models," as they are called, become better at answering complicated questions, such as solving a long anagram, but worse at questions as simple as addition.

The study was published on September 25 in the journal Nature under the telling title "Larger and more instructable language models become less reliable."

The consequence is that with either of the two learning methods, these systems' "capacity" to state falsehoods increases. And the machines do not realize it: they are unable to decline to answer a question when they do not know the answer, nor to warn the human who asked with something like "be careful, I may have made a mistake."

In other words, humility is not part of their programming.

It is this same observation that led a trio of researchers in philosophy and social science earlier this year to propose the term "bullshit" to describe the propensity of these AIs to say just about anything (AI developers prefer the term "hallucinations," which has been criticized for "humanizing" the machine too much).

Humans would therefore be well advised, the Spanish researchers warn, not to trust these AIs' answers, however impressive they may be. For the time being, AI seems doomed to keep producing falsehoods, and experts have no solution in sight.
