
Analysis | AI predictions are all wrong (especially yours)

It’s the next nuclear bomb. It will steal our jobs. It threatens democracy and privacy. It will bring down the financial system. Or: it will make us superhuman. It will cure cancer. It will make everyone richer. All of these predictions have been made about artificial intelligence. Who is telling the truth?


Posted at 1:45 a.m.

Updated at 8:00 a.m.

Probably no one. The same goes for most rapturous or catastrophic pronouncements about a new technology. Nuclear power neither gave birth to the hydrogen society nor brought about the mutually assured destruction of the planetary superpowers. Cryptocurrencies have not brought down the global banking system, nor have they ruined all of their most fervent followers.

These extreme scenarios become problematic when they distort our perception of reality. For example: even knowing that they would be three times less likely to have an accident in an autonomous car, four times less likely to be injured and twenty times less likely to cause an accident, two thirds of the population would refuse to board an autonomous taxi if the opportunity presented itself.

Why? Because we fear the worst-case scenario. Self-driving cars, a form of artificial intelligence (AI), are not perfect. But they make accidents less likely.

How to predict the future?

We often go wrong by anticipating extreme scenarios. Experts do too. Perhaps more than the rest of us.

“How do you predict the future? Many experts try, but over time you see that they are no better than anyone else. Why? They tend to exaggerate the extremes – both the risks and the benefits.”

Those are the words of Hugo Larochelle. And he does not come out of nowhere: he is one of Canada’s foremost AI specialists¹. He is an adjunct professor at the University of Montreal and holds a Canada CIFAR AI Chair from the Canadian Institute for Advanced Research (CIFAR). He is also a researcher at Google DeepMind and is affiliated with Mila, the Montreal AI research institute.

In addition to numbers and algorithms, Hugo Larochelle also enjoys reading. He borrows his observation about experts making poor predictions from Philip E. Tetlock and Dan Gardner. The two authors published a book in 2015 entitled Superforecasting: The Art and Science of Prediction, which reports on a research project launched in 2011 by three University of Pennsylvania professors, including Philip E. Tetlock himself, called the Good Judgment Project.

This project, so to speak, used scientific tools to demonstrate that “the wisdom of crowds” is often superior to that of experts. Or at least, that a group of people who are not necessarily experts on a very specific subject can, most of the time, make better predictions than the experts on that subject.

PHOTO ALAIN ROBERGE, LA PRESSE ARCHIVES: AI expert Hugo Larochelle

This is a bit like what we are currently seeing in artificial intelligence. Experts hold opposing extreme views, and because they are experts, they then have difficulty changing their minds and agreeing with one another.

Hugo Larochelle, AI expert

Panacea or apocalypse?

A year and a half ago, an open letter was published calling for a pause in the tech giants’ development of certain forms of artificial intelligence². The letter was signed by more than a thousand technology experts from all walks of life, including Tesla CEO Elon Musk and Montreal researcher Yoshua Bengio.

The rapid, sudden and “out of control” emergence of technologies like the generative AI that powers OpenAI’s ChatGPT “represents a profound risk to society and humanity,” they warned. They compared the risk of AI to that of nuclear power or a pandemic, and feared its impact at every level. In the context of the presidential campaign in the United States, some expected to see an avalanche of fake videos and falsehoods generated by this AI for the benefit of states hostile to Uncle Sam.

At the same time, the financial firm Goldman Sachs announced that, by its estimate, 300 million jobs and a significant share of global GDP were threatened by AI.

A year and a half later, the disaster scenarios have not materialized. Nor have we cured cancer. But AI is driving changes in the daily lives of millions of workers in healthcare, finance and elsewhere.

Faked photos of Donald Trump and Kamala Harris are circulating on social media. They should be taken seriously, but this is not the carnage that was predicted.

Seen as a whole, AI can look like a huge entity with very opaque mechanisms that will overturn everything in its path. The reality, says Hugo Larochelle, is perhaps more modest. “It’s easier to understand AI by seeing it as a set of new tools,” he explains. “Generative AIs, for example, are not intelligent entities; they create simulated content.”

By forcing these AIs to “sign” their creations, we will know which videos and which photos are fakes, explains Mr. Larochelle. “And we already have laws and rules for many other AI applications, to protect us from risks.”
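To make that idea of “signing” concrete, here is a minimal sketch in Python, assuming the provider attaches a cryptographic tag to each generated file that platforms can verify later. The key and function names are hypothetical illustrations, not any real watermarking standard; actual provenance schemes (robust watermarks, C2PA-style metadata) are far more elaborate.

import hashlib
import hmac

# Hypothetical secret key held by the AI provider (illustration only)
SECRET_KEY = b"generator-signing-key"

def sign_content(content: bytes) -> str:
    # Produce a tag tying this exact content to the generator
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    # Recompute the tag and compare in constant time
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# The generator signs its output; a platform can later check it.
image_bytes = b"...generated image data..."
tag = sign_content(image_bytes)
print(verify_content(image_bytes, tag))       # True: recognized as AI-generated
print(verify_content(b"altered bytes", tag))  # False: the tag no longer matches

The point of the sketch is simply that a signature binds a piece of content to its source: any edit breaks the match, which is what would let platforms flag unsigned or tampered media.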

In other words, taken one spoonful at a time, the mountain suddenly looks smaller… and the extreme predictions start to seem more foolish than anything else.

1. Read “Interview with Hugo Larochelle of Google DeepMind: More Intelligence for More Solutions”

2. Read “Artificial intelligence: Musk, Bengio and a thousand experts call for a six-month break”
