Alarm bells are ringing: a group of specialists, including two of the founding fathers of artificial intelligence, has met to discuss the catastrophic risks of AI.
This is not the first time specialists have warned about the dangers of artificial intelligence. But this time, it is the fathers of artificial intelligence themselves who are worried about the “catastrophic risks” of this technology. To prevent AI from escaping human control, these scientists propose a regulatory framework under which models would be recorded in a registry.
Do the fathers of AI think AI could surpass human intelligence?
Specialists in the field of artificial intelligence hold differing opinions about the risks inherent in this technology. While some think AI is a relatively harmless tool whose sole mission is to assist humans, others fear it could slip beyond human control.
Among those concerned about the negative consequences of this technology are Geoffrey Hinton and Yoshua Bengio. These two specialists are considered founding fathers of artificial intelligence.
Remember that they received the Turing Award in 2018 for their work on deep learning, alongside the Frenchman Yann LeCun. Not long ago, these two men took part in the third IDAIS meeting (International Dialogues on AI Safety), which took place in Venice. Note that this is one of the major events bringing together different specialists in the sector.
This group of specialists published a statement that focuses on the risks posed by the arrival of an AI capable of surpassing human intelligence. This could happen very soon, or at the latest within a few decades.
A regulatory framework offering three solutions
The statement, also signed by Zhang Ya-Qin, former president of the Chinese giant Baidu, and Stuart Russell, puts forward three proposals to help avoid catastrophic scenarios.
To begin with, these AI experts suggest registering AI models. The international community should put in place agreements and institutions to register these models and report any incidents.
Next, all AI developers must be able to demonstrate the safety of their models if their capabilities ever exceed certain thresholds. Finally, it will be necessary to undertake independent global research into AI safety. This research should make it possible to verify the validity of the safety claims made by developers.
Note, however, that the statement does not specify what could constitute a risk or what thresholds should be set.
Remember that this is not the first time Geoffrey Hinton has spoken on this subject. In 2023, he resigned from Google in order to freely share his apprehensions about the development of artificial intelligence.
And you, what is your opinion? Do you think AI could actually surpass human intelligence? Feel free to tell us in the comments.