Geoffrey Hinton, an emblematic researcher considered one of the "founding fathers" of artificial intelligence, alongside Yann Le Cun and Yoshua Bengio in particular, has recently intensified his warnings about the risks posed by artificial intelligence. In an earlier statement, he estimated a 10 to 20 percent chance that AI would cause human extinction within 30 years. That forecast, speculative as it was, already illustrated the depth of his concerns about the technology's rapid evolution; in a recent interview, he shortened the timeframe of this estimate to 10 to 20 years.
Hinton has been emphasizing for years that advances in AI are outpacing initial predictions and bringing the emergence of artificial general intelligence (AGI) unexpectedly close. This type of AI, ultimately capable of surpassing human cognitive abilities in almost every domain, could, in his view, make machines uncontrollable. It should be stressed, however, that this remains a hypothesis: an alarming opinion, albeit one grounded in his very deep understanding of AI systems, rather than an established fact.
Recently, when asked by a presenter on BBC Radio 4's Today programme whether anything had changed since his 2023 estimate of the risk of humanity going extinct because of AI, he replied: "Not really. I think in 10 to 20 years, possibly. We've never had to face things smarter than us."
He then compared the situation to the Industrial Revolution, while noting that this time the consequences could be far more profound and potentially dangerous for humanity: "These things (AI) are smarter than us. During the Industrial Revolution, there was no risk of machines taking the place of men, because they were 'just' stronger. We remained in control because we had the intelligence. Today there is a threat that these things could take control, so it's very different."
A turning point in Hinton's career
In May 2023, Hinton left his job at Google so that he could speak freely about the dangers of AI without risking compromising his former employer, as he explained to the BBC at the time. In a public statement, he said he partially regretted his involvement in the development of AI: "Part of me regrets my life's work."
In the face of these threats, Hinton advocates stronger government regulation and closer oversight of developments in the AI sector. In his view, the economic motivations of large companies are not enough to guarantee the safe use of these technologies; only firm policy intervention could force players in the field to invest in AI safety research. "The only thing that can force these big companies to do more research on safety is government regulation," he told Radio 4. "So I strongly believe that governments need to force large companies to conduct in-depth safety research."
Of course, Hinton's pessimism is not shared by all artificial intelligence experts. Yann Le Cun, another major figure in the field (who, like Hinton, received the 2018 Turing Award for his work on deep learning), has defended the idea that AI could, on the contrary, play a key role in solving the major challenges facing humanity. For him, the emphasis should be on learning to control AI rather than on fears about its future applications.
Between hope and fear: a fractured debate
Other experts call for a more balanced framing of these concerns. They believe that present-day dangers, such as disinformation, systemic bias or the malicious use of AI, must remain the priority in these debates. A declaration signed in May 2023 by hundreds of specialists nevertheless affirms that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Hinton's warnings are compelling on several levels. They are not just a hypothetical fear of the future, but point to the very real dilemmas that humanity faces when it comes to control and technological ethics. The question is not only whether machines will surpass humans, but how our society can anticipate the impacts on its social, economic and environmental structures. "I fear that although it leads to a huge increase in productivity, which should be good for society, it will end up being very bad for society if all the profits go to the rich and a lot of people lose their jobs and become poorer," he added.
Rather than giving in to the fascination of dystopian scenarios or lapsing into naive optimism, other experts say, it is better to encourage open dialogue. AI, despite being a human-made tool, could forever change the nature of human decision-making, creativity, and even responsibility.
In this period of rapid change, Hinton's warnings are a reminder that technology by itself is not enough: its direction and goals depend on the values we choose to uphold. Rather than an insurmountable obstacle, AI could be seen as the "ultimate test" of our ability to act collectively for a sustainable future.