In an interview with BBC Radio 4, the researcher did not mince his words: "There is a 10-20% chance that artificial intelligence will lead to the extinction of humanity within the next 30 years." This is not his first alarming prediction. Already in 2023, he warned that, without regulation, AI presented a one-in-ten chance of destroying the human species. He also resigned from Google that year so that he could "speak freely about the risks of AI".
Competition as a risk factor
For the Nobel Prize winner, international regulation is crucial. "Never in history has a person been able to control something more intelligent than themselves," he recalled, comparing humanity to a three-year-old child facing a super-powerful AI: "When faced with a three-year-old, we are in control. But with AI, we are the three-year-old."
The danger, he says, lies in the fierce competition between large technology companies, which are prepared to put profits ahead of ethical concerns. Without a strict legislative framework, these players could develop AI systems beyond any control.
Currently, the regulatory regime for AI is almost non-existent. Apart from the courageous European AI Act, no national or international framework imposes binding limits. Companies therefore police themselves on the basis of ethical principles, but this dynamic is unstable: the risk is that a company will abandon those principles in order to catch up with a less scrupulous competitor.
According to Geoffrey Hinton, salvation therefore lies in firm regulation, imposed by governments and focused on the safety of these systems. If humanity manages to regain control, AI could then become a powerful ally, capable of tackling the great challenges of our time and relieving us of the repetitive tasks of everyday life.