American scientist John Hopfield, who recently received the Nobel Prize in Physics for his pioneering research in the field of artificial intelligence (AI), has issued a stark warning about the technology’s rapid progress. Hopfield, professor emeritus at Princeton University, expressed concern about how little is still understood about the way modern AI systems work and warned of the potential danger they pose if not properly controlled.
In a video recorded in Britain and shown at a meeting at the New Jersey university, the 91-year-old researcher compared advances in AI to other powerful technologies he has witnessed over the course of his life, such as biological engineering and nuclear physics, both of which have had beneficial as well as devastating effects. For Hopfield, the key lies in understanding: “As a physicist, it worries me greatly to see something that is not controlled, something that I don’t understand well enough to know what limits might be placed on that technology.”
Today’s AI systems, with their impressive processing power and deep-learning capabilities, are described by Hopfield as “absolute marvels.” The problem, however, is that despite these advances, scientists still do not fully understand their inner workings. That gap calls into question humanity’s ability to set limits on AI and to ensure its uses are ethical and safe.
The development of AI, particularly neural networks and deep learning, has seen explosive growth since the 1980s, building on the pioneering work of Hopfield and of Geoffrey Hinton, who shared the Nobel Prize with him. Both laid the foundations for today’s AI systems, which have had a huge impact in fields from medicine to industry but have also raised concerns about their future implications.
Hopfield and Hinton, often called the “fathers” of AI, have both criticized the rapid expansion of the technology they helped create. They worry that AI is evolving faster than scientists can understand it, increasing the risk that its development will spiral out of control.
Hopfield, like Hinton, argues for intensifying research to better understand the limits and risks of AI before its capabilities outpace human oversight. As companies and governments race to master the technology, the need for limits and regulation becomes ever more pressing.
Hopfield’s warning adds to growing global concerns about the impact of AI on society. While this technology promises to revolutionize many aspects of our lives, it also raises fundamental questions about the control, ethics and responsibility of its development.