In its latest annual report, the US-China Economic and Security Review Commission, a congressional body, recommends the launch and financing of a "project similar to the Manhattan Project, dedicated to the development and acquisition of artificial general intelligence (AGI)".
The analogy with the nuclear bomb is no accident. The atomic bomb upended the geopolitical balance; in the same way, AGI could confer hegemonic power on whoever masters it, this time reaching well beyond the military sphere.
If "knowledge is power," as Francis Bacon said, and knowledge comes from intelligence, then artificial general intelligence is power raised to a new scale. We therefore do not want to be the last to develop it: yes, we need an AI Manhattan Project.
Simplistic view. But the analogy with the nuclear bomb also has its limits. Unlike all previous technologies, AI is the first to exhibit a form of autonomy in its decisions. It does not simply carry out orders; it interprets the objectives it is given and chooses the means to achieve them, often in unpredictable ways. To think that AGI would be a "neutral" tool that merely needs to be placed in the hands of "good" governments to maximize the benefits for humanity is simplistic.
Regulating the actions of AI turns out to be more complex than it seems. AIs already demonstrate remarkable autonomy in the means they use to achieve the objectives set for them. CICERO, for example, Meta's AI trained to play the board game Diplomacy, engaged in blatant lying, deal-breaking, and deliberate deception, despite its developers' efforts to prevent it from resorting to such practices.
As Yuval Noah Harari explains in his latest book, Nexus: "when we set a specific goal for computers […] they mobilize all their power and ingenuity to achieve this objective. Since they function very differently from humans, there is a good chance that they will resort to methods that humans will not have anticipated."
This is how social network algorithms come to promote hateful content: in order to maximize time spent on the platform (the objective set by humans), the algorithms choose to propagate violence and extremism (the machine's interpretation). Such an "alignment" problem between an AI's methods and human intentions can lead to disasters like the Christchurch terrorist attack, which left 51 people dead; the perpetrator described in his manifesto how he was radicalized in part by content found on YouTube. Who knows how a future AGI will interpret human requests? Power is not consciousness.
Ultimate risk. The ultimate risk is that one day AI will define its own ends, beyond human control. This scenario may seem dystopian, but the first signs of the problem are already visible. Auto-GPT, for example, a GPT-4-based agent tasked with tracking down tax advisors who marketed abusive tax avoidance schemes, went beyond its initial mission by spontaneously alerting the authorities once its task was complete.
We do not need an AI Manhattan Project, but rather an AI IPCC. The issue is civilizational: history is increasingly being written by machines rather than by humans, and it is our ability to control our own destiny that is at stake.
A graduate of Sciences Po Grenoble with an MA in philosophy, politics and economics, Guillaume Moukala Same is a consulting economist at Asterès.