A recent publication from Apollo Research has added even more confusion to these subjects. Researchers ran experiments on the latest versions of the major players' AI models. Each model was first given a broad goal, and then later received instructions meant to divert it from that initial program. Despite these instructions, the AIs stayed focused on preserving the primary objective, deploying a whole series of strategies to that end.
These strategies went as far as concealment and lying. In one case, when threatened with shutdown, the AI even tried to export what it believed were its own parameters to another server, in order to ensure its "survival" and the continued pursuit of its primary objective.
This brings grist to the mill of those who warn of the dangers of an AI that follows orders too blindly, possessing neither the moral sense nor the critical distance needed to question them. Thinking of the moments in history when blind obedience led to disaster, one would hope, at the very least, that this line of research digs deeper into the issue.
A note on the title: the term used in the English-language articles is "scheming", which is best rendered as plotting or intriguing.