Autonomous AI represents an existential threat – and it will quickly reach that stage

Former Google CEO Eric Schmidt says autonomous artificial intelligence (AI) is coming and could pose an existential threat to humanity.

“We will soon be able to have computers operating on their own, deciding what they want to do,” said Schmidt, who has long warned about the dangers of AI alongside its benefits for humanity, during a December 15 appearance on ABC’s “This Week”.

“This is a dangerous point: when the system can improve itself, we must seriously consider unplugging it,” Eric Schmidt continued.

Mr. Schmidt is far from the first tech executive to raise these questions.

The rise of consumer AI products like ChatGPT has been unprecedented over the past two years, with major improvements to the underlying language models. Other AI models have become increasingly adept at creating visual artwork, photographs, and full-length videos that, in many cases, are virtually indistinguishable from reality.

For some, this technology is reminiscent of the Terminator series, which depicts a dystopian future where AI takes control of the planet, leading to apocalyptic consequences.

Despite all the fears over ChatGPT and other similar platforms, the consumer AI services available today still fall into a category that experts would call “dumb AI.” These AIs are trained from a massive set of data, but have no awareness, sentience, or ability to behave autonomously.

Mr. Schmidt and other experts are not particularly worried about these systems.

Instead, they are concerned about more advanced AI, known in the tech world as “artificial general intelligence” (AGI): much more complex AI systems that could have sentience and, by extension, develop conscious motives independent of human interests and potentially dangerous to them.

According to Schmidt, no such system exists today. However, we are rapidly moving toward a new type of intermediate AI: one lacking the sentience that would define an AGI, but capable of acting autonomously in areas such as research and weaponry.

“I have been working in this field for 50 years. I have never seen innovation on this scale,” Eric Schmidt said of the rapid evolution of the complexity of AI.

For Mr. Schmidt, more advanced AI would bring many benefits to humanity, but could also have equally harmful effects, for example in the areas of weapons and cyberattacks.

The challenge

According to Eric Schmidt, the challenge is multifaceted.

Basically, he repeated a sentiment common to technology leaders: if AGI-style autonomous systems are inevitable, massive cooperation between companies and governments on an international scale will be essential to avoid potentially devastating consequences.

It’s easier said than done. AI offers U.S. competitors, such as China, Russia, and Iran, a potential edge over the United States that would be difficult to obtain otherwise.

Within the technology industry too, there is currently massive competition between large companies – Google, Microsoft and others – to outperform their rivals, a situation that carries an inherent risk of inadequate safety protocols to deal with a malicious AI, said Eric Schmidt.

“The competition is so fierce that there is a fear that one of the companies will decide to skip the safety measures and put a truly harmful product on the market,” Schmidt said. This damage would only become evident after the fact, he added.

The challenge is even greater on the international stage, where adversary countries are likely to view the new technology as revolutionary for their efforts to challenge U.S. global hegemony and expand their own influence.

“The Chinese are clever and they understand the power of a new type of intelligence for their industrial power, their military power and their surveillance system,” noted Mr. Schmidt.

It’s something of a dilemma for American leaders in this area, who find themselves forced to balance existential concerns for humanity against the risk that the United States will fall behind its adversaries, which could prove catastrophic for global stability.

In the worst case, these systems could be used to produce crippling biological and nuclear weapons, including by terrorist groups like ISIS.

This is why, according to Mr. Schmidt, it is absolutely crucial that the United States continue to innovate in this area and, ultimately, maintain its technological dominance over China and other adversarial states and groups.

Industry leaders demand regulation

Regulation in this area remains insufficient, Mr. Schmidt stressed. But he expects government attention to improving safeguards around the technology to accelerate significantly in the coming years.

Asked by presenter George Stephanopoulos whether governments were doing enough to regulate the sector, Eric Schmidt replied: “Not yet, but they will, because they have to.”

Despite initial interest in the area during the current 118th Congress, including hearings, legislative proposals and other initiatives, this session appears poised to end without any major AI-related legislation.

President-elect Donald Trump, for his part, has warned of the considerable risks posed by AI, calling it a “truly powerful” tool during an appearance on Logan Paul’s “Impaulsive” podcast.

He also spoke of the need to maintain competitiveness against adversaries.

“Difficulties arise from this, but we must be at the forefront,” Donald Trump said. “It’s going to happen, and if it’s going to happen, we have to get the edge on China. China is the main threat.”

Mr. Schmidt’s views on the benefits and challenges of this technology align with reactions from others in the industry.

In June 2024, OpenAI and Google employees signed a letter warning of the “serious risks” posed by AI and calling for greater government oversight of the field.

Elon Musk has issued similar warnings, saying Google is seeking to create a “digital God” through its DeepMind AI program.

In August, these concerns intensified after the discovery of an AI capable of acting autonomously to avoid being decommissioned, sparking fears that humanity was already losing control of its creation due to government inaction.
