
“The world is not ready…” for the dangers of AI, warns a former OpenAI employee

After the departure of its co-founder, OpenAI has just lost its senior advisor in charge of overseeing "superintelligence" readiness. The company no longer seems to set itself any limits.

Are we gradually drifting toward a scenario once reserved for science fiction films, one in which artificial intelligence becomes truly powerful and no longer faces any limits? Hard as it may be to believe, some experts in the field see it coming. According to The Verge, OpenAI, the creator of ChatGPT, dissolved its team responsible for monitoring the risks of overly powerful AI a few days ago. With that guardrail gone, anything goes.

A change of direction at OpenAI, and a cascade of departures

Though far less well known than it is today, OpenAI revealed itself to the world in 2022 by launching ChatGPT. The public discovered and admired a language model capable of simulating a real conversation, much like a human. As the months passed, the company rolled out several products and its valuation skyrocketed. Among its founders, the idea of capitalizing fully on this enormous enthusiasm for AI began to take hold. But while some saw only the money, others still cared about ethics. With that in mind, OpenAI formed a team in July 2023 dedicated to studying the risks of artificial intelligence.

As it launched ever more widely used products, a change of direction began to take shape within the company, which had set itself the mission of developing and promoting general artificial intelligence that is "safe and beneficial to all humanity". Until now a non-profit organization, OpenAI no longer wants to constrain itself, as the competition is becoming fierce. This shift led to the departure, a few months ago, of Ilya Sutskever, its co-founder, and Jan Leike, head of Superalignment. The latter gave the following reason: "Safety culture and processes have taken a back seat to shiny products."


When OpenAI sacrifices security on the altar of innovation

On October 23, 2024, Miles Brundage, the senior advisor in charge of overseeing "superintelligence" readiness, announced his departure from the organization. The reason: OpenAI no longer wants to step back and weigh the dangers AI could pose. That is notably why it chose to dissolve AGI Readiness, the team that advised it on the risks of overly powerful artificial intelligence.

For Miles Brundage, "AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so." That is why he wants to join or create a non-profit organization devoted to AI policy research. In his view, independent voices, free of conflicts of interest, could encourage reflection on the subject. Because between technological progress and social responsibility, the line is thin. It is up to the sector's players to navigate between these two poles with integrity, rather than giving in to the sirens of blind progress.
