New adventure for Ilya Sutskever with the launch of “Safe Superintelligence Inc”

It was on X that Ilya Sutskever, one of the co-founders of OpenAI, announced the launch of “Safe Superintelligence Inc”, just as he had announced his departure from OpenAI a month earlier. At the time he wrote that he was “very excited about what comes next – a project which is very meaningful to me and which I will share details of in due course.” That project turns out to be a start-up dedicated to developing safe AGI, which he has now founded with Daniel Gross and Daniel Levy.

Ilya Sutskever served as chief scientist at OpenAI and co-led, with Jan Leike, the superalignment team the start-up established in July 2023 to minimize the potential risks of AI surpassing human intelligence.

Both believe that artificial general intelligence could be developed by the end of this decade, but they warned:

“Superintelligence will be the most impactful technology humanity has ever invented and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous and could lead to humanity’s loss of power or even human extinction.”

Ilya Sutskever was among the four board members who ousted Sam Altman as CEO last November. Although both men were diplomatic in their public comments about Ilya Sutskever’s resignation, it was clear they no longer shared the same vision for the future of OpenAI. Jan Leike, who left the start-up at the same time, was more direct:

“Building machines smarter than humans is an inherently dangerous business… But in recent years, safety culture and processes have taken a back seat to shiny products.”

OpenAI also dissolved the superalignment team after the departure of its two co-leads.

Developing safe artificial general intelligence

OpenAI, which aims to develop an AI capable of adapting and excelling across a wide range of fields in a manner similar to, or even superior to, human intelligence (in other words, an AGI), now finds itself facing a serious competitor: Safe Superintelligence Inc, or SSI.

The three co-founders state in their announcement on X:

“Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We have launched the world’s first SSI laboratory, with one goal and one product: safe superintelligence. It’s called Safe Superintelligence Inc. SSI is our mission, our name and our entire product roadmap, because it is our sole purpose. Our team, our investors and our business model are all aligned to achieve SSI.”

Adding:

“We plan to advance capabilities as quickly as possible while ensuring our safety always remains a priority.”

The American start-up SSI has offices in Palo Alto and Tel Aviv, “where we have deep roots and the ability to recruit the best technical talent”, the founding team specifies. Alongside Ilya Sutskever, it comprises Daniel Levy, an AI researcher and engineer who also worked at OpenAI, and Daniel Gross, an entrepreneur and investor of Israeli origin, co-founder of Cue (a company acquired by Apple, where he then worked on AI-related projects) and a former partner at the start-up accelerator Y Combinator.

SSI is recruiting:

“We are assembling a small, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.”
