The hidden face of OpenAI, caught up in multiple controversies


Six months ago, when it formalized the abrupt and unexpected ousting of Sam Altman, reinstated to his post barely five days later, OpenAI's board of directors faulted the executive for not having been "always honest." His most fervent critics went further, accusing him of being "manipulative and deceitful," according to comments reported by The New Yorker, and of not doing enough, appearances aside, to understand and limit the risks tied to the exponential progress of artificial intelligence. History now seems to be proving them right. In recent weeks, several controversies have shed light on the hidden side of the maker of ChatGPT and its charismatic boss, now free of any dissenting voice at the top of the start-up.

Team disbanded – Among the departures is Ilya Sutskever, the chief scientist who embodied a more cautious approach to AI development. He resigned in mid-May, just days after the presentation of a new model, GPT-4o, with expanded capabilities. OpenAI then dismantled the "superalignment" team he co-led. Launched in the summer of 2023 to ensure the technology remains under control, the team had been promised access to 20% of the start-up's computing power, a promise that was not kept. "It was becoming harder and harder to conduct our research," says Jan Leike, the team's other co-lead, who has also resigned, adding that "safety has been relegated to the background in favor of new products."

"Non-disparagement" – At least three other safety staffers have left the company. Faced with this wave of departures, Sam Altman promised to do more on that front. It is a double discourse reminiscent of his stance on regulation. In his quest for a general AI, capable of learning and carrying out any task, OpenAI's boss no longer seems willing to be bothered by dissenting voices. That inclination was also illustrated by reporting from the site Vox, which revealed that the company had former employees sign "non-disparagement" clauses prohibiting any criticism, on pain of losing, or being unable to resell, the vested equity that formed part of their compensation. An unusual practice that Sam Altman claims to have only just discovered and immediately canceled.

Imitated voice? – Another controversy concerns the voice used in the presentation of GPT-4o. Actress Scarlett Johansson finds it "eerily similar" to her own. She is now threatening to take the start-up to court, noting that she had declined to lend her voice to the model. OpenAI denies trying to imitate her, saying it hired an actress several months before contacting the star. That argument is undermined by the one-word message "her" posted by Sam Altman, a reference to the film of the same name in which Scarlett Johansson plays an AI. The start-up's track record does not speak in its favor: it allegedly used two databases, since deleted, containing hundreds of thousands of books to train its GPT models, as well as YouTube videos for Sora, its video-generation tool.

Further reading:
– OpenAI abandons development of its latest language model
– OpenAI’s double discourse on the regulation of AI
