OpenAI, between new complaints and a licensing agreement with the Financial Times

While the week began with the announcement of a licensing agreement with the Financial Times allowing it to use the newspaper's content, OpenAI was once again the target of new complaints. On Monday, the Austrian association NOYB (None Of Your Business) requested an investigation into its processing of personal data and measures to ensure compliance with the GDPR. The next day, eight American newspapers belonging to the same group filed a complaint for copyright infringement against the start-up and Microsoft.

2023 was a year of legal battles for OpenAI, and it ended with a new complaint: that of the New York Times. The newspaper accused the company of violating copyright by training models such as GPT-4 on its publications without permission, causing it to lose billions of dollars.

OpenAI denied wanting to rob publishers and creators, saying it was ready to collaborate with them “so that they benefit from advanced AI technology and a new revenue model”.

The start-up has sought to reach out to various media outlets so that the answers provided by the paid versions of ChatGPT are current and produced legally. It thus signed an agreement with the German press group Axel Springer after concluding a partnership with the Associated Press in July 2023. More recently, last March, it signed its first agreement with the French daily Le Monde, alongside a partnership with the Spanish press group Prisa Media.

Collaboration with the Financial Times

It was the Financial Times itself that announced a licensing agreement with OpenAI and a strategic partnership to develop new AI products and features for its readers.

It also states that it has acquired access to ChatGPT Enterprise for all its employees so that “its teams know the technology well and can benefit from the creativity and productivity gains made possible by OpenAI tools”.

The article quotes John Ridding, CEO of FT Group:

“This is an important agreement in several respects. It recognizes the value of our award-winning journalism and will give us early insight into how content is surfaced by AI. We have long been a leader in media innovation, pioneering the subscription model and engagement technologies, and this partnership will help us stay at the forefront of developments in how people access and use information”.

The American newspapers’ complaint for copyright infringement

While some daily newspapers have said they are ready to negotiate, others, such as the BBC, CNN and Reuters, refuse to let OpenAI crawl their content and collect their data.

So do the New York Daily News, the Chicago Tribune, the Orlando Sentinel, the Sun Sentinel of Florida, the San Jose Mercury News, the Denver Post, the Orange County Register and the St. Paul Pioneer Press, eight newspapers owned by Alden Global Capital. They accuse OpenAI of illegally copying millions of their articles to train its models without financial compensation, and Microsoft of doing the same for Copilot.

These ongoing media disputes highlight the complex challenges the AI industry faces in balancing technological innovation, copyright protection, and fair compensation for content creators.

NOYB’s complaint

A non-profit organization based in Vienna, Austria, Noyb works to uphold the rights to privacy and the protection of personal data. When a public figure was unable to have OpenAI correct her incorrect date of birth, Noyb filed a complaint on her behalf with the Austrian data protection authority, asking it to investigate OpenAI's processing of personal data and to impose a fine to ensure future GDPR compliance.

The GDPR requires that information about individuals be accurate and that they have full access to the stored information and its source, something OpenAI itself acknowledges it cannot do. The start-up has, however, reportedly declared that “factual accuracy in large language models remains an active area of research.”

Maartje de Graaf, data protection lawyer at Noyb, comments:

“Inventing false information is already problematic in itself. But when it comes to false information about people, the consequences can be serious. It is clear that businesses are currently unable to ensure that chatbots such as ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about people. Technology must follow legal requirements, not the other way around”.
