OpenAI announces new tools to detect AI-generated content

OpenAI has announced the development of new tools to detect content generated with artificial intelligence. To that end, the company led by Sam Altman has joined the steering committee of the C2PA (Coalition for Content Provenance and Authenticity) – a group created by Adobe, Microsoft, and the BBC to develop methods for certifying content distributed online – and has begun internal testing of a new detection classifier that helps people identify material created with its own generative AI tools, first and foremost DALL·E 3, Sora, and ChatGPT.

How the OpenAI system works to recognize AI content

By adopting the digital content certification standard – C2PA – which is already widely used by several technology companies, online media outlets, and camera manufacturers, OpenAI can contribute to the standard's development and use it to add C2PA metadata to all content generated with DALL·E 3 and ChatGPT. In the future, C2PA metadata will also mark content created with Sora, before the text-to-video model is launched at scale.
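
For readers who want to see what this metadata looks like in practice, here is a minimal sketch that inspects an image for C2PA provenance data. It assumes the open-source c2patool command-line utility (published alongside the C2PA standard by the Content Authenticity Initiative) is installed and on the PATH; the file name is hypothetical.

```python
import json
import subprocess

def read_c2pa_manifest(image_path: str):
    """Ask c2patool for the C2PA manifest embedded in a media file.

    c2patool prints the manifest store as JSON when given a file;
    a nonzero exit code here is treated as "no manifest found".
    """
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None
    return json.loads(result.stdout)

# Hypothetical file: an image exported from DALL·E 3 should carry
# C2PA metadata identifying the tool that generated it.
manifest = read_c2pa_manifest("dalle3_output.png")
if manifest is None:
    print("No C2PA metadata found (it may never have existed, or was stripped).")
else:
    print(json.dumps(manifest, indent=2))
```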

Credits: OpenAI.

To be clear, the C2PA standard cannot prevent attackers from stripping the metadata or generating content without it. It is also true, however, that this information is not easy to falsify or modify. On this point, OpenAI reported:

As adoption of the standard increases, this information can accompany content throughout its sharing, editing, and reuse lifecycle. Over time, we believe this type of metadata will become something people expect, filling a crucial gap in digital content authenticity practices.

In addition to adopting the standard, OpenAI is also developing new verification systems internally, including tamper-resistant watermarking. With such techniques it is possible, for example, to mark digital content (such as audio) with an invisible signal that is difficult to remove. Beyond that, the company is working on tools that use artificial intelligence itself to assess the likelihood that a piece of content was produced by generative models.
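
OpenAI has not published the details of its watermarking scheme, so the sketch below is only a toy illustration of the general idea: a low-amplitude pseudorandom signal, keyed by a secret seed, is added to the audio samples and later detected by correlation. The function names, the strength parameter, and the synthetic audio are all assumptions for illustration, not OpenAI's method.

```python
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.005) -> np.ndarray:
    """Add a low-amplitude pseudorandom signal (keyed by `key`) to the audio.
    Toy illustration only; production schemes are far more robust."""
    rng = np.random.default_rng(key)
    mark = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * mark

def detect_watermark(audio: np.ndarray, key: int) -> float:
    """Correlate the audio with the keyed signal; a score well above
    zero suggests the watermark is present."""
    rng = np.random.default_rng(key)
    mark = rng.choice([-1.0, 1.0], size=audio.shape)
    return float(np.dot(audio, mark) / len(audio))

# Demo with synthetic "audio": 1 second of noise at 16 kHz.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 0.1, 16000)
marked = embed_watermark(clean, key=42)

print(detect_watermark(marked, key=42))  # ~0.005 (watermark present)
print(detect_watermark(clean, key=42))   # ~0 (watermark absent)
```

Without the key, the mark is statistically indistinguishable from noise, which is what makes this family of techniques hard to remove deliberately.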

Internal tests of OpenAI's proprietary detection classifier showed high accuracy in distinguishing images generated with DALL·E 3 from “real” images: approximately 98% of DALL·E 3 images were correctly identified, with a “false positive” rate (i.e. “real” images wrongly flagged as artificial) below 0.5%. When it comes to images generated with third-party AI models, however, there is still a lot of work to be done: the classifier currently flags only 5-10% of them.
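
To make the two reported figures concrete, here is the arithmetic behind them on a hypothetical evaluation set; the counts are invented solely to reproduce the rates OpenAI cites.

```python
# Hypothetical evaluation counts chosen to mirror OpenAI's reported figures:
# ~98% of DALL·E 3 images correctly flagged, <0.5% of real images
# wrongly flagged as AI-generated.
dalle3_total, dalle3_flagged = 1000, 980  # AI images: correctly detected
real_total, real_flagged = 1000, 4        # real images: false positives

detection_rate = dalle3_flagged / dalle3_total   # 0.98  -> ~98%
false_positive_rate = real_flagged / real_total  # 0.004 -> below 0.5%

print(f"Detection rate on DALL·E 3 images: {detection_rate:.1%}")
print(f"False positive rate on real images: {false_positive_rate:.2%}")
```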

Credits: OpenAI.

Why it’s important to have tools to detect AI content

One might ask why tools that detect AI-generated content are necessary at all. There are essentially two reasons: the ever-growing volume of content produced by artificial intelligence models, and the difficulty of distinguishing it from images, videos, and texts created by “real” intelligence, that is, human intelligence.

OpenAI itself, in its official press release, reports:

People around the world are adopting generative AI to create and edit images, videos, and audio in ways that improve creativity, productivity, and learning. As generated audiovisual content becomes more common, we believe it will be increasingly important for society as a whole to adopt new technologies and standards that help people understand the tools used to create the content they find online.
