How Google is tackling C2PA implementation for AI-generated content


Google is taking a step toward transparency. The firm is tackling AI-generated content and explaining how it plans to implement the standards of the Coalition for Content Provenance and Authenticity (C2PA), which brings together technology players behind an invisible marker that identifies the provenance of content.

As a reminder, this initiative, launched by Adobe, Arm, the BBC, Intel, Microsoft and Truepic in 2021, has produced several standards, including the common Content Credentials standard. Last February, Meta, OpenAI and Google announced that they were joining the C2PA. Today, the Mountain View firm is sharing the progress it has made in this area.

A second version of the Content Credentials standard is ready to use

Google has notably been working on a second version of the Content Credentials standard, described as "more secure against a wider range of tampering attacks due to stricter technical requirements to validate content provenance history". The aim is to ensure that the attached data cannot be altered or made misleading during an attack.

Initially, the standard proposes embedding an invisible marker in content in the form of metadata. For an image or video, this could include the author's name, the creation date, the credibility of the source, and also how the content was created (manually or with a tool based on artificial-intelligence techniques).
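To make this concrete, here is a minimal sketch of the kind of provenance assertions a Content Credentials manifest can carry. The field values ("ExampleCamera/1.0", the author name, the date) are hypothetical, and a real manifest is a cryptographically signed structure embedded in the file by a conforming tool, not plain JSON as shown here:

```python
import json

# Illustrative sketch only: real C2PA manifests are signed binary (JUMBF)
# structures produced by a conforming SDK, not hand-built JSON.
manifest = {
    "claim_generator": "ExampleCamera/1.0",        # hypothetical tool name
    "assertions": [
        {
            # author and creation date attached to the content
            "label": "stds.schema-org.CreativeWork",
            "data": {"author": "Jane Doe", "dateCreated": "2024-09-17"},
        },
        {
            # how the content was made: here, generated by a trained AI model
            "label": "c2pa.actions",
            "data": {"actions": [{"action": "c2pa.created",
                                  "digitalSourceType": "trainedAlgorithmicMedia"}]},
        },
    ],
}

print(json.dumps(manifest, indent=2))
```

Tools such as the "About this image" feature would read assertions of this kind to tell the user whether the content was captured by a camera or produced by an AI model.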

Integration into various Google services

Over the next few months, Google plans to integrate this V2 of Content Credentials into a few of its key products, including Search and Ads. Specifically, if an image contains C2PA metadata, users will be able to use the "About this image" feature to see whether it was created or edited with AI tools. The feature provides context about the images users see online and is available in Google Images and Circle to Search.

The Mountain View giant is also exploring ways to relay C2PA information to users on YouTube when content is captured with a camera. For example, if the data shows that an image was taken by a specific camera model, the C2PA trust list helps validate the accuracy of that information.
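The trust-list check described above could be sketched as follows. The device names are made up, and in the actual C2PA design the trust list holds signing certificates for conforming hardware and software, so validation means verifying a signature chain rather than the simple lookup shown here:

```python
# Hypothetical sketch of checking a provenance claim against a trust list.
# Real C2PA validation verifies the manifest's cryptographic signature against
# certificates on the trust list; this string lookup only illustrates the idea.
TRUSTED_SIGNERS = {"ExampleCam X100", "ExampleCam X200"}  # assumed device names


def is_claim_trusted(claimed_device: str) -> bool:
    """Return True if the device claimed in the metadata is on the trust list."""
    return claimed_device in TRUSTED_SIGNERS


print(is_claim_trusted("ExampleCam X100"))  # listed camera model -> True
print(is_claim_trusted("UnknownCam 9000"))  # unrecognized device -> False
```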

In parallel, Google plans to continue integrating SynthID (the digital watermarking technology created by the DeepMind teams) with its latest-generation AI tools for content creation, covering text, audio, image and video. The firm also says it is continuing to make progress on the subject as part of the voluntary, non-binding commitments made at the White House last year.

Yes to transparency, but only for others

By comparison, other companies that embarked on this path earlier have been reluctant to release their transparency tools. The latest example is OpenAI: the start-up has developed a digital watermarking technique for text generated by its GPT models. Reportedly very reliable and ready for about a year, it has not been made available because the company considers it too easy to circumvent.

Its existence was revealed on August 4 by the Wall Street Journal, then quickly confirmed by OpenAI. The company says its approach here is cautious and considered because the situation is "complex" and "its impact on the ecosystem would go beyond OpenAI". Self-regulation does not really seem to be on the agenda for the start-up, especially after it passed one million paying users.

