
Cryptography may offer a solution to the massive AI-labeling problem 

An internet protocol called C2PA adds a “nutrition label” to images, video, and audio.

[Image: A robotic hand holding a blank name tag sticker. Sarah Rogers/MITTR | Getty Images]

The White House wants big AI companies to disclose when content has been created using artificial intelligence, and very soon the EU will require some tech platforms to label their AI-generated images, audio, and video with “prominent markings” disclosing their synthetic origins. 

There’s a big problem, though: identifying material that was created by artificial intelligence is a massive technical challenge. The best options currently available—detection tools powered by AI, and watermarking—are inconsistent, impermanent, and sometimes inaccurate. (In fact, just this week OpenAI shuttered its own AI-detecting tool because of high error rates.)

But another approach has been attracting attention lately: C2PA. Launched two years ago, it’s an open-source internet protocol that relies on cryptography to encode details about the origins of a piece of content, or what technologists refer to as “provenance” information. 

The developers of C2PA often compare the protocol to a nutrition label, but one that says where content came from and who—or what—created it. 

The project, part of the nonprofit Joint Development Foundation, was started by Adobe, Arm, Intel, Microsoft, and Truepic, which formed the Coalition for Content Provenance and Authenticity (from which C2PA gets its name). Over 1,500 companies are now involved in the project through the closely affiliated open-source community, the Content Authenticity Initiative (CAI), including ones as varied and prominent as Nikon, the BBC, and Sony.

Recently, as interest in AI detection and regulation has intensified, the project has been gaining steam; Andrew Jenks, the chair of C2PA, says that membership has increased 56% in the past six months. The major media platform Shutterstock has joined as a member and announced its intention to use the protocol to label all its AI-generated content, including images made with its DALL-E-powered AI image generator.

Sejal Amin, chief technology officer at Shutterstock, told MIT Technology Review in an email that the company is protecting artists and users by “supporting the development of systems and infrastructure that create greater transparency to easily identify what is an artist’s creation versus AI-generated or modified art.”

What is C2PA and how is it being used?

Microsoft, Intel, Adobe, and other major tech companies started working on C2PA in February 2021, hoping to create a universal internet protocol that would allow content creators to opt in to labeling their visual and audio content with information about where it came from. (At least for the moment, this does not apply to text-based posts.) 

Crucially, the project is designed to be adaptable and functional across the internet, and the base computer code is accessible and free to anyone. 

Truepic, which sells content verification products, teamed up with Revel.ai to demonstrate how the protocol works on a deepfake video. When a viewer hovers over a little icon at the top right corner of the screen, a box of information about the video appears that includes the disclosure that it “contains AI-generated content.”

Adobe has already integrated C2PA, which it calls Content Credentials, into several of its products, including Photoshop and Adobe Firefly. “We think it’s a value-add that may attract more customers to Adobe tools,” says Andy Parsons, senior director of the Content Authenticity Initiative at Adobe and a leader of the C2PA project.

C2PA is secured through cryptography, using keys and digital signatures both to protect provenance information from tampering and to record where it came from. More specifically, it works by encoding provenance information through a set of hashes that cryptographically bind to each pixel, says Jenks, who also leads Microsoft’s work on C2PA.
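To make the mechanism concrete, here is a minimal sketch in Python of the signed-provenance idea: hash the asset’s exact bytes, wrap the hash and some claims in a manifest, and sign it so later tampering is detectable. The field names, the example claims, and the choice of an Ed25519 key are illustrative assumptions, not the actual C2PA manifest format, which is considerably richer.

```python
# Minimal sketch of signed provenance, NOT the real C2PA manifest format.
# Field names and the Ed25519 key choice are illustrative assumptions.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_manifest(asset_bytes: bytes, claims: dict, key: Ed25519PrivateKey) -> dict:
    """Bind provenance claims to the exact bytes of an asset and sign them."""
    payload = json.dumps(
        {"asset_sha256": hashlib.sha256(asset_bytes).hexdigest(), "claims": claims},
        sort_keys=True,
    )
    return {"payload": payload, "signature": key.sign(payload.encode()).hex()}


def verify_manifest(asset_bytes: bytes, manifest: dict, public_key) -> bool:
    """Check the signature, then check that the asset bytes are unchanged."""
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]),
                          manifest["payload"].encode())
    except InvalidSignature:
        return False  # the manifest itself was altered
    payload = json.loads(manifest["payload"])
    return payload["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()


key = Ed25519PrivateKey.generate()
image = b"...image bytes..."
manifest = make_manifest(image, {"generator": "hypothetical-ai-model"}, key)
print(verify_manifest(image, manifest, key.public_key()))            # True
print(verify_manifest(image + b"edit", manifest, key.public_key()))  # False
```

Because the signature covers a hash of the exact bytes, any edit to the file invalidates the record; in the real protocol, edits are meant to append fresh signed claims rather than silently break the label.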

C2PA offers some critical benefits over AI detection systems, which use AI to spot AI-generated content; generative models can, in turn, learn to evade that detection. It’s also a more standardized and, in some instances, more easily viewable system than watermarking, the other prominent technique used to identify AI-generated content. The protocol can work alongside watermarking and AI detection tools as well, says Jenks.

The value of provenance information 

Adding provenance information to media to combat misinformation is not a new idea, and early research seems to show that it could be promising: one project from a master’s student at the University of Oxford, for example, found evidence that users were less susceptible to misinformation when they had access to provenance information about content. Indeed, in OpenAI’s update about its AI detection tool, the company said it was focusing on other “provenance techniques” to meet disclosure requirements.

That said, provenance information is far from a fix-all solution. C2PA is not legally binding, and without required internet-wide adoption of the standard, unlabeled AI-generated content will exist, says Siwei Lyu, a director of the Center for Information Integrity and professor at the University at Buffalo in New York. “The lack of over-board binding power makes intrinsic loopholes in this effort,” he says, though he emphasizes that the project is nevertheless important.

What’s more, since C2PA relies on creators to opt in, the protocol doesn’t really address the problem of bad actors using AI-generated content. And it’s not yet clear just how much providing metadata will improve the media literacy of the public. Provenance labels do not necessarily indicate whether the content is true or accurate.

Ultimately, the coalition’s most significant challenge may be encouraging widespread adoption across the internet ecosystem, especially by social media platforms. The protocol is designed so that a photo, for example, would have provenance information encoded from the time a camera captured it to when it found its way onto social media. But if the social media platform doesn’t use the protocol, it won’t display the photo’s provenance data.
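To illustrate that chain-of-custody idea, here is a rough sketch in which each step records a hash of the current asset and of the previous entry, making the history from camera to platform tamper-evident. The structure and field names are hypothetical, not the actual C2PA manifest chain.

```python
# Hypothetical provenance chain from capture to upload; each entry hashes
# the previous one, so rewriting history breaks the chain.
import hashlib
import json


def append_step(chain: list, asset_bytes: bytes, action: str) -> list:
    """Append a provenance entry that links back to the previous one."""
    prev = (hashlib.sha256(json.dumps(chain[-1], sort_keys=True).encode()).hexdigest()
            if chain else None)
    return chain + [{
        "action": action,                                        # e.g. "captured"
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "prev_entry_sha256": prev,                               # chains the history
    }]


photo = b"raw sensor bytes"
chain = append_step([], photo, "captured by camera")
edited = photo + b" + color correction"
chain = append_step(chain, edited, "edited in desktop app")
chain = append_step(chain, edited, "uploaded to platform")
for entry in chain:
    print(entry["action"], "->", entry["asset_sha256"][:8])
```

If a platform along the way strips or ignores these entries, viewers simply see nothing, which is exactly the adoption problem the coalition faces.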

The major social media platforms have not yet adopted C2PA. Twitter had signed on to the project but dropped out after Elon Musk took over. (Twitter also stopped participating in other volunteer-based projects focused on curbing misinformation.)  

C2PA “[is] not a panacea, it doesn’t solve all of our misinformation problems, but it does put a foundation in place for a shared objective reality,” says Parsons. “Just like the nutrition label metaphor, you don’t have to look at the nutrition label before you buy the sugary cereal.

“And you don’t have to know where something came from before you share it on Meta, but you can. We think the ability to do that is critical given the astonishing abilities of generative media.”

This piece has been updated to clarify the relationship between C2PA and CAI.
