Three Exploratory Ways to Label AI-Generated Content

Deepfakes have become a growing concern, given their potential to spread misinformation and wreak havoc online.

Various organisations and individuals are working to develop tools and techniques to identify and label AI-generated content.

This article explores three approaches to labelling AI-generated content, offering a much-needed transparency toolkit. 

 

How AI-Generated Content Can Be Labelled

C2PA, the Industry Consortium

The Coalition for Content Provenance and Authenticity (C2PA) is a project of the Joint Development Foundation, a Washington-based 501(c)(6) non-profit, that brings together the efforts of the Content Authenticity Initiative (CAI) and Project Origin.

Founded in late 2019 by Adobe in collaboration with the New York Times and Twitter (now X), the CAI is building a system to provide provenance and history for digital media.

Creators can use this system to claim authorship, while consumers are empowered to make informed decisions about what to trust.

Project Origin, founded in 2019 by the BBC, CBC-Radio Canada, Microsoft and the New York Times, focuses on tackling disinformation in digital news by defining an end-to-end process for publishing, distribution and attaching signals to content to demonstrate its integrity.

C2PA was founded in February 2021 by Microsoft and Adobe in collaboration with Arm, the BBC, Intel and Truepic.

By combining the two initiatives, C2PA is working to develop and promote an open, royalty-free technical standard for content provenance.

The standard allows creators to embed tamper-evident metadata in their content, recording its origin, history and modifications. Platforms and consumers can then use this metadata to verify the authenticity of content and spot potential manipulation.
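
A minimal sketch of what such a manifest might contain appears below. The assertion labels (c2pa.actions, stds.schema-org.CreativeWork) follow the naming used in the public C2PA specification, but the structure is a simplified illustration rather than output from the official SDK; real manifests are cryptographically signed and bound into the asset itself, and all names here (build_manifest, ExampleCam) are hypothetical.

```python
import hashlib
import json

# Illustrative sketch of a C2PA-style provenance manifest. Assumption:
# a simplified dict loosely following the public specification; the real
# standard signs the claim and embeds it in the asset file.
def build_manifest(asset_bytes: bytes, author: str, tool: str) -> dict:
    return {
        "claim_generator": tool,
        "assertions": [
            {
                "label": "c2pa.actions",
                "data": {"actions": [{"action": "c2pa.created"}]},
            },
            {
                "label": "stds.schema-org.CreativeWork",
                "data": {"author": [{"@type": "Person", "name": author}]},
            },
        ],
        # The hash binds the claim to this exact asset: any later edit
        # breaks the binding unless a new, signed manifest is appended.
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }

manifest = build_manifest(b"<image bytes>", "A. Photographer", "ExampleCam/1.0")
print(json.dumps(manifest, indent=2))
```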

Its Steering Committee includes the founding members alongside Google, OpenAI, Sony and Publicis.

Other members range from social media (TikTok) and news media (The New York Times, Financial Times, CBC-Radio Canada, France TV) to content creation and technology companies (Shutterstock, Getty Images) and NGOs (Witness), among others.



Google, Meta, OpenAI: Big Tech’s Solo Initiatives

Several technology companies are working on their own technical methods to label AI-generated or synthetic content on their platforms.

Google’s SynthID is probably the most publicised of these tools. Developed by Google DeepMind, it uses deep learning models to embed and detect watermarks in AI-generated content.

SynthID can currently identify AI-generated images, audio clips, text and videos, though it only works with content produced by Google’s own models.
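
Google has not published SynthID’s full internals, but statistical text watermarks of this general family work by nudging the model’s token choices towards a keyed, pseudorandom “green” subset of the vocabulary; a detector that knows the key then checks whether suspiciously many tokens fall in that subset. The sketch below illustrates that generic detection idea, not SynthID itself, and the secret key is hypothetical.

```python
import hashlib

SECRET_KEY = b"demo-key"  # assumption: illustrative key, not SynthID's

def is_green(prev_token: str, token: str) -> bool:
    # Pseudorandomly partition the vocabulary per context using the key;
    # a watermarking generator would softly prefer "green" tokens.
    digest = hashlib.sha256(SECRET_KEY + prev_token.encode() + token.encode())
    return digest.digest()[0] % 2 == 0  # roughly half the tokens are green

def green_fraction(tokens: list[str]) -> float:
    # Detection: count how many consecutive token pairs land in the
    # green set. Unwatermarked text hovers near 0.5; watermarked
    # generation pushes this high enough to flag statistically.
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

sample = "the quick brown fox jumps over the lazy dog".split()
print(f"green fraction: {green_fraction(sample):.2f}")
```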

Meta is also pursuing several initiatives to watermark and label AI-generated content across its platforms. These include Stable Signature, a method for watermarking images created by open-source generative AI models, and AudioSeal, a technique for embedding localised watermarks in AI-generated speech so it can be detected later.
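
Both systems rely on learned, imperceptible watermarks: Stable Signature fine-tunes the image generator’s decoder so every output carries a hidden signature, while AudioSeal trains paired embedder and detector networks. As a far simpler illustration of the embed-and-extract idea, the sketch below hides bits in pixel least-significant bits; this is a classroom technique, not Meta’s method, which must survive compression and editing.

```python
import numpy as np

# Toy illustration of invisible watermarking: hide bits in pixel LSBs.
# Assumption: a deliberately naive example; Stable Signature and AudioSeal
# instead use trained neural encoders/decoders so the mark is robust.
def embed(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    flat = pixels.flatten().copy()
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # overwrite least significant bit
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bits: int) -> list[int]:
    return [int(v & 1) for v in pixels.flatten()[:n_bits]]

image = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
marked = embed(image, [1, 0, 1, 1])
print(extract(marked, 4))  # -> [1, 0, 1, 1]
```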

Nick Clegg, Meta’s president of global affairs, also announced in February 2024 that the company would introduce a cross-platform labelling system for AI-generated images ahead of the US election.

OpenAI has reportedly developed its own AI watermarking system, but the company’s leadership has decided against releasing it for the time being.

A primary concern is reportedly that the tool might drive ChatGPT users away from the product.

Digimarc, Truepic, Sensity AI: Startups’ Initiatives

As concerns over misinformation rise, several startups have dedicated themselves to developing AI watermarking and detection tools.

Digimarc is a long-standing provider of digital identifiers, such as watermarks, for industrial use cases.

In a 2020 white paper, the company outlined a proposed system for mitigating the problem of deepfake videos with digital watermarks.

Truepic was founded in 2015 to tackle the growing problem of online misinformation and deepfakes by verifying the integrity of digital content.

Sensity AI, a startup founded in 2018, offers various services, including ID document verification, face recognition, liveness detection, fraudulent document detection and deepfake detection.

Its Deepfake Detection Platform uses machine learning to detect deepfakes in videos and images.
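
Sensity has not disclosed its model architecture, but the general approach is to train a classifier on labelled real and synthetic media and score each frame or image. The sketch below defines a toy frame-level classifier in PyTorch as a rough illustration; a production detector would be far deeper and trained on large curated corpora, and every name here is hypothetical.

```python
import torch
import torch.nn as nn

# Minimal sketch of a frame-level deepfake classifier. Assumption: a toy
# CNN for illustration only, not Sensity's system.
class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one feature vector per frame
        )
        self.classifier = nn.Linear(32, 1)  # single logit: real vs. fake

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        x = self.features(frames).flatten(1)
        return torch.sigmoid(self.classifier(x))  # P(fake) per frame

model = DeepfakeDetector()
batch = torch.rand(4, 3, 224, 224)  # four video frames
print(model(batch).squeeze(1))      # per-frame fake probabilities
```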

Truepic and Digimarc are both members of C2PA.

Read more: How Hackers are Targeting Large Language Models


Conclusion

As deepfakes continue to pose a significant threat to online integrity, the development of effective labelling tools is vital.

These three approaches can go some way towards fostering a more transparent and trustworthy digital landscape.

These tools help individuals and organisations distinguish genuine content from AI-generated creations, thereby mitigating the potential harm caused by deepfake misinformation.

