New Standards for Digital Content Authenticity Emerge Amid AI Challenges

The Coalition for Content Provenance and Authenticity (C2PA) is drawing considerable attention. The initiative sets technical standards designed to verify the trustworthiness and authenticity of digital content, and it could hardly be more timely: platforms across the industry are grappling with AI's corrosive effect on creative IP. Notably, Elon Musk's X recently announced a feature to label edited images as "manipulated media," a move that aligns with C2PA's guidelines on content integrity.

C2PA's policy addresses numerous forms of content manipulation, including "selected editing or cropping, slowing down or overdubbing, or manipulation of subtitles." This holistic approach aims to build a robust framework that promotes transparency and authenticity across the digital media landscape. Yoel Roth, former head of safety and integrity at Twitter, referenced C2PA's policy in 2020, underscoring the growing need for standards in a rapidly evolving digital environment.
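In practice, C2PA provenance travels with a file as a cryptographically signed manifest embedded in the asset's metadata, recording who created the file and what edits were applied. As a minimal sketch of how a platform might surface that data, the Python snippet below reads a manifest using the open-source c2pa-python bindings; the Reader API shown follows the project's published examples, and the file path is a hypothetical placeholder.

```python
# Minimal sketch: inspect a C2PA manifest embedded in a media file.
# Assumes the open-source c2pa-python bindings ("pip install c2pa-python");
# the Reader API below follows the project's published examples.
import json

from c2pa import Reader


def describe_provenance(path: str) -> None:
    """Print the signed provenance claims, if any, attached to a media file."""
    try:
        reader = Reader.from_file(path)  # parses the embedded manifest store
    except Exception as err:  # no manifest present, or the file can't be read
        print(f"No C2PA manifest found: {err}")
        return

    manifest_store = json.loads(reader.json())
    active_label = manifest_store.get("active_manifest")
    manifest = manifest_store["manifests"][active_label]

    # The claim generator identifies the tool that signed the manifest.
    print("Claim generator:", manifest.get("claim_generator"))
    for assertion in manifest.get("assertions", []):
        # Assertions record actions such as cropping or overdubbing.
        print("Assertion:", assertion.get("label"))


describe_provenance("example.jpg")  # hypothetical file path
```

A labeling pipeline of the kind X or Meta describes could key its UI off exactly this data: if the manifest's assertions record edits, the asset gets flagged; if no valid signed manifest is present, provenance is simply unknown.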

As part of its new program, X will start tagging edited images. Here's the catch: while Musk made the announcement, he did not detail what processes X will use to detect AI-generated content. The move comes as other big tech companies, like Meta, roll out their own AI labeling systems, and the race to label AI content is clearly on. In 2024, Meta introduced a new "AI info" label, replacing the previous "Created with AI" language to make it clearer when AI is used in an image.

AI content detection systems are both proprietary and inherently imperfect, often generating false positives. Meta's struggles underscore how difficult it is to reliably detect AI-generated material, as many other tech companies have discovered in recent months. The company has previously come under fire for its labeling practices, and in response to stakeholder feedback it has taken significant steps to refine the program.

In addition to image labeling, the music streaming industry is taking steps to combat fraud associated with AI-generated content. Deezer plans to tag AI-generated music to address concerns about streaming manipulation, while Spotify, for its part, has clarified its policies for identifying tracks made with AI. These measures aim to deter spam and misinformation, and to make clear what content people are consuming across the digital landscape.

Sarah, a veteran reporter for TechCrunch since August 2011, has had a front-row seat as these changes unfolded. Before launching her career in journalism, she spent years in information technology in the banking, retail, and software industries. Her experience as a musician and activist gives her a valuable perspective on the intersection of technology and media.

Before coming to TechCrunch, Sarah spent over three years as ReadWriteWeb's Sustainability Editor, where she honed her craft writing about emerging technology trends and their effects on society. She has covered the shift in digital content standards and warns of the crucial need to preserve authenticity in an age increasingly permeated by AI.