YouTube recently alerted creators that it is rolling out an expansion of its AI deepfake detection capabilities, with a special focus on political figures, government officials, and journalists. The initiative aims to strengthen the quality of public discourse at a time when mistrust is fueling the spread of misinformation and other forms of manufactured media — a climate that, according to Amjad Hanif, YouTube’s Vice President of Creator Products, makes the new tool more important than ever.
As part of the new system, eligible pilot testers must verify their identity by uploading a selfie alongside a government-issued ID, a review process meant to confirm that the people using the technology are legitimate creators or public figures. Hanif also explained the rationale behind how labels for detected deepfakes are applied, stating, “There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself.”
This undertaking is a direct response to widespread anxieties around AI-generated material, as the technology’s rapid development has raised alarms over its misuse and uneven application. “This expansion is really about the integrity of the public conversation,” said Leslie Miller, YouTube’s Vice President of Government Affairs and Public Policy.
YouTube’s actions are part of a larger trend of technology platforms responding to the threats posed by deepfakes and disinformation campaigns. As scrutiny of how online content can warp public perception intensifies, platforms such as YouTube face growing pressure to implement meaningful safeguards against these practices.
Hanif also detailed the platform’s new policy on removal requests linked to identified deepfakes. While users may request that certain content be taken down, he noted, “the volume of actually removal requests is really, really low because most of it turns out to be fairly benign or additive to their overall business.” The observation highlights the challenges of content moderation across a sprawling creator ecosystem.
Sarah Perez, a veteran technology reporter for TechCrunch, has been covering this beat since August 2011 and has watched many such trends come and go. Prior to TechCrunch, she spent more than three years at ReadWriteWeb, and her experience spans industries including banking, retail, and software. She can be reached by email at sarahp@techcrunch.com or via encrypted message on Signal at sarahperez.01.
The ramifications of YouTube’s enhanced deepfake detection are wide-ranging. With fake news and disinformation flooding the web daily, it is more important than ever for platforms to proactively protect their users and foster healthy democratic discourse. How creators choose to engage with these new systems will be critical to their success and effectiveness.

