YouTube just released a promising new beta tool that could be another game-changer for the integrity of public conversation. The release expands the platform’s AI deepfake detection capabilities to cover politicians, government officials, and journalists, ensuring that their likenesses are protected against unauthorized use in manipulated content. The expansion is a significant step that shows YouTube’s commitment to maintaining a credible platform as misinformation continues to spread, often faster than the truth.
And that’s a big deal, according to Amjad Hanif, YouTube’s Vice President of Creator Products, who emphasized the significance of the step in a recent announcement. “There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself,” Hanif said. His remark highlights the need to help audiences distinguish what is real from harmful deepfakes that may misinform them.
To access the pilot program, users must first complete an identity verification process: a selfie check-in, for which they upload a selfie paired with an official form of ID. This step helps ensure that only authorized users gain access to the new tool, shielding public figures from being manipulated and exploited.
When I spoke with YouTube’s Vice President of Government Affairs and Public Policy, Leslie Miller, she stressed these broader aims and the implications the new tool could have. “This expansion is truly just about the integrity of the public conversation,” she said. By doing its part to build a safer online ecosystem, YouTube helps society debate the issues of the day in a robust fashion without being hoodwinked.
Despite this proactive approach to heading off misinformation, Miller explained that not every match found would necessarily be removed on request. Rather than automatic removal, she said, a process of careful evaluation that weighs context will determine which instances come down. The framework leaves room to balance protection against manipulation with the preservation of protected speech.
Hanif went into more detail about why creators should pay attention to how AI-generated content is used. “I think more creators are starting to understand what is getting created, but the reality is, the number of removal requests is extremely low, because most of this content is pretty harmless or even contributes positively to their larger enterprise,” he said. This viewpoint sheds light on the complex, still-emerging relationship between creators and AI-generated content.
Sarah Perez, a reporter for the technology publication TechCrunch, has been closely tracking advances in smart technology since joining the publication in August 2011. She previously worked at ReadWriteWeb and, before that, held I.T. roles in industries from banking to retail, a background that lends unique depth to her reporting.
If you’d like to connect with Sarah Perez on this subject or other Startup Alley-related inquiries, you can email her at sarahp@techcrunch.com, or reach her via encrypted message on Signal at sarahperez.01.
This screenshot illustrating YouTube’s new likeness detection tool is free for non-commercial reuse.


