YouTube Expands AI-Generated Content Detection Pilot
Individuals can now request removal of AI-generated content that mimics their face or voice.

YouTube has announced an expansion of its pilot programme aimed at detecting and managing AI-generated content that uses a person’s likeness — including their face or voice — without consent. This includes content featuring creators, artists, and other public figures.
The company also voiced its support for the NO FAKES Act, a proposed law focused on curbing the misuse of AI to create misleading or harmful digital replicas of individuals.
"For nearly two decades, YouTube has been at the forefront of handling rights management at scale, and we understand the importance of collaborating with partners to tackle these issues proactively. Now, we're applying that expertise and dedication to partnership to ensure the responsible deployment of innovative AI tools," Leslie Miller, VP of Public Policy, YouTube, said in a blog post.
YouTube also said it is taking the following steps:
- New tools and policies: Individuals can now request removal of AI-generated content that mimics their face or voice.
- Likeness management: The company has launched tools and a pilot with top creators to help manage AI depictions on YouTube.
- Legislation support: It backs laws such as the NO FAKES Act, which give people the power to flag harmful AI fakes.