YouTube on Wednesday announced an expansion of its pilot program designed to identify and manage AI-generated content that features the “likeness,” including the face, of creators, artists, and other famous or influential figures. The company is also publicly declaring its support for the legislation known as the NO FAKES Act, which aims to tackle the problem of AI-generated replicas that simulate someone’s image or voice to mislead others and create harmful content.
The company says it collaborated on the bill with its sponsors, Sens. Chris Coons (D-DE) and Marsha Blackburn (R-TN), and other industry players, including the Recording Industry Association of America (RIAA) and the Motion Picture Association (MPA). Coons and Blackburn will be announcing the reintroduction of the legislation at a press conference on Wednesday.
In a blog post, YouTube explains the reasoning behind its continued support, saying that while it understands the potential for AI to “revolutionize creative expression,” it also comes with a downside.
“We also know there are risks with AI-generated content, including the potential for misuse or to create harmful content. Platforms have a responsibility to address these challenges proactively,” according to the post.
“The NO FAKES Act provides a smart path forward because it focuses on the best way to balance protection with innovation: putting power directly in the hands of individuals to notify platforms of AI-generated likenesses they believe should come down. This notification process is critical because it makes it possible for platforms to distinguish between authorized content from harmful fakes — without it, platforms simply can’t make informed decisions,” YouTube says.
The company introduced its likeness detection system in partnership with the Creative Artists Agency (CAA) in December 2024.
The new technology builds on YouTube’s efforts with its existing Content ID system, which detects copyright-protected material in users’ uploaded videos. Similar to Content ID, the program works to automatically detect violating content — in this case, simulated faces or voices that were made with AI tools, YouTube explained earlier this year.
For the first time, YouTube is also sharing a list of the program’s initial pilot testers. These include top YouTube creators like MrBeast, Mark Rober, Doctor Mike, the Flow Podcast, Marques Brownlee, and Estude Matemática.
During the testing period, YouTube will work with these creators to scale the technology and refine its controls. The company said the program will expand to more creators over the year ahead, though it didn’t say when it expects the likeness detection system to launch more broadly.
In addition to the likeness detection technology pilot, the company also previously updated its privacy process to allow people to request the removal of altered or synthetic content that simulates their likeness. It also added likeness management tools that let people detect and manage how AI is used to depict them on YouTube.