Celebrities will be able to find and request removal of AI deepfakes on YouTube
Mia Sato
is a features writer with five years of experience covering the companies that shape technology and the people who use their tools.
YouTube is expanding its AI deepfake monitoring feature to Hollywood — meaning some celebrity AI videos could soon disappear.
The platform’s likeness detection feature searches YouTube for AI deepfake content and flags it for public figures enrolled in the program. Public figures can use it to keep track of AI content on YouTube of themselves or request removal (takedowns are evaluated against YouTube’s privacy policy, and not every request will be approved). YouTube began testing the feature with content creators last fall; in March, the company expanded the program to politicians and journalists. YouTube says the tool will cover celebrities regardless of whether they have a YouTube account.
The system requires participants to submit an ID and a selfie video of themselves. (Likeness detection is focused on faces specifically, as opposed to a voice or other identifying characteristics.) Removal of deepfakes isn’t guaranteed, and there are protected use cases like parody or satire. YouTube has previously said that when content creators used the feature, they requested only a “very small” number of videos of themselves be removed.
YouTube has compared likeness detection to Content ID, its system for finding (and removing) copyrighted material on the platform. The difference is that with Content ID, rights holders can opt to monetize other users’ videos that use their material and split the revenue. That’s not yet possible with likeness detection, but it appears to be the direction the industry is moving.
Earlier this month, YouTube announced a feature allowing creators to digitally clone their likeness using AI, which could then be inserted into videos. Talent agency CAA (which YouTube says supported the likeness detection expansion) maintains a database of clients’ biometric data that entertainers can retain or deploy for commercial opportunities. TikTok star Khaby Lame effectively sold off the rights to his likeness, which would then be used to sell products online. (The deal has run into several bumps in the road, and it’s not clear whether it has closed, according to Business Insider.)
In an interview with The Hollywood Reporter, some talent managers framed the explosion of AI deepfakes as a way for the entertainment industry to engage with fans. Some celebrities might want AI content of themselves to be pulled when eligible; others might let fan-made AI content proliferate. And in the future, entertainers might welcome AI deepfakes of themselves, as long as they get paid.