YouTube is expanding its AI deepfake detection tool to all adult users
Mia Sato
is a features writer with five years of experience covering the companies that shape technology and the people who use their tools.
YouTube is expanding its AI likeness detection program to all users over the age of 18 — meaning just about anyone can have the platform hunt for potential deepfakes of themselves.
The likeness detection feature uses a selfie-style scan of a person’s face to monitor YouTube for lookalikes. If there is a match, YouTube alerts the user; the person then has the option to request that YouTube remove the content. YouTube has said in the past that it has found the number of removal requests to be “very small.”
YouTube began testing the feature with content creators, and then expanded it to government officials, politicians, journalists, and finally the entertainment industry. The expansion to any user 18 years or older is a significant shift — it essentially gives the average person the ability to constantly monitor content on YouTube that could use their likeness. Takedown requests are evaluated using YouTube’s privacy policy, and the company says it considers removals based on criteria like whether the content is realistic, is labeled as AI-generated, and if a person can be uniquely identified. There are carveouts for things like parody or satire, and the tool only covers facial likeness, not other identifying features like a person’s voice. Users can withdraw from the program and have YouTube delete their data.
The news was announced on YouTube’s creator forum, but spokesperson Jack Malon says there are no eligibility requirements for what counts as a “creator.”
“With this expansion, we’re making clear that whether creators have been uploading to YouTube for a decade or are just starting, they’ll have access to the same level of protection,” Malon said in an email.
Deepfake content often centers on celebrities, politicians, or other public figures, but the ability to create a convincing digital replica is a concern for private citizens, too. There have been instances of teenagers being deepfaked by classmates, and three teenagers sued xAI alleging that the company’s Grok chatbot generated child sexual abuse material (CSAM) of them.