*Ian Drury
I. Introduction — “The New Deepfake Dilemma”
On September 30, 2025, OpenAI introduced its new Sora 2 video generation model.[1] Sora 2 is a generative AI video model that enables users to input text or an image to generate a realistic video with accompanying audio.[2] Less than a month later, OpenAI announced a partnership with Bryan Cranston and SAG-AFTRA to combat fake videos (“deepfakes”) using Cranston’s likeness and to strengthen internal tools against unapproved AI-generated videos.[3]
These developments expose a mismatch between how synthetic media is produced and how it is regulated.[4] Deepfakes routinely evade copyright enforcement because they often do not copy protected expression at all.[5] In response, YouTube has introduced a “likeness detection” tool to supplement its Content ID system, signaling a shift in platform governance toward identity-based protection.[6] While deepfakes have been identified as an issue for some time, the introduction of Sora 2 and the subsequent statement featuring Bryan Cranston mark an escalation in the ease—and therefore the proliferation—of deepfakes that can “spread false information and easily manipulate public opinion.”[7] The problem is that deepfakes sidestep existing copyright protections on content platforms and blur the boundary between creative expression and deception.[8] The ease of creating fake videos only increases the challenges creators and public figures face in protecting their image and voice from unauthorized AI-generated content.[9]
YouTube’s likeness detection initiative may signal an escalation in the AI arms race, in which tech companies seek to build greater public trust and thus lessen public pressure for AI regulation.[10] Google and other tech companies have assumed a quasi-regulatory role in overseeing AI-generated content.[11] Still, it remains uncertain whether the implementation of such tools aligns with the existing regulatory framework surrounding privacy, likeness, and free expression.[12]
II. Background — From Content ID to Content Identity
Deepfakes present a new challenge for platforms moderating copyright violations because a deepfake video is not a duplicate of a copyrighted work, but rather a new composition utilizing another’s likeness.[13] Because likeness detection targets a person’s likeness rather than copied expression, it allows Google to combat the proliferation of deepfake content in a way that Content ID does not.[14] As concerns about rights related to likeness and the harms caused by deepfakes continue to rise,[15] legislatures have enacted laws criminalizing certain uses of deepfakes.[16] These laws create an incentive for content platforms, such as YouTube, to increase moderation to mitigate violations that harm the image of their revenue-generating content creators.[17]
YouTube’s Content ID system automatically scans all uploaded videos and compares them to a database of content submitted by copyright holders.[18] If the system identifies a match, YouTube blocks the video from public view, demonetizes it, or shares viewing statistics with the rightsholder.[19] Content ID helps YouTube satisfy the Digital Millennium Copyright Act’s (“DMCA”) requirements for how content platforms must handle copyrighted material.[20] Content ID adds an intermediary step between uploading a video containing copyrighted material and receiving a DMCA violation notice.[21] Content ID’s effectiveness is limited, however, because it relies solely on existing content for comparison with new uploads.[22]
As AI-generated deepfakes proliferated and likeness-related rights legislation emerged, YouTube introduced a likeness detection tool to supplement its Content ID system.[23] Likeness detection works by having eligible creators upload a facial scan and a government-issued ID, which serve as references for the system.[24] Using that information, the likeness detection tool will scan all newly uploaded videos on the platform for any use of that likeness and flag those videos for review by the original creator.[25] The creator is then “empowered to submit a removal request” to protect their image.[26]
III. The Legal Gray Zone — Platform Policing vs. User Rights
YouTube’s likeness detection raises legal issues at the intersection of likeness rights and First Amendment–protected expression.[27] A system such as likeness detection is susceptible to errors, including false positives.[28] If a platform suppresses a video based on a false positive, it may forfeit the broad immunity that §230 of the Communications Act of 1934 affords it.[29] Suppressing protected speech through likeness detection could therefore expose YouTube and other content platforms to liability that §230 would otherwise shield.[30] By exercising editorial control through likeness detection, platforms risk being treated more like traditional publishers than neutral platforms.[31] Additionally, when creators anticipate that their content may be wrongly flagged or removed, they may refrain from posting lawful material.[32] This dynamic would create a chilling effect on the platform by discouraging posts that might run afoul of the automated moderators, even without direct government involvement.[33]
Legal recourse for creators would be limited, as §230 and the DMCA’s safe-harbor provisions broadly immunize platforms for good-faith moderation decisions, even when those removals suppress free speech.[34] Because platforms are private entities, creators generally lack standing to bring First Amendment claims.[35] However, because platforms such as YouTube functionally operate as a “modern public square,”[36] scholars have suggested that these platforms should be scrutinized when they restrict or block certain content.[37]
IV. Conclusion
YouTube’s likeness detection marks a new phase in the governance of AI-generated media.[38] As platforms assume greater responsibility for moderating synthetic content, they also inherit complex questions about liability, privacy, and free expression.[39] Viewed against the broader shift toward synthetic content moderation, likeness detection exposes unresolved tensions among intellectual property, likeness-related rights, and free expression. Whether courts and policymakers permit this self-regulation to stand or impose new duties on digital gatekeepers will define the evolving landscape of platform accountability.
*Ian Drury is a second-year student at the University of Baltimore School of Law where he is a Staff Editor for Law Review and a member of the Royal Graham Shannonhouse III Honor Society. Prior to law school, Ian earned a Bachelor of Science in Political Science from the Illinois Institute of Technology in Chicago, IL and a Master of Business Administration from Johns Hopkins Carey Business School, Baltimore. Next summer he will be working as a Law Clerk for Silverman Thompson.
[1] Sora 2 is Here, OpenAI (Sept. 30, 2025), https://openai.com/index/sora-2/.
[2] Laura Ramsay, Sora 2 Lands on Artlist, ARTLIST: Blog (Jan. 29, 2026), https://artlist.io/blog/sora-2-ai-announcement/.
[3] Jaures Yip, OpenAI Cracks Down on Sora 2 Deepfakes After Pressure from Bryan Cranston, SAG-AFTRA, CNBC (Oct. 20, 2025, at 15:24 ET), https://www.cnbc.com/2025/10/20/open-ai-sora-bryan-cranston-sag-aftra.html.
[4] See What Legislation Protects Against Deepfakes and Synthetic Media?, HALOCK, https://www.halock.com/what-legislation-protects-against-deepfakes-and-synthetic-media/ (last visited Feb. 11, 2026) (listing recent federal and state legislation aimed at combating the rise of synthetic media).
[5] Paven Malhotra, Michelle Ybarra & Matan Shacham, Report on Deepfakes: What the Copyright Office Found and What Comes Next in AI Regulation, Reuters (Dec. 18, 2024, at 08:55 ET), https://www.reuters.com/legal/legalindustry/report-deepfakes-what-copyright-office-found-what-comes-next-ai-regulation-2024-12-18/.
[6] Stevie Bonifield, Youtube’s AI ‘Likeness Detection’ Tool Is Searching for Deepfakes of Popular Creators, The Verge (Oct. 21, 2025, at 17:17 ET), https://www.theverge.com/news/803818/youtube-ai-likeness-detection-deepfake.
[7] See Don Philmlee, Practice Innovations: Seeing Is No Longer Believing — the Rise of Deepfakes, Thomson Reuters (July 18, 2023), https://www.thomsonreuters.com/en-us/posts/technology/practice-innovations-deepfakes.
[8] Ben Gross, Unmasking Deepfakes: Navigating the Copyright Quagmire, CARDOZOAELJ (Apr. 5, 2024), https://cardozoaelj.com/2024/04/05/unmasking-deepfakes-navigating-the-copyright-quagmire/.
[9] See id.
[10] Melissa Heikkilä, AI Companies Promised to Self-Regulate One Year Ago. What’s Changed?, MIT Tech. Rev. (July 22, 2024), https://www.technologyreview.com/2024/07/22/1095193/ai-companies-promised-the-white-house-to-self-regulate-one-year-ago-whats-changed/; see generally Justin B. Bullock et al., Public Opinion and The Rise of Digital Minds: Perceived Risk, Trust, and Regulation Support, 48 Pub. Performance & Mgmt. Rev. 1357 (2025) (studying how individuals with a greater trust in AI companies are less inclined to support regulation of AI).
[11] Guido Perboli, Nadia Simionato & Serena Pratali, Navigating the AI Regulatory Landscape: Balancing Innovation, Ethics, and Global Governance, 13 ECON. & Pol. Stud. 367, 377, 389 (2025) (discussing Google’s and Microsoft’s self-adopted AI regulation frameworks).
[12] Felipe Romero-Moreno, Deepfake Detection in Generative AI: A Legal Framework Proposal to Protect Human Rights, in 58 Computer Law & Security Review 1, 3, 10, 28 (2025).
[13] See David E. Weslow, Deepfakes, Deep Claims: Using Intellectual Property to Combat Artificial Intelligence’s Digital Deception, Wiley (Nov. 24, 2025), https://www.wiley.law/article-Deepfakes-Deep-Claims-Using-Intellectual-Property-to-Combat-Artificial-Intelligences-Digital-Deception.
[14] See Zach Vallese, YouTube’s New AI Deepfake Tracking Tool Is Alarming Experts and Creators, CNBC (Dec. 2, 2025, at 11:06 ET), https://www.cnbc.com/2025/12/02/youtube-ai-biometric-data-creator-deepfake.html.
[15] See, e.g., Barbara Ortutay, President Trump Signs Take It Down Act, Addressing Nonconsensual Deepfakes. What Is It?, AP News (May 20, 2025, at 16:08 ET), https://apnews.com/article/take-it-down-deepfake-trump-melania-first-amendment-741a6e525e81e5e3d8843aac20de8615 (discussing congressional efforts to address non-consensual AI-generated likenesses and the harms associated with deepfakes).
[16] See, e.g., Cal. Civ. Code § 3344 (West) (establishing liability for any person who knowingly commercially uses another’s likeness without their consent).
[17] Amjad Hanif, New Tools to Protect Creators and Artists, YouTube Official Blog (Sept. 05, 2024), https://blog.youtube/news-and-events/responsible-ai-tools/.
[18] Katharine Trendacosta, Unfiltered: How YouTube’s Content ID Discourages Fair Use and Dictates What We See Online, ELEC. FRONTIER FOUND. (Dec. 10, 2020), https://www.eff.org/wp/unfiltered-how-youtubes-content-id-discourages-fair-use-and-dictates-what-we-see-online.
[19] Id.
[20] Id.
[21] Id.
[22] YouTube Content ID Explained for Video Creators, MUSICBED, https://www.musicbed.com/articles/resources/youtube-content-id/ (last visited Jan. 24, 2026).
[23] Sarah Perez, YouTube Expands Its ‘Likeness’ Detection Technology, Which Detects AI Fakes, to a Handful of Top Creators, TechCrunch (Apr. 9, 2025, at 09:45 PT), https://techcrunch.com/2025/04/09/youtube-expands-its-likeness-detection-technology-which-detects-ai-fakes-to-a-handful-of-top-creators/.
[24] Nick Warner, Exploring YouTube’s New Likeness Detection Tool, Hey Gen: Blog (Oct. 21, 2025), https://www.heygen.com/blog/exploring-youtube-likeness-detection.
[25] Id.
[26] Id.
[27] David K. Young, John Gardner & PJ Tabit, Me, Myself, and IP: AI and the Deepfake Problem, THE CONFERENCE BOARD (Nov. 05, 2025), https://www.conference-board.org/research/ced-policy-backgrounders/me-myself-and-ip-ai-and-the-deepfake-problem.
[28] Nathaniel Lacsina, YouTube Launches ‘Likeness-Detection’ to Help Creators Fight AI Impersonation, GULF NEWS (Oct. 22, 2025, at 15:47 ET), https://gulfnews.com/technology/media/youtube-launches-likeness-detection-to-help-creators-fight-ai-impersonation-1.500317044.
[29] Valerie C. Brannon & Eric N. Holmes, Section 230: An Overview, CONGRESS.GOV (Jan. 04, 2024), https://www.congress.gov/crs-product/R46751 (“Courts have interpreted Section 230 as creating broad immunity that allows the early dismissal of many legal claims against interactive computer service providers, preempting lawsuits and statutes that would impose liability based on third-party content”).
[30] See O’Handley v. Weber, 62 F.4th 1145, 1157–60 (9th Cir. 2023) (analyzing if a platform’s actions were coerced by a government official, leaving open the door for a First Amendment challenge to a platform’s moderation).
[31] Ashley Gold & Ina Fried, Grok’s Explicit Images Reveal AI’s Legal Ambiguities, AXIOS (Jan. 7, 2026), https://www.axios.com/2026/01/07/grok-bikini-images-legal-elon-musk.
[32] Anastasia Kozyreva et al., Resolving Content Moderation Dilemmas Between Free Speech and Harmful Misinformation, 120 Procs. Nat’l Acad. Sci. U.S. (2023), https://doi.org/10.1073/pnas.2210666120.
[33] See, e.g., Initiative & Referendum Inst. v. Walker, 450 F.3d 1082, 1088 (10th Cir. 2006) (“This Court has recognized that a chilling effect on the exercise of a plaintiff’s First Amendment rights may amount to a judicially cognizable injury in fact, as long as it ‘arise[s] from an objectively justified fear of real consequences.’” (quoting D.L.S. v. Utah, 374 F.3d 971, 975 (10th Cir. 2004))).
[34] Thomas J. Cunningham & Michael J. McMorrow, Platforms Face Section 230 Shift from Take It Down Act, Troutman Pepper Locke (June 9, 2025), https://www.troutman.com/insights/platforms-face-section-230-shift-from-take-it-down-act/.
[35] Manhattan Cmty. Access Corp. v. Halleck, 587 U.S. 802, 812 (2019).
[36] Packingham v. North Carolina, 582 U.S. 98, 99 (2017).
[37] Dawn Carla Nunziato, Protecting Free Speech and Due Process Values on Dominant Social Media Platforms, 73 Hastings L.J. 1255, 1302 (2022).
[38] See generally Sarah A. Fisher, Jeffrey W. Howard & Beatriz Kira, Moderating Synthetic Content: The Challenge of Generative AI, 37 Phil. & Tech. 133 (examining AI content moderation challenges with respect to free speech).
[39] See Lacsina, supra note 28.
