*Tyler Konigsberg
I. Introduction
Artificial intelligence has made it possible to generate fake but realistic intimate images from ordinary photographs.[1] These “deepfakes” spread quickly through social media and private messaging, leaving victims little ability to stop their circulation.[2] The images cause severe harm, and platforms often fail to remove them.[3] Schools across the country are now experiencing this crisis firsthand.[4] As of 2023, up to 98% of deepfake videos online were sexually explicit.[5]
In Baltimore County, Maryland, an athletic director used AI audio tools to fabricate a recording that falsely depicted the school’s principal making racist and antisemitic remarks.[6] In Westfield, New Jersey, students produced nude deepfakes of their female classmates.[7] In Houston, Texas, a student created explicit images of a teacher and shared them online.[8] In Laguna Beach, California, police and the school district investigated AI-generated nude images of students shared among classmates.[9] These cases are not isolated; they illustrate a growing problem facing schools nationwide.[10]
The Center for Democracy & Technology (CDT) found that 39% of students and 29% of teachers reported knowing of a deepfake depicting someone associated with their school that was shared during the 2023–2024 school year.[11] CDT estimates that this translates to roughly 5.97 million high school students exposed to nonconsensual intimate imagery (NCII), including approximately 2.30 million exposed to deepfake NCII.[12]
II. The TAKE IT DOWN Act Requires Platforms to Remove NCII Within 48 Hours and Creates Seven New Offenses
In May 2025, Congress enacted the TAKE IT DOWN Act, criminalizing the publication of authentic or AI-generated NCII and requiring covered platforms to remove reported images within 48 hours.[13] The statute adopts the “interactive computer service” definition from section 230 of the Communications Decency Act, but section 230’s core immunity remains intact, leaving open whether the removal remedy will be enforceable and provide real relief.[14] Section 230(c) shields platforms from being treated as publishers of user content[15] and immunizes both their good-faith removal of material and their failure to remove it.[16] Sexual digital forgeries are digitally manipulated or AI-generated sexual images created or distributed without consent; scholars describe them as a theft of identity and a violation of privacy, dignity, and autonomy, in essence a new form of technology-facilitated voyeurism.[17]
The TAKE IT DOWN Act creates new federal crimes for authentic images and for digital forgeries, establishing seven offense types: (1) publishing authentic depictions of adults;[18] (2) publishing authentic depictions of minors;[19] (3) publishing digital forgeries of adults;[20] (4) publishing digital forgeries of minors;[21] (5) threats involving authentic depictions;[22] (6) threats involving digital forgeries of adults;[23] and (7) threats involving digital forgeries of minors.[24] Publication offenses carry up to two years of imprisonment when the material involves adults[25] and up to three years when it involves minors.[26] Threats involving digital forgeries carry up to eighteen months when directed at adults[27] and up to thirty months when directed at minors.[28] By May 19, 2026, platforms must implement a notice process allowing victims to request removal of nonconsensual intimate images.[29] Once a platform receives a valid request, it must remove the depiction within 48 hours[30] and make reasonable efforts to remove known copies.[31] The Act thus pairs broad new criminal prohibitions with strict platform duties, but its effectiveness remains uncertain given the continuing shield of § 230.
III. Courts Interpret Carve-Outs Narrowly, and Section 230 Immunity May Still Shield Platforms from Liability Even When They Delay Removals
Since § 230 immunizes interactive computer services when they are sued over the activity of others on their platforms,[32] covered platforms retain broad protection even under the TAKE IT DOWN Act.[33] Section 230 provides two forms of immunity.[34] First, under § 230(c)(1), platforms are not treated as the publisher or speaker of third-party content.[35] Courts have interpreted this provision to bar claims based on a platform’s failure to remove material.[36] Second, under § 230(c)(2), platforms are shielded from liability for actions voluntarily taken in good faith to restrict or disable access to objectionable material, even if that material is later found to be lawful,[37] a protection the Supreme Court recently noted.[38] A recent Ninth Circuit decision underscores the breadth of this immunity.
In August 2025, in Doe 1 v. Twitter, Inc., the Ninth Circuit considered claims that Twitter knowingly benefited from a sex-trafficking venture by failing to promptly remove videos depicting the plaintiffs, while minors, engaged in sexual activity.[39] The court rejected the claims, stating that “any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under § 230, absent the exception set forth in [FOSTA],” which withdraws immunity for certain sex-trafficking claims.[40] The court acknowledged the logic that continuing to make available known child pornography “is tantamount to facilitating sex trafficking, [but] that reasoning fails under our prior holding that merely turning a blind eye to illegal revenue-generating content does not establish criminal liability under § 1591.”[41] Because the Act does not repeal § 230 and vests enforcement authority in the Federal Trade Commission (FTC), platforms will likely retain baseline immunity for user content.[42] Courts are unlikely to expand platform liability without further congressional action.
IV. Conclusion
The Act’s 48-hour rule may speed the removal of harmful content, but enforcement is uncertain because § 230 immunity still shields platforms that delay.[43] Victims have no private right of action, leaving enforcement to the FTC, and platforms may claim they are intermediaries, not publishers, and thus immune.[44] The Act’s platform obligations take effect in May 2026, and a compliance failure in the first months could erupt into scandal as platforms adjust to the new law. Congress, which has scrutinized this issue before, could hold another high-profile hearing if platforms fail to comply.[45] Such a hearing could be further intensified by election-season pressures, particularly if the issue gains public traction.[46]
*Tyler Konigsberg is a second-year student at the University of Baltimore School of Law, where he serves as a Staff Editor on the University of Baltimore Law Review and as a Teaching Assistant for both Introduction to Lawyering Skills and Civil Procedure. He is also a Distinguished Scholar of the Royal Graham Shannonhouse III Honor Society. Tyler earned his Bachelor of Science in Business Administration, magna cum laude, from Babson College, where he concentrated in economics and finance. He plans to build a career at the intersection of business and law.
[1] Stephanie Sy & Andrew Corkery, How AI Is Being Used to Create Explicit Deepfake Images That Harm Children, PBS News (Mar. 22, 2025, at 17:35 ET), https://www.pbs.org/newshour/show/how-ai-is-being-used-to-create-explicit-deepfake-images-that-harm-children.
[2] U.S. Dep’t of Homeland Sec., Increasing Threats of Deepfake Identities 17, 33 (2021), https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf.
[3] Emmet Lyons, Meta Failing to Curb Spread of Many Sexualized AI Deepfake Celebrity Images on Facebook, CBS News (Feb. 17, 2025, at 14:21 ET), https://www.cbsnews.com/news/meta-facebook-sexualized-ai-deepfake-celebrity-images-spread/.
[4] Ben Finley, Athletic Director Used AI to Frame Principal with Racist Remarks in Fake Audio Clip, Police Say, AP News (Apr. 25, 2024, at 18:33 ET), https://apnews.com/article/ai-artificial-intelligence-principal-audio-maryland-baltimore-county-pikesville-853ed171369bcbb888eb54f55195cb9c; Tim McNicholas, New Jersey High School Students Accused of Making AI-Generated Pornographic Images of Classmates, CBS News (Nov. 2, 2023, at 19:32 ET), https://www.cbsnews.com/newyork/news/westfield-high-school-ai-pornographic-images-students/; Matthew Seedorff, Houston-Area Student Accused of Creating ‘Deep Fake’ Explicit Photos of Teacher, Sharing Them Online, FOX 26 Hou. (Apr. 13, 2023, at 21:04 CT), https://www.fox26houston.com/news/houston-area-student-accused-of-creating-deep-fake-explicit-photos-of-teacher-sharing-them-online; David González, Laguna Beach HS Investigating Incident Involving AI-Generated Nude Photos of Students, ABC 7 (Apr. 1, 2024), https://abc7.com/post/laguna-beach-high-school-investigating-incident-involving-ai-generated-nude-photos-of-students/14603765/.
[5] Elizabeth Laird, Maddy Dwyer & Kristin Woelfel, Ctr. for Democracy & Tech., In Deep Trouble: Surfacing Tech-Powered Sexual Harassment in K-12 Schools 13 (2024), https://cdt.org/wp-content/uploads/2024/09/2024-09-26-final-Civic-Tech-Fall-Polling-research-1.pdf.
[6] Finley, supra note 4.
[7] McNicholas, supra note 4.
[8] Seedorff, supra note 4.
[9] González, supra note 4.
[10] Olina Banerji, Why Schools Need to Wake Up to the Threat of AI ‘Deepfakes’ and Bullying, Educ. Week (Dec. 11, 2024), https://www.edweek.org/technology/why-schools-need-to-wake-up-to-the-threat-of-ai-deepfakes-and-bullying/2024/12.
[11] Laird, Dwyer & Woelfel, supra note 5, at 10.
[12] Id. at 11.
[13] TAKE IT DOWN stands for Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act. TAKE IT DOWN Act, Pub. L. No. 119-12, § 3(a)(3)(A), 139 Stat. 55, 60 (2025) (codified at 47 U.S.C. § 223) (requiring a covered platform, upon receiving a valid removal request, to “remove the intimate visual depiction . . . not later than 48 hours after receiving such request.”).
[14] 47 U.S.C. § 223(h)(1)(D) (“The term ‘interactive computer service’ has the meaning given the term in section 230.”).
[15] Id. § 230(c); Doe 1 v. Twitter, Inc., 148 F.4th 635, 643 (9th Cir. 2025).
[16] Doe 1, 148 F.4th at 643.
[17] Clare McGlynn & Rüya Tuna Toparlak, Why We Need to Talk About Sexual Digital Forgeries, Not “Deepfake Porn”, J.L. & Soc’y Blog (Dec. 2023), https://journaloflawandsociety.co.uk/blog/why-we-need-to-talk-about-sexual-digital-forgeries-not-deepfake-porn/.
[18] 47 U.S.C. § 223(h)(2)(A).
[19] Id. § 223(h)(2)(B).
[20] Id. § 223(h)(3)(A).
[21] Id. § 223(h)(3)(B).
[22] Id. § 223(h)(6)(A).
[23] Id. § 223(h)(6)(B)(i).
[24] Id. § 223(h)(6)(B)(ii).
[25] Id. § 223(h)(4)(A).
[26] Id. § 223(h)(4)(B).
[27] Id. § 223(h)(6)(B)(i).
[28] Id. § 223(h)(6)(B)(ii).
[29] Id. § 223a(a)(1)(A) (requiring compliance “not later than 1 year after the date of enactment,” which was May 19, 2025).
[30] Id. § 223a(a)(3)(A).
[31] Id. § 223a(a)(3)(B).
[32] 47 U.S.C. § 230(c)(1).
[33] Anderson v. TikTok, Inc., 116 F.4th 180, 183 (3d Cir. 2024); 47 U.S.C. § 230(c).
[34] 47 U.S.C. § 230(c)(1), (2).
[35] Id. § 230(c)(1).
[36] Id. § 230(c)(1); see also Force v. Facebook, Inc., 934 F.3d 53, 65 (2d Cir. 2019).
[37] 47 U.S.C. § 230(c)(2); see also Free Speech Coal., Inc. v. Paxton, 606 U.S. 461, 475 n.4 (2025).
[38] Free Speech Coal., 606 U.S. at 475 (discussing the protections of 47 U.S.C. § 230(c)(2)).
[39] Doe 1 v. Twitter, Inc., 148 F.4th 635, 643 (9th Cir. 2025).
[40] Id.
[41] Id. at 644; see also 18 U.S.C. § 1591 (defining the federal offense of sex trafficking and setting forth the elements of criminal liability).
[42] 47 U.S.C. § 223a(b); see also 15 U.S.C. § 57a(a)(1)(B) (granting the Federal Trade Commission rulemaking and enforcement authority under the Federal Trade Commission Act).
[43] 47 U.S.C. § 230(c).
[44] Alan Z. Rozenshtein, Interpreting the Ambiguities of Section 230, Brookings (Oct. 26, 2023), https://www.brookings.edu/articles/interpreting-the-ambiguities-of-section-230/.
[45] See generally Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?: Hearing Before the S. Comm. on Com., Sci., & Transp., 116th Cong. (2020).
[46] Id.
