AI Revenge Porn Case Shakes Feds


Washington just proved it can move fast—when tech-powered sexual harassment crosses the line into a federal crime with real consequences.

Quick Take

  • An Ohio man became the first person convicted under the 2025 Take It Down Act after using AI-generated sexually explicit images to harass and intimidate women.
  • Federal prosecutors say the case signals aggressive enforcement against non-consensual deepfake pornography, cyberstalking, and related digital forgeries.
  • The law adds a nationwide baseline that many states lacked, while also requiring platforms to remove reported intimate imagery within 48 hours.
  • The case highlights a growing tension: protecting privacy and victims without expanding government power in ways that chill lawful speech.

First Take It Down Act conviction sets a federal precedent

Federal prosecutors in the Southern District of Ohio announced the first U.S. conviction under the Take It Down Act after James Strahler, 37, of Upper Arlington, pleaded guilty in U.S. District Court in Columbus on April 7, 2026. Authorities said Strahler used AI tools to generate sexually explicit images as part of a harassment campaign targeting women. The guilty plea also included cyberstalking and other charges tied to digital forgeries and exploitation-related imagery, with sentencing still pending.

Investigators said Strahler's phone held an extensive AI toolkit: more than two dozen AI platforms and roughly 100 AI models. That detail matters because it shows how easy it has become to weaponize consumer-grade tools to fabricate explicit content that looks real enough to intimidate victims, damage reputations, and trigger workplace and family fallout. Prosecutors framed the case as a warning: federal law enforcement now has a specific statute designed for this modern form of abuse.

What the Take It Down Act actually does—and why it was passed

Congress enacted the Take It Down Act in 2025, and President Donald Trump signed it into law, creating the first broad federal framework aimed at non-consensual intimate imagery, including AI-generated deepfakes. The statute criminalizes the non-consensual sharing of intimate images and threats to publish them, with higher penalties when minors are involved. It also includes civil remedies and, critically, pushes platforms toward rapid response by requiring content removal within 48 hours after notification.

Before the law, Americans lived under a patchwork: some states had strong revenge-porn statutes and others had gaps, leaving victims to fight slow battles across jurisdiction lines while harmful content spread. Legal scholarship and court decisions have also emphasized that these cases can’t simply be treated as “obscenity” and ignored as a First Amendment exception. Instead, states and now the federal government have tried to craft narrow laws that target conduct and harm—harassment, coercion, privacy violations—without banning broad categories of speech.

Enforcement signals: prosecutors, platforms, and public pressure

U.S. Attorney Dominick Gerace said the office would not tolerate the creation or distribution of AI-generated intimate images without consent, emphasizing the government’s intent to use every available tool. Melania Trump amplified the message publicly, praising the first conviction and framing it as a measure that protects people from non-consensual AI-generated sexually explicit imagery, cyberstalking, and related threats. High-profile attention matters because it pressures tech companies and signals to local prosecutors that Washington expects follow-through.

The bigger issue: protecting victims without building a speech-policing machine

Conservatives who distrust “deep state” overreach still have a practical reason to watch this law closely: once the federal government expands its role online, the same enforcement machinery can be redirected toward political speech if future leaders choose. That concern is not theoretical in an era when tech platforms and regulators have both faced criticism for viewpoint-driven moderation. The best safeguard is strict focus on consent, intent, and demonstrable harm—along with transparent procedures—so the law targets coercion and stalking, not unpopular opinions.

At the same time, the case underscores a rare area where many Americans—including people angry about elite failures—agree government should act: protecting ordinary citizens from technologically amplified exploitation. The Take It Down Act sets a clear national baseline and puts platforms on notice that “move fast and break things” doesn’t work when the product is humiliation and fear. What remains unknown, for now, is how consistently the law will be enforced and whether courts will treat it as a narrow tool—or the start of broader federal policing of online life.

Sources:

  • Melania Trump hails first conviction in US under Take It Down Act; what is the federal revenge porn law? Explained
  • An Update on the Legal Landscape of Revenge Porn
  • University of Miami Law Review, Vol. 70, Iss. 1, Article 9