The bill strengthens rights and remedies for people harmed by nonconsensual intimate deepfakes, providing legal recognition, injunctive relief, long statutes of limitation, and substantial monetary damages, while also creating risks: broad liability, chilled speech and research, increased moderation and privacy trade-offs, and legal uncertainty for creators, platforms, and courts.
People targeted by nonconsensual sexually intimate deepfakes (including women, young adults, and low-income victims) can sue in federal court and obtain substantial monetary damages and attorneys' fees, providing direct financial relief and deterrence against malicious dissemination.
People targeted by AI/ML-generated intimate digital forgeries are explicitly recognized and covered by the law, closing a protection gap and strengthening privacy and identity rights for victims of deepfakes.
Identifiable victims can obtain equitable relief (injunctions and court orders to delete or stop displaying intimate images/forgeries), helping restore privacy and limit further spread of harmful content.
People who create or publish such content (including tech workers and small creators) face statutory damages of $150,000 to $250,000 plus liability, a potentially heavy financial burden that could chill lawful speech and creative activity.
Researchers, journalists, satirists, and other speakers may be chilled because the bill's broad definition treats images as "intimate digital forgeries" regardless of disclaimers or claims of inauthenticity, making legitimate parody, research, or commentary riskier.
Enforcement and moderation to stop deepfakes may require increased monitoring of online content, raising privacy concerns and potential government or platform overreach for ordinary online users.
Based on analysis of 4 sections of legislative text.
Adds definitions and a federal civil remedy allowing people targeted by nonconsensual sexually intimate deepfakes to sue creators, possessors with intent to disclose, and disclosers or solicitors.
Creates a federal civil remedy and new legal definitions so people targeted by nonconsensual, sexually intimate deepfake images or edits can sue creators, possessors who intended disclosure, and those who disclose or solicit such forgeries. Defines key terms (such as "identifiable individual" and "intimate digital forgery"), affirms the harms caused by intimate forgeries, clarifies that labels or context claiming an image is fake do not defeat the definition, and preserves other laws such as intellectual property rules. Also updates existing federal language on intimate-image disclosures, expands the covered categories to explicitly include identifiable individuals shown in sexually explicit conduct, and includes a severability rule so the remaining provisions stand if part of the law is struck down.
Introduced May 21, 2025 by Richard Joseph Durbin · Last progress January 13, 2026