Content Origin Protection and Integrity from Edited and Deepfaked Media Act of 2025
Last action: April 9, 2025
Introduced on April 9, 2025 by Sen. Maria E. Cantwell
Senate action: Read twice and referred to the Committee on Commerce, Science, and Transportation.
AI Summary
This bill aims to make it easier to tell when online photos, videos, audio, or text were made or edited by AI, and to protect creators’ work from being misused. It directs a federal standards agency, the National Institute of Standards and Technology (NIST), to work with industry to set clear standards for watermarks, labels, and tools that can spot AI-made media, to run research that makes these tools stronger, and to launch a public education campaign within one year to help people understand deepfakes and other synthetic media. Congress finds that the public, and especially artists, journalists, and publishers, is being harmed by a lack of transparency about how AI systems are trained and how synthetic content is made.
Two years after the law takes effect, companies that offer tools for creating AI content or heavily editing media must let users attach a built-in label (called “content provenance information”) and, if a user chooses to add one, must make that label machine-readable and hard to remove. It becomes unlawful to remove or tamper with these labels in order to mislead people, and major online platforms (such as social networks, video sites, and search engines) cannot strip them off, except for limited security testing. If a work is labeled, others cannot use it to train AI or to generate synthetic media without the owner’s clear, informed consent and compliance with any terms the owner sets, including payment. The Federal Trade Commission and state attorneys general can enforce these rules, and creators can sue for damages and legal costs; claims must be brought within four years of discovering a violation. The bill does not change any existing copyright rights.
Who is affected
- Makers of AI content tools and apps; large websites and apps such as social networks, video‑sharing sites, and search engines.
- Creators and publishers, who can label their work and take action if labels are stripped or their labeled work is used to train AI without consent.
What changes
- National standards, research, and public education on watermarks, labels, and AI‑content detection.
- Tools must offer labeling and make labels hard to remove; platforms can’t strip them, except for limited security testing.
- Labeled works can’t be used for AI training or generation without the owner’s informed consent and agreed terms, including pay.
- FTC, states, and creators can enforce; creators can sue within four years of discovering a violation.
When
- Public education campaign: within 1 year of enactment.
- Labeling features in AI and media‑editing tools: required starting 2 years after enactment.