Last progress: June 12, 2025
Introduced on June 12, 2025, by Sen. Cynthia M. Lummis
Read twice and referred to the Committee on Commerce, Science, and Transportation.
This bill sets rules for when AI developers are protected from lawsuits if their tools are used by trained professionals to serve clients. To qualify, developers must share clear public information about each AI system, including a “model card” and a “model specification,” and give plain guidance to professional users about what the AI can and cannot do, where it may fail, and the right ways to use it. The goal is to boost transparency and set fair responsibility as AI spreads into fields like health care, law, and finance.
This protection does not cover reckless or intentional wrongdoing. Developers must also keep their public information up to date within 30 days of releasing a new version or discovering a new, significant failure risk; if they don’t and someone is harmed afterward, they lose the protection. State claims are blocked when the protection applies, but claims for fraud, knowingly false statements, or uses outside professional settings are still allowed. Other legal protections that already exist are unchanged. The law takes effect December 1, 2025, and applies to actions on or after that date.