Read twice and referred to the Committee on Commerce, Science, and Transportation.
Creates a conditional federal safe harbor that shields AI developers from certain civil liability when a licensed or otherwise qualified "learned professional" uses the AI in providing professional services, but only if the developer meets detailed transparency, documentation, and update requirements. The law defines key terms (AI, developer, client, error, learned professional, model card, model specification); requires public model documentation with limited permissible redactions; mandates timely updates after deployment or upon discovery of new failure modes; denies immunity for recklessness, willful misconduct, fraud, or non-professional uses; and preempts state-law claims only for developers who satisfy the statute's conditions. The Act takes effect December 1, 2025 and applies to conduct on or after that date.
AI systems have rapidly advanced and are increasingly being used in professional services, including healthcare, law, finance, and other sectors critical to the economy.
Industry leaders have publicly acknowledged development of more powerful AI systems and have discussed the potential for artificial general intelligence and superintelligence that could fundamentally reshape U.S. society.
Lack of clarity about liability for AI errors creates uncertainty that impedes responsible integration of AI into professional services and economic activity.
Many AI systems operate with limited transparency about their capabilities, limitations, and default instructions, making it hard for professional users to assess appropriate use and for legal systems to allocate responsibility when errors occur.
Learned professionals who use AI tools have professional obligations to understand those tools' capabilities and limitations and therefore need clear information about system specifications and performance.
Who is affected and how:
Developers and deployers of AI models: Directly affected. They can obtain conditional immunity from certain civil suits if they publish and maintain detailed model documentation, disclose limitations and known failure modes, and update records promptly. Compliance will impose ongoing documentation, monitoring, and potential legal-review costs; smaller developers may face disproportionate burdens.
Learned professionals (doctors, lawyers, financial advisors, and other licensed or qualified professionals): Affected as primary users whose use of AI can trigger the safe harbor for developers. Professionals will rely on model documentation to understand limitations and appropriate use; they may remain exposed to malpractice liability depending on whether they followed that guidance and applicable standards of care.
Clients and consumers: Indirectly affected. The law aims to improve transparency about AI limitations, which could improve safety, but the safe harbor may make it harder for some plaintiffs to bring claims against developers when professionals use AI appropriately. Claims against professionals, and claims for fraud, remain available.
Courts and state tort systems: Affected through conditional federal preemption of state-law claims for compliant developers. Courts will need to interpret whether developers met statutory conditions, whether conduct falls within exclusions (fraud, recklessness), and how preemption applies to state causes of action.
Insurers and professional liability markets: Affected because liability exposure for developers may fall, while exposure for professionals could change depending on how negligence and standard-of-care cases are prosecuted; insurers will reassess underwriting and coverage language.
Innovation ecosystem and public interest groups: The law incentivizes documentation and transparency, which could foster safer AI use and faster detection of failure modes. Critics may argue it tilts protection toward developers and could weaken remedies for people harmed by AI-driven errors unless fraud or willful misconduct is shown.
Net effect: The statute creates incentives for transparency and ongoing monitoring by tying significant liability relief to concrete disclosure duties. It reduces certain developer liabilities when learned professionals use the tools properly, but preserves remedies for egregious misconduct and non‑professional harms. Implementation and judicial interpretation will determine how broadly protections operate in practice and whether smaller developers face disproportionate compliance costs.
Last progress June 12, 2025
Introduced on June 12, 2025 by Cynthia M. Lummis