Introduced October 9, 2025 by Edward John Markey · Last progress October 9, 2025
The bill strengthens clinician authority, worker voice, and oversight to protect patient safety when AI is used in care. In exchange, it imposes substantial compliance burdens, heightens litigation and penalty exposure for providers and some public entities, and introduces new privacy and enforcement trade-offs.
Healthcare professionals can override AI clinical decision support without employer retaliation, to the benefit of their patients: clinicians keep final clinical authority, have whistleblower and redress avenues, and are protected from adverse employment actions when they override AI recommendations in a timely manner.
Workers gain formal participation, training, and clearer rules for AI use: required AI/CDSS committees with labor representation, mandated paid training on AI limits and override procedures, and federal educational materials together increase transparency and promote safer use of AI in clinical settings.
New federal and state enforcement pathways and outreach: state attorneys general and privacy regulators can sue for violations, while HHS (with the Department of Labor) will produce compliance materials and can coordinate federal enforcement to obtain remedies for harmed residents.
Hospitals, clinics, insurers and other covered entities will face substantial new compliance, reporting, recordkeeping, training, and administrative costs to implement policies, track AI outputs/overrides, run committees, and respond to investigations.
Increased litigation and financial liability risk: a private right of action, statutory and treble damages, civil monetary penalties, attorney-fee awards, and state recovery authority expose employers (and some public entities) to significant lawsuits and financial penalties.
Limits on data sharing and a prohibition on DOJ criminal referrals could hinder internal accountability and reduce criminal deterrence for bad-faith AI misuse, complicating peer review and disciplinary processes.
Based on analysis of 9 sections of legislative text.
Requires health care organizations using AI decision tools to protect clinician overrides, forbid retaliation, set policies and committees, train staff, and limit sharing of identifiable override data, and establishes federal enforcement.
Requires health care organizations that use AI clinical decision support systems to protect clinicians who override AI outputs, forbid retaliation or adverse employment actions for overrides, and adopt policies, training, feedback channels, and oversight committees. It gives federal agencies (HHS and the Department of Labor) authority to enforce these rules, allows state attorneys general to sue, and limits sharing of override data that identifies clinicians.