Introduced December 2, 2025 by Edward John Markey · Last progress December 2, 2025
The bill substantially strengthens consumer protections, anti-discrimination rules, transparency, and enforcement for consequential automated decision systems, but does so by imposing significant compliance, disclosure, and litigation obligations that could slow deployment, raise costs, and disproportionately burden smaller firms.
Consumers (including racial/ethnic minorities, people with disabilities, and low-income people) gain stronger protections from harmful automated decisions because covered algorithms are broadly defined and must undergo pre-deployment impact assessments and mitigation, and because affected individuals can request human alternatives and appeal automated decisions.
Individuals and the public gain new enforcement and remedy paths—FTC rulemaking and enforcement authority, private rights of action (including treble/statutory damages), and state attorney general suits—improving chances of redress for unlawful algorithmic practices.
Transparency and accountability increase because independent audits, public summaries, standardized short-form notices, and a public repository of pre-deployment evaluations let researchers, journalists, and the public scrutinize algorithmic risks and compliance over time.
Developers and deployers (especially small and mid-size firms) face substantial new compliance costs—technical mitigation, independent audits, lengthy record retention, translated disclosures, and contractual obligations—which will likely raise prices, reduce product availability, and slow innovation.
Startups and resource-constrained deployers are disproportionately burdened by audit, reporting, and retention requirements, risking reduced competition, market concentration, and fewer new entrants.
Mandatory evaluations, prohibitions on off-label uses, and slow audit processes could delay deployment of beneficial algorithmic tools (hiring, credit, healthcare), reducing timely access to services for job-seekers, consumers, and businesses.
Based on analysis of 12 sections of legislative text.
Requires companies that develop or use certain AI and algorithmic systems to test for and prevent harms, especially discriminatory or disparate impacts, both before use and on an ongoing basis. It creates duties for developers and deployers to perform pre-deployment evaluations and annual impact assessments, to hire or provide access to independent auditors, to publish accessible disclosures, and to keep records for at least 10 years. It gives the Federal Trade Commission broad rulemaking and enforcement powers (including civil penalties and private rights of action), expands which entities the FTC can reach, directs coordination with other federal agencies and state attorneys general, funds staffing and studies (including a new federal algorithm-auditing occupational series), and sets deadlines for several studies, repository creation, and rulemakings.