The bill strengthens protections, transparency, and enforcement against harmful or discriminatory algorithmic decisions and improves recourse for individuals, but it does so by imposing substantial compliance, reporting, and litigation costs that may slow innovation, consolidate markets, and raise prices for consumers.
People who interact with consequential automated systems — especially racial and ethnic minorities, people with disabilities, and low-income individuals — will face fewer discriminatory or harmful outcomes because developers and deployers must identify and fix harms, justify practices that cause disparate impacts, provide human alternatives/appeals, and meet performance and data-use standards.
Regulators, researchers, and the public will gain much greater transparency and accountability because covered algorithms must undergo independent audits, pre-deployment evaluations, public summaries, recordkeeping, and repository reporting that enable oversight and trend analysis.
Consumers (including taxpayers, immigrants, and people with disabilities) will have stronger privacy and notice protections because developers must protect de-identified data, commit not to re-identify it, provide clear, accessible disclosures in multiple languages, and notify individuals of material changes.
Small and large developers and deployers alike will face substantial new compliance costs (audits, pre-deployment evaluations, reporting, independent auditors, recordkeeping), which are likely to be passed on to consumers or to slow product rollouts.
Broad, sometimes subjective standards (e.g., expansive definitions of "consequential action," "covered algorithm," and "unlikely to cause harm") combined with open-ended FTC rulemaking create legal and regulatory uncertainty for developers about scope and compliance obligations.
Expanded private rights of action, treble or statutory damages, the elimination of pre-dispute arbitration clauses and class-action waivers, and state attorney general enforcement increase litigation risk and legal costs for companies, which could reduce hiring or the availability of services.
Based on analysis of 12 sections of legislative text.
Requires audits, impact assessments, public disclosures, mitigation, and FTC enforcement for commercial algorithms that affect employment, housing, credit, health, justice, elections, and similar consequential actions.
Introduced December 2, 2025 by Edward John Markey · Last progress December 2, 2025
Creates a federal framework requiring companies that develop or deploy automated decision-making systems that affect important areas (like jobs, housing, credit, health care, elections, and the justice system) to perform pre-deployment audits, yearly impact assessments, public disclosures, and harm mitigation. It gives the Federal Trade Commission primary enforcement authority, allows state attorneys general to sue, funds agency implementation, and creates federal auditor positions to support algorithm audits.