This bill significantly strengthens transparency, enforcement, and protections against discriminatory or harmful high-stakes algorithmic decisions, giving individuals more rights and regulators more tools. It does so at the cost of substantial compliance burdens, legal uncertainty, potential competitive harm to smaller firms, and increased government enforcement activity.
Millions of consumers, especially members of protected racial and disability groups, gain stronger protection from discriminatory or unfair outcomes: high-stakes algorithms (in employment, credit, housing, justice, benefits, healthcare, and elections) are covered, are prohibited when they cause disparate impact, and must be tested and mitigated before and during use.
Individuals gain greater enforcement remedies and accountability through expanded federal and state tools (FTC authority, injunctions and penalties from state attorneys general, private suits) and increased Commission staffing, giving regulators more capacity to stop and remedy harmful algorithmic practices.
People get clearer, more accessible information and rights regarding automated decisions: required disclosures, multilingual and disability-accessible notices, public impact summaries, retention of disclosures, and appeal rights with human-alternative options, improving transparency and individual control.
Developers, deployers, and many businesses, especially small firms and startups, face substantial new compliance, auditing, reporting, and record-retention costs that could be passed on to consumers or push smaller competitors out of the market.
Broad and flexible definitions (e.g., "consequential action", "collect"), combined with FTC rulemaking authority to expand coverage, create regulatory uncertainty and the risk of unpredictable, shifting obligations for developers, deployers, states, and businesses.
Compliance burdens and audit/certification requirements are likely to chill innovation and concentrate market power among larger firms that can absorb the costs, reducing competition and slowing some beneficial deployments.
Based on analysis of 12 sections of legislative text.
Requires audits, transparency, anti-discrimination rules, and FTC/state enforcement for algorithms used in consequential decisions, plus federal support for algorithm auditors.
Introduced December 2, 2025 by Yvette Diane Clarke · Last progress December 2, 2025
Requires companies and other entities that develop or use algorithms making important "consequential" decisions (for example, about hiring, housing, benefits, elections, law enforcement, credit, or health care) to run rigorous pre-deployment and annual post-deployment audits, prevent discriminatory outcomes, publish plain-language disclosures, consult impacted communities, and fix identified harms. Enforcement is through the Federal Trade Commission (with expanded jurisdiction over some previously excluded entities) and state attorneys general, with civil penalties, reporting and recordkeeping rules, and deadlines for agency rulemaking and implementation. The bill also creates federal support for algorithm auditing by directing the Office of Personnel Management to establish a new federal occupational series for algorithm auditors and by allowing the FTC to hire additional staff to carry out enforcement and rulemaking duties.