Introduced June 25, 2025 by Ronald Lee Wyden · Last progress June 25, 2025
The bill aims to reduce harm from automated decision systems by requiring transparency, impact assessments, anti‑bias testing, and an expanded FTC enforcement apparatus. However, it would impose substantial compliance costs, create data‑disclosure and privacy risks, and depend on interagency and state coordination and resourcing decisions that could produce uneven implementation and commercial burdens.
Consumers (including people from protected groups) will face fewer harmful or biased automated decisions because covered entities must perform impact assessments, mitigate likely material negative impacts, retain documentation, and provide notice, appeal, and opt‑out mechanisms.
Protected groups and the public will see reduced discrimination risk because entities must evaluate differential performance by race, gender, age, disability, and other characteristics and document mitigation steps.
Consumers and the public benefit from stronger federal enforcement capacity: a dedicated FTC Technology Bureau and hundreds of specialized staff would increase technical review and the likelihood of investigations and remedies for harmful automated decision systems.
Small businesses, startups, and many covered organizations will face substantial new compliance, reporting, testing, documentation, and retention costs that could raise prices, reduce services, and hinder competition or innovation.
Broad statutory definitions risk sweeping many systems into coverage, creating regulatory uncertainty, potential overreach, and legal disputes over which systems must comply.
Disclosure requirements and the public repository risk exposing technical metadata or datasets that enable reverse‑engineering or reveal trade secrets, harming competitiveness and commercial value for covered companies.
Based on analysis of 11 sections of legislative text.
Requires companies that deploy AI systems for high‑stakes decisions to run and keep impact assessments, submit summary reports to the FTC, and follow FTC rules with FTC enforcement.
Requires companies that build or deploy AI or automated decision systems used to make high‑stakes decisions (such as in employment, housing, healthcare, finance, education, and utilities) to perform risk and impact assessments, keep documentation, and submit summary reports to the Federal Trade Commission (FTC). The FTC must write rules, publish an annual machine‑readable report and a public repository of limited metadata, and create a Technology Bureau to support enforcement; it may enforce violations under existing FTC authority, and state attorneys general may bring related civil actions. The bill creates detailed definitions and thresholds for which entities are covered; mandates stakeholder consultation, testing, privacy and security evaluations, transparency, and remediation steps; and requires retention of assessment records. It also preserves state and local laws and authorizes appropriations and staffing to implement and enforce the requirements.