Introduced October 28, 2025 by Joshua David Hawley · Last progress October 28, 2025
The bill prioritizes protecting minors and increasing accountability for AI chatbots by reducing harmful content and creating enforcement tools, at the cost of privacy tradeoffs, user friction, compliance burdens on providers, and potential limits on some legitimate uses.
Children and teens (and their families) will be less likely to encounter sexually explicit, manipulative, or otherwise harmful AI chatbot content because the bill requires age verification, account classification, and blocking of minor accounts for certain services.
Parents, families, and the public gain stronger safeguards and clearer accountability because the bill recognizes developmental vulnerabilities, requires operator safeguards and disclosures, and enables legal/regulatory action against unsafe chatbot behavior toward minors.
All users (especially young adults and caregivers) get clearer safety information because chatbots must disclose they are AI and not licensed professionals, reducing the risk of relying on unqualified advice.
All users (including adults) will face mandatory account creation and stronger age verification (potentially requiring government ID or other authenticated proof), which increases friction and privacy tradeoffs and deters beneficial anonymous use of chatbots.
Covered companies will incur substantial compliance costs (verification systems, encryption, audits, blocking mechanisms) and may face fines; those costs are likely to be passed on to consumers or to reduce free offerings, disproportionately burdening small businesses and taxpayers.
Prohibiting chatbots from providing professional medical, legal, financial, or psychological services could remove convenient access to general guidance that many users rely on and shift demand to paid professionals.
Based on analysis of 8 sections of legislative text.
Requires verified accounts and reliable age checks for AI companions, blocks minors, mandates AI disclosures, protects verification data, and creates civil penalties for violations.
Requires companies offering AI chatbots that simulate friendship, companionship, or therapeutic conversation ("AI companions") to maintain verified user accounts, use reliable age-verification methods, and block minors from accessing such systems. The bill also requires clear disclosures that the system is an AI (not a human or a licensed professional), limits collection and sharing of age-verification data, freezes unverified existing accounts, and creates civil enforcement by the U.S. Attorney General and state attorneys general, with penalties and rulemaking authority.