The bill increases protections for minors using chatbots—via disclosure, content limits, break prompts, crisis resources, and a federal study—while creating compliance costs, enforcement uncertainty, and the risk of weakening stronger state-level protections.
Minors (children and teens) will be told when they are interacting with an AI rather than a person, reducing deception and helping them make informed choices.
Minors who ask about suicide will receive crisis and hotline resources during the chat, increasing immediate access to help and potentially preventing harm.
Minors will face reduced exposure to harmful content (sexual material, gambling, drugs, alcohol, tobacco) because providers must limit such content and behaviors for underage users.
Developers and smaller providers will face compliance costs to implement age-appropriate disclosures, monitoring, and content controls, which could burden startups and raise prices for consumers.
Preemption of state laws could remove stronger local protections or enforcement mechanisms that currently protect minors, leaving some children with weaker safeguards.
FTC enforcement and potential civil penalties may prompt lawsuits or fines and could cause providers to scale back services or raise costs, reducing consumer access.
Based on analysis of 2 sections of legislative text.
Introduced December 5, 2025 by Erin Houchin · Last progress December 5, 2025
Prohibits chatbot providers from representing chatbots to minors as licensed professionals unless that is true, and requires clear, age‑appropriate disclosures that the system is an AI and — when a minor asks about suicide — crisis hotline resources. It also requires providers to adopt reasonable policies to (1) prompt minors to take a break after three uninterrupted hours of interaction and (2) address sexual material harmful to minors, gambling, and illegal drug/tobacco/alcohol content. Enforcement is through the Federal Trade Commission with state attorneys general given limited parens patriae authority, and the Department of Health and Human Services must fund and run a four‑year study on chatbots’ mental‑health risks and benefits for minors.