AI Whistleblower Protection Act
Introduced on May 15, 2025 by Jay Obernolte
This bill protects people at work who speak up about AI security problems or AI rule-breaking. Employers would not be allowed to fire, demote, suspend, threaten, blacklist, harass, or otherwise punish someone for reporting in good faith—whether they’re a regular employee or an independent contractor, and whether the punishment happens on the job or after they leave. Reports can be made to regulators, law enforcement, the Attorney General, Congress, or within the company to a boss or someone with authority to investigate or fix the issue. The bill defines an AI security vulnerability as a security gap that could let someone, including a foreign group, steal or obtain advanced AI technology.
If a worker faces retaliation, they can file a complaint with the Department of Labor. If there is no final decision within 180 days and the delay isn't the worker's fault, they can take the case to federal court and request a jury trial. Deadlines apply: generally up to 6 years from the retaliation, or 3 years after the worker learns the key facts, with a 10-year outer limit. If they win, remedies include reinstatement with the same seniority, double back pay with interest, and payment of legal costs and fees. These rights can't be waived by contract or forced into arbitration first.