Introduced on June 11, 2025 by Darin LaHood
This bill directs the National Security Agency to create an “AI Security Playbook” to protect advanced AI from theft by powerful foreign or other well‑resourced actors. The playbook must map out weak spots in advanced AI data centers and among top AI developers, explain which assets would be most harmful if stolen (such as model weights and core insights), and lay out ways to detect, stop, and respond to cyber threats. It also examines when security needs might become so high that the U.S. government would need to be heavily involved, including a contingency plan for building highly secure government‑run AI systems.
The playbook will include both detailed methods (some may be classified) and a public, unclassified guide with best practices that people and companies can use. The agency must consult with leading AI developers and researchers, work with a federal research center, and deliver a progress report to Congress within 90 days and a final report within 270 days, with public versions available. This bill does not create new rules or enforcement powers. It focuses on advanced AI that would pose a grave national security risk if stolen, such as systems that could match or beat experts in areas like cyber offense or chemical, biological, radiological, and nuclear topics; “threat actors” means nation‑states or other highly resourced groups.
Key points