The bill strengthens AI and supply-chain security through NSA guidance, but it also imposes new compliance costs and operational burdens on private companies.
AI developers and service providers receive actionable NSA security guidance to better protect AI models and supply chains from theft, sabotage, and misuse.
Private companies, especially smaller AI firms, may incur new compliance costs and engineering burdens to implement NSA-recommended protections, increasing operational expenses and possibly slowing deployment.
Based on analysis of 2 sections of legislative text.
The NSA must develop and share unclassified (and, optionally, classified) guidance identifying AI and supply-chain vulnerabilities and recommending defenses, with reports due to the congressional intelligence committees within 180 and 365 days.
Introduced November 19, 2025 by Todd Young · Last progress November 19, 2025
Requires the Director of the National Security Agency, acting through the agency's Artificial Intelligence Security Center (or a successor office), to create and share security guidance that identifies vulnerabilities in covered AI technologies and the AI supply chain and recommends defenses against cyberattacks, theft, sabotage, insider threats, and foreign-threat-actor activities. The Director must consult relevant federal agencies, researchers, laboratories, and industry, and must deliver an initial progress report to the congressional intelligence committees within 180 days and a final report within 365 days, with a public unclassified version and an optional classified version. The guidance must address AI-specific threat vectors and supply-chain weaknesses and be made available to service providers in unclassified form, with an optional classified annex, including recommended strategies such as protecting model weights, insider-threat mitigation, network access controls, counterintelligence measures, and recovery plans.