Expands and clarifies required federal AI training by defining who must receive it, who runs the program, and what topics must be covered. Responsibility for the training program is assigned to the Administrator of General Services (GSA), working with the Office of Management and Budget (OMB); new statutory definitions are added for covered roles (acquisition positions, data and technology positions, supervisors, and management officials), and agencies may incorporate existing training. Required training topics include: what AI is; its capabilities and risks; its benefits to the Federal Government; agency considerations for developing, deploying, and managing AI; and the role of data.
Strike paragraph (4) from subsection (a) of Section 2 (removes that paragraph from the statute).
Redesignate existing paragraphs (1), (2), (3), and (5) as (3), (4), (5), and (7), respectively (renumbers paragraphs in subsection (a)).
Add definition: 'Acquisition position' means any position listed in subsection (g)(1)(A) of section 1703 of title 41, United States Code.
Add definition: 'Administrator' means the Administrator of General Services.
In the list of persons covered by the program, remove the prior subparagraph (A), redesignate (B) as (E), and insert new covered categories: (A) an employee of an executive agency serving in an acquisition position; (B) a management official; (C) a supervisor; (D) an employee serving in a data or technology position.
Who is affected and how:
Federal employees: The law directly affects many federal employees by expanding which job categories must receive AI training. Employees in acquisition, data, technology, supervisory, and management roles will likely be required to complete baseline AI training. This will add training-time obligations for those employees and require agencies to track their compliance.
Federal agencies and leadership (GSA and OMB): GSA (led by its Administrator) becomes the program lead, working with OMB. Agencies will need to coordinate with GSA/OMB to adopt approved materials or demonstrate that existing training meets statutory requirements. Agencies may need to allocate staff time to map current programs to the new statutory baseline.
Program developers and training providers: Vendors and internal training teams may see demand for standardized federal AI training modules, accessible materials, and compliance tracking tools. Existing training products may be adapted for statutory alignment.
Operational impact: The required topics (risks, benefits, data quality, deployment considerations) aim to improve risk awareness and decision-making when federal employees acquire, develop, or manage AI systems. Over time, that could reduce misuse, improve procurement choices, and standardize governance practices across agencies.
Costs and implementation burden: While the statute sets content and assigns responsibility, it does not specify funding; agencies may absorb costs for staff time, curriculum adaptation, and recordkeeping. Allowing agencies to incorporate existing training mitigates duplication but still requires administrative work to validate alignment.
Currency and effectiveness considerations: Agencies will need to ensure training materials keep pace with fast-moving AI developments and that training balances general awareness with role-specific technical depth. The law’s definitions make it easier to target the appropriate depth for technical versus managerial staff.
Overall, the change is focused and administrative: it raises baseline AI literacy expectations across the federal workforce, centralizes program authority, and clarifies covered employee categories, with modest implementation burden and no direct appropriation language in the provided text.
Last progress: June 5, 2025
Introduced on June 5, 2025, by Rep. Nancy Mace.
Referred to the House Committee on Oversight and Government Reform.