President Biden has signed an executive order (EO) aimed at regulating generative AI systems, recognizing both their transformative potential and their risks. The order focuses on ensuring the safe and responsible use of AI, including with respect to critical infrastructure.
To help ensure safe and reliable AI, the EO directs the Secretary of Commerce to require a series of measures from companies developing, or demonstrating an intent to develop, potential "dual-use foundation models." The EO defines such a model as one that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.

In a section titled "Managing AI in Critical Infrastructure and in Cybersecurity," the EO also details a series of actions by federal departments and agencies, including the Sector Risk Management Agencies (SRMAs). Among these, the SRMAs, in coordination with the Cybersecurity and Infrastructure Security Agency (CISA), are to evaluate potential risks related to the use of AI in critical infrastructure sectors, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber attacks, and are to consider ways to mitigate these vulnerabilities.
NIST has announced that it will collaborate with both private and public stakeholders to fulfill its responsibilities under the executive order. The agency stated, "We are dedicated to creating valuable assessment criteria, testing environments, and informational assets to assist organizations in the development, deployment, and utilization of AI technologies that prioritize safety, security, and bolster AI trustworthiness."