CISA collaborated with Sector Risk Management Agencies (SRMAs) and regulatory agencies to conduct sector-specific assessments of AI risks to U.S. critical infrastructure, as mandated by Executive Order 14110 Section 4.3(a)(i). The analysis grouped AI risks into three categories: attacks using AI, attacks targeting AI systems, and failures in AI design and implementation.
AI risk management for critical infrastructure is an ongoing process throughout the AI lifecycle.
These guidelines integrate the NIST AI Risk Management Framework (AI RMF) into enterprise risk management programs for critical infrastructure. The AI RMF Core consists of the Govern, Map, Measure, and Manage functions.
The Govern function within the AI RMF establishes an organizational approach to AI risk management within existing Enterprise Risk Management (ERM). Recommended actions for addressing risks throughout the AI lifecycle are integrated into the Map, Measure, and Manage functions. These guidelines build on the AI safety and security risk management practices proposed in the NIST AI RMF.
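To make the lifecycle framing concrete, here is a minimal, purely illustrative sketch of how an operator might track AI risks against the four AI RMF Core functions named above. The class and field names (`AIRiskEntry`, `lifecycle_stage`, `rmf_functions`) are hypothetical, not part of the CISA guidelines or any NIST tooling.

```python
from dataclasses import dataclass, field
from enum import Enum

# The four NIST AI RMF Core functions referenced in the guidelines.
class RMFFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

@dataclass
class AIRiskEntry:
    """One entry in a notional AI risk register (illustrative only)."""
    description: str
    lifecycle_stage: str  # e.g. "design", "deployment", "operation", "retirement"
    rmf_functions: list[RMFFunction] = field(default_factory=list)

# Example: an operation-stage risk tracked under the Measure and Manage functions.
entry = AIRiskEntry(
    description="Model drift degrades anomaly detection accuracy over time",
    lifecycle_stage="operation",
    rmf_functions=[RMFFunction.MEASURE, RMFFunction.MANAGE],
)
print(entry.lifecycle_stage)  # operation
```

A structure like this only illustrates the mapping; actual registers would follow the organization's existing ERM tooling and sector-specific guidance.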
CISA highlights that the risks are context-dependent: critical infrastructure operators should therefore consider sector-specific and context-specific factors when assessing and mitigating AI risks. Specific sectors may need to define their own tailored guidelines for managing AI risk. Stakeholders may focus on different aspects of the AI lifecycle depending on their sector or role, whether they are involved in the design, development, procurement, deployment, operation, management, maintenance, or retirement of AI systems.
“Critical infrastructure owners and operators can foster a culture of risk management by aligning AI safety and security priorities with their own organizational principles and strategic priorities. This organizational approach follows a ‘secure by design’ philosophy where leaders prioritize and take ownership of safety and security outcomes and build organizational structures that make security a top priority,” the guidelines read.