How enterprise risk managers can help federal agencies implement artificial intelligence guidance

Date
April 25, 2024 | Updated on June 17, 2024
Authors
Anthony Vetrano, Elizabeth Byers, Cynthia Vitters, Eliza Clark, Mark Stofanak, Mackenzie Noe

Recently, the federal government has issued guidance, frameworks and an executive order tied to the management of artificial intelligence risks.  

Enterprise Risk Management, or ERM, supports the strategic and operational responsibilities of agency leaders in meeting these standards. In January, the Partnership for Public Service hosted an event with Deloitte & Touche LLP on this topic featuring two panelists: Kevin Walsh, the Government Accountability Office’s director of information technology and cybersecurity, and Martin Stanley, strategic technology branch chief at the Cybersecurity and Infrastructure Security Agency.

Cynthia Vitters from Deloitte, Kevin Walsh from the Government Accountability Office, and Martin Stanley from the Cybersecurity and Infrastructure Security Agency in a virtual panel discussion, “The Evolving Enterprise Risk Manager: Understanding the New AI Guidance.”

Walsh and Stanley examined recent federal AI guidance and requirements (including the AI executive order and the NIST AI Risk Management Framework), discussed AI use cases, and shared innovative practices for ERM practitioners to support AI risk management efforts in collaboration with chief artificial intelligence officers and their teams.

A recent GAO report, authored by Walsh, highlights the continued need for effective and responsible implementation of AI across government. Released prior to the publication of OMB’s draft memorandum, the report notes that only 10 of the 23 agencies examined had implemented all “key AI requirements,” 12 had implemented some, and for one agency the requirements did not apply.

An enterprise view of AI: Accelerating value 

Agency ERM programs are uniquely positioned to aid in the safe, secure and trustworthy development and use of AI technologies—coalescing AI risk management from an enterprise perspective. As agencies begin to frame initial AI risk management conversations, it’s critical to employ three practices:  

1. Employ a defined governance structure.

Governance enables an organization’s risk management program by building AI risk management processes and procedures into the lifecycle of an AI system. A defined governance structure with clear roles and responsibilities sets the tone for the organization’s risk culture and bolsters risk-informed decision-making.  

ERM offers existing risk governance structures that streamline accountability and transparency and foster engagement among key AI actors with oversight responsibilities. Stanley shared that “the govern function is the first thing that I recommend an organization adopts from the AI Risk Management Framework.” He recommended that agencies “pick out a few govern subfunctions that represent the needs of the organization and get going on building those capabilities into the ERM framework.”

2. Develop a risk-based approach to AI use and implementation.

By integrating AI considerations into existing ERM programs, agencies can develop a more complete risk picture of AI to improve decision-making. The panelists agreed that AI has the potential to drastically improve governmental operations but also carries large-scale societal and national risks that need to be managed appropriately.

Stanley noted that risk professionals need to do more than just create a list of internal controls and move on. “The intent [of the AI risk management framework],” he said, “is to build a capacity where the most important [and] the most concerning risks can be addressed, measured and managed.” Organizations should examine their risk profiles to understand how risks to and from AI could shift existing risk scenarios. 

3. Collaborate with necessary stakeholders.

It’s important to have a clear governance structure, complete with appropriate roles and delineated responsibilities, to effectively operationalize AI risk management. Collaboration may begin with the chief artificial intelligence officer and the chief risk officer but should extend across the enterprise, potentially including IT, strategy, talent and HR, cyber and procurement functions. Walsh agreed, noting that “a chief artificial intelligence officer ought to, if they don’t already, have a strong working relationship with the ERM team.” 

Read our previous blog post for a recap of an earlier working session in this series.


This blog was co-authored by teams from the Partnership for Public Service (Elizabeth Byers and Anthony Vetrano) and Deloitte (Cynthia Vitters, Eliza Clark, Mark Stofanak, and Mackenzie Noe). 

