Perspectives on the ethical use of artificial intelligence in government


April 2, 2024 | Updated on June 3, 2024

The Partnership for Public Service, in collaboration with the IBM Center for the Business of Government, convened federal service delivery leaders for a roundtable conversation on ethical artificial intelligence and benefits delivery.  

During the conversation, we heard from a range of leaders on how they are approaching the possibilities and challenges AI presents for government. Here are the main takeaways from their discussion. 

AI is an impact multiplier 

AI’s most immediate government use cases are a matter of scale and speed: it can make big tasks smaller and onerous ones simple. However, just like the humans who make and use it, AI can cause harm, and its unintended consequences are multiplied and scaled just as its benefits are.

Leaders cited potential benefits such as:

  • Code generation for IT modernization.
  • Automation of rote tasks.
  • Focusing staff time on customers, not forms.

They also cited potential harms:

  • Entrenching systemic bias.
  • Output inaccuracies.
  • Performance degradation.

The federal government has the opportunity to thoughtfully and decisively address questions of fairness, explainability, security, transparency and privacy.  

To do so, leaders promoted establishing and empowering internal AI ethics bodies to develop and drive risk mitigation strategies. Government can call on lessons learned from managing disruptive technologies over the last century to accelerate governance strategy development. To translate culture and strategy into responsible operationalization, leaders proposed consolidating the technical and personnel elements needed to develop and maintain AI systems under agency authority.  

Agencies that enact these recommendations will be able to develop more robust, thoughtful and adaptable governance structures. Standing in the way are constraints on workforce development, gaps in institutional knowledge and the capital required to execute.

AI-ready government 

A well-trained and empowered federal workforce is critical to an effective government, and AI implementation is no exception. Agency leaders identified organizational culture, internal knowledge gaps, staff engagement and restrictive hiring authorities as key barriers to the ethical implementation of AI for service delivery. To overcome these challenges, they propose that agencies: 

  • Train leaders on AI’s technical and ethical implications so they can direct responsible implementation with confidence and clarity. 
  • Train current staff as AI implementation partners. Leveraging their institutional, domain and program expertise will save jobs, improve outcomes, mitigate harms and ease the transition for new technical hires. 
  • Engage upskilled staff to develop a mission-driven organizational culture aligned with transparent and ethical AI implementation. 
  • Organize cross-functional hiring teams with technical, HR and policy knowledge to attract and retain technical talent.  
  • Loosen hiring restrictions for technical talent to compete with private firms. 

In each of these cases, collaboration is critical. AI is an all-of-government opportunity that can be realized ethically and effectively only by leveraging the full diversity of skills and experiences available to the federal government.

Funding and mandate

Leaders underscored that they lack the money and mandate necessary to achieve responsible AI goals.

While President Biden’s AI executive order calls for adopting critical workforce measures and sets admirable ethical implementation goals, it does not provide the funding needed to act. The order alone will not unify AI’s fragmented policy and legal landscape, which hampers agencies that are ready to move forward.  

To achieve their AI goals, agencies need Congress to appropriate long-term, predictable funding and lay out clear frameworks for use.  

An urgent call to action 

As participants discussed these barriers, they emphasized the urgency of operationalizing AI responsibly.

Moving forward without resolving these foundational issues limits our government’s capacity to consider deeper concerns regarding responsible delivery. Solving them means having the talent and time to address those concerns.  

Government can’t move fast and break things. It must instead hold itself and its partners accountable for the responsible implementation of AI that improves services and protects the public. 

Read more from the IBM Center for the Business of Government at Artificial Intelligence Development for Government Agencies: Strategies for Ethical and Effective Implementation.

The 2024 Responsible AI Federal Roundtable Series is being offered as an opportunity for federal leaders to convene on timely topics in AI. The series is currently supported by and delivered in collaboration with the IBM Center for the Business of Government, Boston Consulting Group and Deloitte. See other blogs in the series: Lessons on Equitable and Effective AI for Public Health.
