
Back to basics: Navigating generative AI risk for public service delivery

Date
August 14, 2023 | Updated on January 8, 2024
Author
Meg Shelburne

Government agencies at the federal, state and local levels have an obligation to provide cost-effective, trustworthy and equitable public services to everyone they serve. Artificial intelligence tools have the potential to help agencies meet this responsibility by enhancing the efficiency and ease of public service delivery.

However, as laid out in the Partnership’s recent report, “In the Public AI,” artificial intelligence tools also pose significant risks to these efforts: When deployed irresponsibly, they can perpetuate bias, produce inaccurate results and obscure how decisions are made.

Since the publication of “In the Public AI,” the introduction of unprecedentedly powerful generative AI tools has made this tradeoff between AI’s promises and risks even more important for public servants to understand. In this period of uncertainty, the responsible-use principles derived from previous generations of AI can help ground and guide conversations around the possibilities of generative AI use.  

What is generative AI? How could it impact public service delivery?  

Generative AI is a branch of deep-learning artificial intelligence that generates new text, images or other content by reproducing the patterns and structures of the data it was trained on.

Generative AI could bring unprecedented efficiency to public service delivery by taking on or assisting with tasks performed by government workers. However, it is prone to the same errors as earlier AI systems, and because these tools are widely available and produce convincing outputs, they carry substantial risk when used without a clear understanding of their limitations.

Applying established best practices  

Our report’s Guiding Questions for Nontechnical Government Leaders distill the lessons learned from earlier AI tools into a set of key considerations that public servants can turn to at each step of procuring, implementing and evaluating an AI tool. As with earlier AI tools, agency leaders considering a generative AI tool must fully understand the tradeoffs involved and be able to articulate why generative AI is the right fit for the job. Attending to these questions at each step helps agencies mitigate the technology’s potential negative impacts.

Likewise, the set of foundational best practices for AI deployment outlined in our Building Blocks for Responsible AI Use framework is especially relevant for generative AI.

These practices include maintaining high-quality data, understanding biases in the data, recruiting an experienced workforce, and building agile government structures that enable collaboration and educated decision-making. While important for every aspect of an agency’s work, these actions are crucial in the context of AI and, in particular, generative AI implementation.   

To learn how lessons from previous iterations of AI in public service delivery can help agencies navigate the current wave of generative AI, read “In the Public AI.”

Meg Shelburne is an intern in the Partnership’s research, evaluation and modernizing government team.
