Artificial intelligence is increasingly part of our lives—voice assistants on our smartphones, chatbots on retail websites and algorithms that suggest the next television show we should watch. As AI becomes more common in our everyday interactions with private sector entities, it is also increasingly relevant for the delivery of public services by federal, state and local governments. Some government agencies have already incorporated artificial intelligence tools into their services—such as Federal Student Aid’s Aidan chatbot or Utah’s efforts to prevent pandemic unemployment fraud using AI.1 Others, however, are just beginning to explore whether and how AI tools can be incorporated into their service delivery.
The scale and speed of artificial intelligence tools give them enormous potential to enhance the efficiency of government service delivery, but also mean these tools must be employed carefully to avoid automating biased or inaccurate results. This is particularly important in the context of public service delivery, where governmental organizations have an obligation to provide trustworthy and equitable services to all possible customers.3 While a wide body of research exists on the potential risks of governmental use of AI in law enforcement and national security contexts,4 less work has been done to examine what is needed for public sector organizations to responsibly use artificial intelligence in service delivery.
The existing resources on standards for responsibly using artificial intelligence often focus on technical and data specifications.5 These are fundamental considerations, but such recommendations are often difficult for those without deep technical knowledge to understand. And while technical experts play an integral role in deciding whether and how to employ artificial intelligence, many more of the government leaders who contribute to this process—program managers, acquisition professionals, lawyers and frontline service providers—lack technical backgrounds.
This research brief from the Partnership for Public Service and Microsoft examines how principles of responsible artificial intelligence can apply to government service delivery and offers recommendations and considerations that non-technical government leaders should take into account as they decide whether and how to incorporate AI tools into their services. It also outlines recommendations for facilitating collaboration between technical and non-technical leaders, as both sets of perspectives are vital to ensuring responsible use of artificial intelligence.
Technological tools such as artificial intelligence “always have potential benefits and risks,” according to Terrence Neumann, an academic studying AI at the University of Texas at Austin. Many experts have created frameworks that aim to provide guardrails to help organizations take advantage of AI’s capabilities while avoiding its potential risks. One tally counts over 160 published frameworks—from organizations as diverse as the American Medical Association and the New York Times—delineating how automated decision-making tools can be developed and implemented ethically.
These frameworks often center on the concept of responsible artificial intelligence: the idea that AI tools must meet certain governance and ethical standards in their development, implementation and operation. Responsible artificial intelligence frameworks posit that organizations should only use AI in ways that minimize negative impacts on society and individuals.
Various public sector organizations in the United States also have begun work on frameworks for responsible artificial intelligence use. For example, the U.S. Agency for International Development’s Artificial Intelligence Action Plan and the University of California’s Responsible Artificial Intelligence report lay out standards and recommendations for future action to promote responsible AI use. The Government Accountability Office’s 2021 AI Accountability Framework describes how key practices in the areas of governance, data, performance and monitoring can assist public sector organizations in ensuring responsible AI use. And the October 2022 Blueprint for an AI Bill of Rights released by the Biden administration outlines principles that seek to protect the public in their interactions with automated systems.
Although they highlight many of the same principles, each of these frameworks addresses specific considerations for how to achieve responsible artificial intelligence in a particular context—for example, in the medical or legal fields. The experts with whom we spoke outlined how several core principles of responsible AI apply to public service delivery. The principles below should serve as examples for leaders of how to adapt established tenets of responsible artificial intelligence to the specific context of public services.
A key facet of responsible AI is understanding when AI is or is not well-suited to address a specific problem. In their current form, AI tools are best suited to specific tasks that involve clear parameters—for example, identifying whether an X-ray shows a broken bone. Many public services, on the other hand, involve complex decisions “where there’s a lot more room for uncertainty…and where there’s a really significant cost to getting it wrong,” one public service expert said. AI might be the right solution for an agency looking to answer website visitors’ simple questions via a chatbot, but might not be an appropriate choice to evaluate the likelihood an applicant will commit benefits fraud. AI is often not the right tool when an incorrect decision—such as denial of much-needed financial assistance—has significant potential to harm customers, or when reaching the correct decision requires complex thinking and evaluation for which AI is not well suited.
The data used in, and outputs of, artificial intelligence models related to public service delivery should be representative of the eligible constituents of a particular public service. Particularly when certain constituent characteristics are underrepresented in data or form a small portion of a service’s customer base, public service leaders should ensure that the data underpinning AI tools accounts for everyone who might interact with a service. “If, for example, this service is serving a population that is underbanked, we know that many datasets out there have known gaps around this population,” said Taka Ariga, chief data scientist at the Government Accountability Office.
Without robust attention to representativeness, an AI model in this situation could fail to perform correctly and could even worsen service delivery. A model that relies on financial record data, for example, might interpret a lack of this history as an indicator of a customer’s ineligibility, denying them access to a service they qualify for because the data was not representative of the full range of potential customers.
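One way to act on this principle, sketched below purely as an illustration, is to compare how each group is represented in a model’s training data versus the eligible population the service must reach. The data frames, column names and threshold shown are hypothetical assumptions, not elements of any agency’s actual system.

```python
import pandas as pd

def representation_gap(training_df: pd.DataFrame,
                       population_df: pd.DataFrame,
                       characteristic: str) -> pd.DataFrame:
    """Compare each group's share of the training data with its share of the
    eligible population; large negative gaps flag underrepresented groups."""
    train_share = training_df[characteristic].value_counts(normalize=True)
    pop_share = population_df[characteristic].value_counts(normalize=True)
    report = pd.concat([train_share, pop_share], axis=1,
                       keys=["training_share", "eligible_share"]).fillna(0.0)
    report["gap"] = report["training_share"] - report["eligible_share"]
    return report.sort_values("gap")

# Hypothetical usage: flag groups (for example, applicants with no banking
# history) that make up far less of the training data than of the population.
# gaps = representation_gap(training_records, eligible_population, "banking_status")
# print(gaps[gaps["gap"] < -0.05])
```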
Equal treatment under the law is a core principle of democratic public service delivery, but agencies must be aware of the particular challenges that AI tools can present to the principle of non-discrimination. AI does well at eliminating certain types of bias—for example, unlike humans, it’s not susceptible to making different decisions because it is tired or hungry, or because the applicant is a family member—but it also faithfully reproduces any biases present in the data it was trained on. For example, if participants in a farm loan program have historically been primarily from certain states, an algorithm trained on that historical data may draw incorrect conclusions about who the program’s customers should be and reproduce existing patterns, unintentionally leading to the exclusion of eligible customers. “Generally, AI faithfully learns from the training data it’s fed but doesn’t automatically highlight qualitative issues that could contribute to skewed results,” said the GAO’s Ariga, noting that agencies must be particularly vigilant in building and training AI models to prevent automating biased outcomes.
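As a rough illustration of how a team might surface this kind of skew before training a model, the sketch below computes historical outcome rates by group (states, in the farm loan example above). The data set and column names are assumptions for demonstration only, not a method prescribed by the experts quoted here.

```python
import pandas as pd

def outcome_rates_by_group(history: pd.DataFrame,
                           group_col: str = "state",
                           outcome_col: str = "approved") -> pd.Series:
    """Approval rate for each group in the historical data. A wide spread can
    signal skew that a model trained on this data would faithfully reproduce."""
    return history.groupby(group_col)[outcome_col].mean().sort_values()

# Hypothetical usage with an assumed loan_history DataFrame:
# rates = outcome_rates_by_group(loan_history)
# print(f"Gap between highest and lowest approval rates: {rates.max() - rates.min():.2f}")
```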
When using artificial intelligence tools for service delivery, governments must be transparent with the public about why and how these tools are being used. Making information easily available can help build trust between agencies and the people they serve, and this is all the more important when customers may not have much familiarity with how these tools operate. However, public service leaders should focus on providing transparency in a way that is meaningful for the public, rather than providing technical information that is likely to create more confusion. “Explaining the algorithm itself is likely not sufficient,” said Vince Dorie, principal data scientist at Code for America. Instead, governments can demonstrate transparency by developing and adhering to a set of publicly available criteria that an AI tool must fulfill before it can be used for service delivery. Effective transparency requires agencies to “as much as possible, explain how the algorithm was evaluated, so that people understand…this is the standard it met,” Dorie said.
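The sketch below is one hypothetical way to put that idea into practice: an agency publishes plain-language criteria a tool must meet, and then reports, criterion by criterion, whether the evaluated tool met them. The specific metric names and thresholds are invented for illustration and are not standards drawn from this brief.

```python
# Plain-language criteria an agency might publish before putting a tool into
# service. The metric names and thresholds here are invented for illustration.
published_criteria = {
    "accuracy_on_representative_test_set": ("min", 0.95),
    "approval_rate_gap_between_groups": ("max", 0.02),
    "share_of_decisions_reviewed_by_staff": ("min", 0.10),
}

def meets_published_criteria(evaluation_results: dict) -> dict:
    """Report, criterion by criterion, whether the measured results satisfy
    the published standard the tool was evaluated against."""
    report = {}
    for name, (direction, threshold) in published_criteria.items():
        measured = evaluation_results.get(name)
        if measured is None:
            report[name] = False
        elif direction == "min":
            report[name] = measured >= threshold
        else:
            report[name] = measured <= threshold
    return report

# Hypothetical usage:
# print(meets_published_criteria({"accuracy_on_representative_test_set": 0.97,
#                                 "approval_rate_gap_between_groups": 0.05}))
```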
Like traditional processes that provide opportunities for members of the public to appeal government decisions, public services using AI must provide customers with due process and opportunity for redress if they are negatively impacted by a decision made by or reliant on AI. For example, if an AI tool decides an applicant is likely a fraud risk and denies a benefit, “do they have the ability to see that result and its accompanying confidence metrics and contest that result?” asked Ariga. According to Dorie, due process in this context also requires that an AI-enabled service use the same standards—for example, the same requirements for eligibility—that would be used if the service operated without an AI tool.
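One possible pattern for supporting contestability, sketched below under assumed field names, is to record each AI-assisted decision together with its confidence metric, the plain-language factors behind it and instructions for appealing it, so the result can be seen and challenged. This is an illustrative data structure, not a required schema.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewableDecision:
    """An illustrative record that keeps an AI-assisted result contestable."""
    application_id: str
    outcome: str                  # e.g., "flagged_for_fraud_review"
    model_confidence: float       # confidence metric shown to the customer
    factors: list[str] = field(default_factory=list)  # plain-language reasons
    appeal_instructions: str = "Contact the program office to contest this result."

# Hypothetical usage:
# decision = ReviewableDecision("A-1024", "flagged_for_fraud_review", 0.62,
#                               ["income could not be verified"])
```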
The standards above are among the many principles that can guide public sector leaders in ensuring their use of artificial intelligence in service delivery is responsible and contributes to the public good. But how should non-technical leaders apply these principles in their decision-making around AI?
Ensuring responsible AI use is not a one-time exercise, but a continual process that requires attention every step of the way—from the first contemplation of incorporating AI into a program to the routine use of a fully implemented tool. Below are some of the specific considerations and questions non-technical leaders should address at each stage of the process.
Responsibly evaluating, implementing and using artificial intelligence tools requires successful collaboration between technical and non-technical leaders. The data scientists building AI tools, the chief information officers operating them, the general counsels reviewing their privacy implications, the program managers interpreting their results and many others all need to be collaborating for AI use to follow responsible artificial intelligence principles. The experts we interviewed highlighted key recommendations for how technical and non-technical leaders can productively collaborate to ensure responsible AI use:
Focus on the problem and intended outcomes. Focusing on the common reference point of the problem an AI tool is intended to solve can help technical and non-technical leaders communicate more effectively. Leaders can collaborate better when they focus on ensuring the tool is achieving intended outcomes rather than getting caught up in technical specifications or program management frameworks. According to Neumann of the University of Texas, “really thinking about the quality of the outcomes” is the key to more effective communication and collaboration.
Build a common foundation. Technical and non-technical leaders each bring important expertise to conversations around responsible AI, but this expertise is sometimes difficult to communicate across different frames of reference. Developing a baseline understanding of artificial intelligence and how it operates can prepare non-technical leaders for collaboration around responsible AI, while technical experts can benefit from learning more about the service being delivered and the service population’s needs and concerns. Ensuring that everyone has a common understanding of the technical and non-technical foundations can help leaders better understand each other and more productively collaborate.
Think about AI in context. “It’s often a very disconnected conversation” between the development of an AI tool and its implementation in the context of a program, noted Ariga of the GAO. Technical and non-technical leaders can improve their coordination by recognizing from the beginning that AI tools do not operate independently, but rather as part of a larger context. Considering questions such as “how will frontline employees interpret model outputs” and “what are the privacy implications of using this system” will help technical and non-technical leaders find concrete points of collaboration and ensure the AI tool is well-integrated into the broader system.
Recognize when AI is not the answer. Successfully collaborating on responsible AI requires technical and non-technical leaders to not only work together to implement AI tools, but also to mutually recognize when AI is not an appropriate solution. Program managers may have their hearts set on improving their service delivery by incorporating an AI tool that has worked well in other contexts, or data scientists may be eager to implement an innovative prototype, but each group must heed concerns from the other when they arise. Open discussions and a commitment to responsible AI principles can help leaders build understanding about why colleagues may believe AI is not appropriate in a certain context and come to agreement about when to pursue and when to abandon artificial intelligence solutions.
Different agencies and levels of government have widely varying experience with using artificial intelligence in public service delivery. Some have already begun deploying AI tools in their services, while others have yet to adopt AI. Whatever their level of experience, public sector organizations must put responsible AI principles at the center of their decision-making. But to successfully apply these principles, agencies need to have in place the building blocks that create an environment that fosters responsible AI use: data, talent and governance structures.
High-quality data is fundamental to successful and responsible artificial intelligence tools. Organizations that have a rigorous, holistic approach to cleaning and storing data will be better positioned to responsibly use artificial intelligence. However, some organizations are addressing questions of AI and data quality separately rather than as intertwined considerations. “I am just beginning to see that connection [between data quality and readiness for AI] happen in a meaningful operational way in state and local governments,” said Teri Takai, senior vice president at the Center for Digital Government. Robust data quality is an important consideration regardless of an agency’s intent to use AI, but agencies should take particular care to have this foundation in place if they are interested in using artificial intelligence tools to deliver public services. An explicit connection to AI readiness can also help drive data quality initiatives if data is recognized as a necessary precursor to many potential uses of AI.
Talent is also a crucial building block for responsible AI use. Whether agencies are building their own AI systems or acquiring them from outside vendors, they should ensure they have sufficient expertise to evaluate and operate artificial intelligence tools. Public sector organizations should explore ways to develop technical and non-technical staff capacity to understand the risks, benefits and implications of using AI for service delivery. Some current efforts recognize this need and aim to assist agencies in developing expertise—the AI Training Act signed into law in October 2022 charges the Office of Personnel Management with developing a training program to help acquisition professionals better understand artificial intelligence and its potential risks and benefits.
Responsible use of artificial intelligence for public service delivery also requires strong governance structures that facilitate collaboration and agility. Before beginning AI initiatives, organizations should be sure they have in place processes that enable collaborative decision-making that takes into account the many perspectives needed for truly responsible AI use. “It’s important to say, does your organization have the governance structure to methodically bring these perspectives in throughout the [AI] lifecycle, and do they have enough authority in the matter?” said the GAO’s Ariga. Agencies also should consider how to establish governance processes that facilitate agility, so that they can adapt as circumstances change and continue to adhere to responsible AI principles.
Public sector organizations interested in using AI for service delivery can better uphold responsible artificial intelligence principles such as non-discrimination and transparency through collaboration between technical and non-technical leaders and a focus on establishing strong data, talent and governance foundations.
Elizabeth Byers contributes to the Partnership’s portfolio of government effectiveness research, in particular projects on improving the customer experience with federal services. The daughter and granddaughter of public servants, she grew up with a deep respect for federal workers and their dedication to working on behalf of the public. Elizabeth’s favorite public servant is Carla Hayden, the first woman to be appointed Librarian of Congress and a strong advocate for open and equal access to public services.
Taka Ariga
Chief Data Scientist and Director of Innovation Lab
Government Accountability Office
Nikhil Deshpande
Chief Digital Officer
Georgia Technology Authority
Vincent Dorie, Ph.D.
Principal Data Scientist
Code for America
Laurel Eckhouse
Quantitative Criminal Justice Researcher
Terrence Neumann
Ph.D. Candidate
University of Texas at Austin
Steve Nichols
Former Chief Technology Officer
Teri Takai
Senior Vice President
Center for Digital Government