Technological tools such as artificial intelligence “always have potential benefits and risks,” according to Terrence Neumann, an academic studying AI at the University of Texas at Austin. Many experts have created frameworks that aim to provide guardrails to help organizations take advantage of AI’s capabilities while avoiding its potential risks. One tally counts over 160 published frameworks—from organizations as diverse as the American Medical Association and the New York Times—delineating how automated decision-making tools can be developed and implemented ethically.
These frameworks often center on the concept of responsible artificial intelligence: the idea that AI tools must meet certain governance and ethical standards in their development, implementation and operation. Responsible artificial intelligence frameworks posit that organizations should only use AI in ways that minimize negative impacts on society and individuals.
Various public sector organizations in the United States also have begun work on frameworks for responsible artificial intelligence use. For example, the U.S. Agency for International Development’s Artificial Intelligence Action Plan and the University of California’s Responsible Artificial Intelligence report lay out standards and recommendations for future action to promote responsible AI use. The Government Accountability Office’s 2021 AI Accountability Framework describes how key practices in the areas of governance, data, performance and monitoring can assist public sector organizations in ensuring responsible AI use. And the October 2022 Blueprint for an AI Bill of Rights released by the Biden administration outlines principles that seek to protect the public in their interactions with automated systems.
Although they highlight many of the same principles, each of these frameworks addresses specific considerations for how to achieve responsible artificial intelligence in a particular context—for example, in the medical or legal fields. The experts with whom we spoke outlined how several core principles of responsible AI apply to public service delivery. The principles below should serve as examples for leaders of how to adapt established tenets of responsible artificial intelligence to the specific context of public services.
APPROPRIATE USE
A key facet of responsible AI is understanding when AI is or is not well suited to address a specific problem. In their current form, AI tools are best suited to specific tasks with clear parameters—for example, identifying whether an X-ray shows a broken bone. Many public services, on the other hand, involve complex decisions “where there’s a lot more room for uncertainty…and where there’s a really significant cost to getting it wrong,” one public service expert said. AI might be the right solution for an agency looking to answer website visitors’ simple questions via a chatbot, but might not be an appropriate choice to evaluate the likelihood an applicant will commit benefits fraud. AI is often the wrong tool when an incorrect decision—such as denial of much-needed financial assistance—has significant potential to harm customers, or when making the correct decision requires the kind of complex thinking and evaluation for which AI is not well suited.
The data used in, and outputs of, artificial intelligence models related to public service delivery should be representative of the eligible constituents of a particular public service. Particularly when certain constituent characteristics are underrepresented in data or form a small portion of a service’s customer base, public service leaders should ensure that the data underpinning AI tools accounts for everyone who might interact with a service. “If, for example, this service is serving a population that is underbanked, we know that many datasets out there have known gaps around this population,” said Taka Ariga, chief data scientist at the Government Accountability Office.
Without robust attention to representativeness, an AI model in this situation could fail to perform correctly and could even worsen service delivery. A model that relies on financial record data, for example, might interpret a lack of this history as an indicator of a customer’s ineligibility, denying them access to a service they qualify for because the data was not representative of the full range of potential customers.
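The representativeness gap described above can be checked mechanically before a model is trained. Below is a minimal, hypothetical sketch: it compares each group’s share of the training data against its share of the eligible population and flags any shortfall beyond a tolerance. The group labels, counts and tolerance are all invented for illustration, not drawn from any agency’s data.

```python
# Hypothetical sketch: flag groups underrepresented in training data
# relative to the eligible population. Labels and numbers are invented.

def representation_gaps(training_counts, population_counts, tolerance=0.05):
    """Return groups whose training-data share falls short of their
    population share by more than the tolerance.

    training_counts / population_counts: dicts mapping group -> count.
    """
    train_total = sum(training_counts.values())
    pop_total = sum(population_counts.values())
    gaps = {}
    for group, pop_count in population_counts.items():
        pop_share = pop_count / pop_total
        train_share = training_counts.get(group, 0) / train_total
        shortfall = pop_share - train_share
        if shortfall > tolerance:
            gaps[group] = round(shortfall, 3)
    return gaps

# Invented example: "unbanked" applicants are 20% of the eligible
# population but only 5% of the training records.
training = {"banked": 950, "unbanked": 50}
population = {"banked": 8000, "unbanked": 2000}
print(representation_gaps(training, population))  # {'unbanked': 0.15}
```

A check like this does not fix a gap, but it surfaces the kind of blind spot Ariga describes before the model is put in front of customers.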
NON-DISCRIMINATION

Equal treatment under the law is a core principle of democratic public service delivery, but agencies must be aware of the particular challenges that AI tools can present to the principle of non-discrimination. AI does well at eliminating certain types of bias—for example, unlike humans, it’s not susceptible to making different decisions because it is tired or hungry, or because the applicant is a family member—but it also faithfully reproduces any biases present in the data it was trained on. For example, if participants in a farm loan program have historically been primarily from certain states, an algorithm trained on that historical data may draw incorrect conclusions about who the program’s customers should be and reproduce existing patterns, unintentionally leading to the exclusion of eligible customers. “Generally, AI faithfully learns from the training data it’s fed but doesn’t automatically highlight qualitative issues that could contribute to skewed results,” said the GAO’s Ariga, noting that agencies must be particularly vigilant in building and training AI models to prevent automating biased outcomes.
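The farm loan scenario can be made concrete with a toy sketch. The “model” below simply learns how often each state appears among past recipients and approves applicants on that basis, so an eligible applicant from a state with no historical participants is rejected even though state has nothing to do with eligibility. Every state, count and threshold here is hypothetical.

```python
# Hypothetical sketch of how a naive model reproduces historical skew.
# The toy "model" scores applicants by how common their state was among
# past loan recipients. All data is invented for illustration.
from collections import Counter

def train_state_scorer(historical_recipients):
    """Learn state frequencies from past recipients (the training data)."""
    counts = Counter(r["state"] for r in historical_recipients)
    total = sum(counts.values())
    return {state: n / total for state, n in counts.items()}

def approve(applicant, state_scores, threshold=0.1):
    """Approve only applicants whose state was common in the past."""
    return state_scores.get(applicant["state"], 0.0) >= threshold

# Invented history: all past recipients came from two states.
history = [{"state": "IA"}] * 60 + [{"state": "NE"}] * 40
scores = train_state_scorer(history)

# An applicant from a state absent in the training data is rejected,
# even though nothing about eligibility depends on the state itself.
print(approve({"state": "IA"}, scores))  # True
print(approve({"state": "GA"}, scores))  # False
```

The model is behaving exactly as trained; the skew comes entirely from the historical data, which is the pattern Ariga warns agencies to watch for.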
TRANSPARENCY

When using artificial intelligence tools for service delivery, governments must be transparent with the public about why and how these tools are being used. Making information easily available can help build trust between agencies and the people they serve, and this is all the more important when customers may not have much familiarity with how these tools operate. However, public service leaders should focus on providing transparency in a way that is meaningful for the public, rather than providing technical information that is likely to create more confusion. “Explaining the algorithm itself is likely not sufficient,” said Vince Dorie, principal data scientist at Code for America. Instead, governments can demonstrate transparency by developing and adhering to a set of publicly available criteria that an AI tool must fulfill before it can be used for service delivery. Effective transparency requires agencies to “as much as possible, explain how the algorithm was evaluated, so that people understand…this is the standard it met,” Dorie said.
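One way to picture Dorie’s suggestion is as a pre-deployment check of a model’s evaluation results against the published criteria. The sketch below is hypothetical: the criterion names and thresholds are placeholders, not any agency’s actual standard.

```python
# Hypothetical sketch: compare a model's evaluation results against a
# published set of criteria it must meet before deployment. Criterion
# names and thresholds are invented placeholders.

PUBLISHED_CRITERIA = {
    "overall_accuracy": 0.95,         # minimum accuracy on a held-out set
    "min_group_accuracy": 0.93,       # minimum accuracy for every group
    "appeal_route_documented": True,  # customers can contest decisions
}

def meets_criteria(evaluation, criteria=PUBLISHED_CRITERIA):
    """Return (passed, failures) comparing results to each criterion."""
    failures = []
    for name, required in criteria.items():
        actual = evaluation.get(name)
        if isinstance(required, bool):
            ok = actual is True
        else:
            ok = actual is not None and actual >= required
        if not ok:
            failures.append(name)
    return (not failures, failures)

evaluation = {
    "overall_accuracy": 0.96,
    "min_group_accuracy": 0.91,  # falls short of the per-group threshold
    "appeal_route_documented": True,
}
passed, failures = meets_criteria(evaluation)
print(passed, failures)  # False ['min_group_accuracy']
```

Publishing the criteria and the results of a check like this lets the public see the standard the tool met, rather than the internals of the algorithm itself.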
DUE PROCESS AND OPPORTUNITY FOR REDRESS
Like traditional processes that provide opportunities for members of the public to appeal government decisions, public services using AI must provide customers with due process and opportunity for redress if they are negatively impacted by a decision made by or reliant on AI. For example, if an AI tool decides an applicant is likely a fraud risk and denies a benefit, “do they have the ability to see that result and its accompanying confidence metrics and contest that result?” asked Ariga. According to Dorie, due process in this context also requires that an AI-enabled service use the same standards—for example, the same requirements for eligibility—that would be used if the service operated without an AI tool.
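Ariga’s questions suggest what an auditable decision record might contain. The sketch below is a hypothetical illustration, not a real system: each automated outcome stores the confidence metric disclosed to the customer and a route to contest the result, so the case can reach a human reviewer. All field names are invented.

```python
# Hypothetical sketch of a decision record that supports due process:
# each automated outcome carries the confidence metric shown to the
# customer and a way to contest it. Field names are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    applicant_id: str
    outcome: str                # e.g. "approved" or "flagged_for_review"
    confidence: float           # model confidence, disclosed to the customer
    contested: bool = False
    human_review: Optional[str] = None

    def contest(self) -> None:
        """Customer appeal: route the case to a human reviewer."""
        self.contested = True
        self.human_review = "pending"

d = Decision(applicant_id="A-1001", outcome="flagged_for_review",
             confidence=0.62)
d.contest()
print(d.contested, d.human_review)  # True pending
```

Keeping the result, its confidence metrics and the appeal status together is one way to ensure a customer can see a decision and contest it under the same standards an AI-free process would use.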