
What the new federal guidance on artificial intelligence means for agencies

August 15, 2019 | Updated on October 21, 2020

To ensure the responsible use of artificial intelligence in the U.S., the National Institute of Standards and Technology this week released a plan that could help our government catch up with other countries and international organizations on setting standards for trustworthy AI.

The agency acted in response to a February 2019 executive order that called on NIST to create a plan for developing “technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies.”

In the past few years, AI research, development and user communities have endorsed the responsible use of AI to ensure that people can trust the results these systems produce and that those results do not lead to adverse effects. Requirements on safety, security, reliability, accuracy, explainability and transparency feature prominently in most plans.

Several federal agencies have been grappling with the need for AI standards. The federal plan released this week puts our government firmly in the corner of responsible AI use along with Canada, the European Union, the Organization for Economic Cooperation and Development, and other countries and organizations that have promoted AI guidelines or standards or called for them to be developed.

The U.S. plan does not promote specific guidelines for federal agencies to follow; rather, it recommends steps agencies should take when developing those guidelines on their own.

NIST’s plan encourages agencies to work with one another, the private sector and international partners when developing AI guidelines, and it emphasizes the importance of trustworthiness as the key way to “give [the American public] confidence in AI technologies, and cultivate trust in the design, development, and use of AI.”

The NIST plan highlights AI uses in automated vehicles, medical devices and national security being explored by the Transportation Department, the Food and Drug Administration, the Homeland Security Department and the Office of the Director of National Intelligence. These agencies are working to ensure AI tools developed by the private sector, such as self-driving cars, are safe to use, whether by recommending guidelines that address safety and reliability or by calling for guidelines around the security and integrity of the technology so malicious actors cannot exploit it.

In researching the issue for several papers on AI, the Partnership found that many agency officials seek to ensure their AI use is reliable and unbiased. As the government works on AI standards, it could benefit from the findings in “More Than Meets AI Part II: Building Trust, Managing Risk.” The paper, released recently by the Partnership together with the IBM Center for The Business of Government, highlights ways federal agencies can manage risks and use AI responsibly.