National Artificial Intelligence Research and Development Strategic Plan 2023 Update

View Full Text: National-AI-RD-Strategy-2023.pdf

Artificial intelligence (AI)[1] is one of the most powerful technologies of our time. In order to seize the opportunities that AI presents, the nation must first work to manage its risks. The federal government plays a critical role in this effort, including through smart investments in research and development (R&D) that promote responsible innovation and advance solutions to the challenges that other sectors will not address on their own. This includes R&D to leverage AI to tackle large societal challenges and develop new approaches to mitigate AI risks. The federal government must place people and communities at the center by investing in responsible R&D that serves the public good, protects people’s rights and safety, and advances democratic values. This update to the National AI R&D Strategic Plan is a roadmap for driving progress toward that goal.

This plan defines the major research challenges in AI to coordinate and focus federal R&D investments. It will ensure continued U.S. leadership in the development and use of trustworthy AI systems, prepare the current and future U.S. workforce for the integration of AI systems across all sectors, and coordinate ongoing AI activities across all federal agencies.[i]

This plan, which follows national AI R&D strategic plans issued in 2016 and 2019, reaffirms eight strategies and adds a ninth to underscore a principled and coordinated approach to international collaboration in AI research:

Strategy 1: Make long-term investments in responsible AI research. Prioritize investments in the next generation of AI to drive responsible innovation that will serve the public good and enable the United States to remain a world leader in AI. This includes advancing foundational AI capabilities such as perception, representation, learning, and reasoning, as well as focused efforts to make AI easier to use and more reliable and to measure and manage risks associated with generative AI.

Strategy 2: Develop effective methods for human-AI collaboration. Increase understanding of how to create AI systems that effectively complement and augment human capabilities. Open research areas include the attributes and requirements of successful human-AI teams; methods to measure the efficiency, effectiveness, and performance of human-AI teaming applications; and mitigating the risk that human misuse of AI-enabled applications leads to harmful outcomes.

Strategy 3: Understand and address the ethical, legal, and societal implications of AI. Develop approaches to understand and mitigate the ethical, legal, and social risks posed by AI to ensure that AI systems reflect our nation’s values and promote equity. This includes interdisciplinary research to protect and support values through technical processes and design, as well as to advance areas such as AI explainability and privacy-preserving design and analysis. Efforts to develop metrics and frameworks for verifiable accountability, fairness, privacy, and bias are also essential.

Strategy 4: Ensure the safety and security of AI systems. Advance knowledge of how to design AI systems that are trustworthy, reliable, dependable, and safe. This includes research to advance the ability to test, validate, and verify the functionality and accuracy of AI systems, and to secure AI systems from cybersecurity and data vulnerabilities.

Strategy 5: Develop shared public datasets and environments for AI training and testing. Develop and enable access to high-quality datasets and environments, as well as to testing and training resources. A broader, more diverse community engaging with the best data and tools for conducting AI research increases the potential for more innovative and equitable results.

Strategy 6: Measure and evaluate AI systems through standards and benchmarks. Develop a broad spectrum of evaluative techniques for AI, including technical standards and benchmarks, informed by the Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework (RMF).

Strategy 7: Better understand the national AI R&D workforce needs. Improve opportunities for R&D workforce development to strategically foster an AI-ready workforce in America. This includes R&D to improve understanding of the limits and possibilities of AI and AI-related work, and the education and fluency needed to effectively interact with AI systems.

Strategy 8: Expand public-private partnerships to accelerate advances in AI. Promote opportunities for sustained investment in responsible AI R&D and for transitioning advances into practical capabilities, in collaboration with academia, industry, international partners, and other non-federal entities.

Strategy 9: Establish a principled and coordinated approach to international collaboration in AI research. Prioritize international collaborations in AI R&D to address global challenges, such as environmental sustainability, healthcare, and manufacturing. Strategic international partnerships will help power responsible progress in AI R&D and the development and implementation of international guidelines and standards for AI.

The federal government plays a critical role in ensuring that technologies like AI are developed responsibly and serve the American people. Federal investments over many decades have facilitated many of the key AI discoveries and innovations that power industry and society today, and federally funded research has sustained progress in AI throughout the field’s evolution. Federal investments in basic and applied research[ii] have driven breakthroughs powered by emerging technologies like AI across the board, including in climate, agriculture, energy, public health, and healthcare. Strategic federal investments in responsible AI R&D will advance a comprehensive approach to AI-related risks and opportunities in support of the public good.


[1] There are multiple definitions of AI and AI systems used by the federal government, including in the National Defense Authorization Act for Fiscal Year 2019 [Public Law 115-232, sec. 238(g)], the National Defense Authorization Act for Fiscal Year 2021 [Public Law 116-283, sec. 5002(3)], and the NIST AI Risk Management Framework, as well as a more encompassing view of automated systems articulated in the Blueprint for an AI Bill of Rights. The R&D priorities defined in this document are applicable and important to the full breadth of technologies covered by these definitions.

[i]     National Artificial Intelligence Initiative Office (NAIIO). (n.d.). National Artificial Intelligence Initiative.

[ii]    The federal government supports 41 percent of basic research funding in the United States. Burke, A., Okrent, A., & Hale, K. (2022, January 18). The State of U.S. Science and Engineering 2022. The National Science Board. Throughout this document, basic research includes both pure basic research and use-inspired basic research—the so-called Pasteur’s Quadrant defined by Donald Stokes in his 1997 book of the same name—referring to basic research conducted with use for society in mind. For example, the fundamental NIH investments in IT are often called use-inspired basic research.