
U.S. National Institute of Standards and Technology issues draft guidance on AI safety and standards


The U.S. National Institute of Standards and Technology (NIST) is seeking comment on four draft publications meant to promote safe, secure, and trustworthy artificial intelligence (AI) systems. Two drafts provide guidance to help organizations manage the risks of generative AI and serve as companion resources to NIST’s AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF). A third draft offers methods for promoting transparency in AI-created content. The fourth proposes a plan for developing global AI standards. NIST also launched NIST GenAI, an evaluation platform to support testing, measurement, and evaluation of generative AI technologies. The draft publications and NIST GenAI are part of NIST’s response to President Biden’s October 2023 AI Executive Order, which directs NIST to help identify and mitigate AI risks while supporting responsible innovation and U.S. technological leadership. NIST is accepting comments on each draft item until June 2, 2024.

Mitigating Generative AI Risks

The draft AI RMF Generative AI Profile (NIST AI 600-1) is designed to help organizations identify unique risks posed by generative AI and potential risk management actions that align with their goals and priorities. Developed as a companion to NIST’s AI RMF, the guidance document identifies 12 risks novel to or exacerbated by the use of generative AI and more than 400 possible actions that developers can take to manage them. The 12 risks include easier access to information related to chemical, biological, radiological, or nuclear weapons; a lowered barrier to entry for hacking, malware, phishing, and other cybersecurity attacks; and the production of hate speech and toxic, denigrating, or stereotyping content. Following the detailed descriptions of these 12 risks is a matrix of the more than 400 actions that developers can take to mitigate them.

Reducing Threats to the Data Used to Train AI Systems

The draft publication on Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A) is designed to be used alongside NIST’s SSDF (SP 800-218). This companion resource expands the SSDF’s guidance for securing software code to address the concern that malicious training data could adversely affect generative AI systems. It covers the training and use of AI systems and offers recommendations for handling training data and the data collection process.
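For illustration only, the sketch below shows one control of the kind such guidance contemplates: pinning SHA-256 hashes of training data files in a manifest and verifying them before training, so that tampered or substituted data is detected. The file layout and function names are assumptions for this example, and the draft’s recommendations are broader than this single check.

    # Minimal sketch (assumed names): pin and verify hashes of training data.
    # One illustrative integrity check, not a method specified in SP 800-218A.
    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def write_manifest(data_dir: Path, manifest: Path) -> None:
        # Record a hash for every file in the training data directory.
        hashes = {str(p): sha256_of(p)
                  for p in sorted(data_dir.rglob("*")) if p.is_file()}
        manifest.write_text(json.dumps(hashes, indent=2))

    def verify_manifest(manifest: Path) -> list[str]:
        # Return the files whose contents no longer match the pinned hashes.
        pinned = json.loads(manifest.read_text())
        return [name for name, digest in pinned.items()
                if not Path(name).is_file() or sha256_of(Path(name)) != digest]

A check like this detects tampering after collection; it does not address data that was malicious at the point of collection, which is why the draft also addresses the collection process itself.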

Reducing Synthetic Content Risks

The draft publication on Reducing Risks Posed by Synthetic Content (NIST AI 100-4) is intended to reduce risks associated with the rise of “synthetic” digital content (AI-created or -altered content) by promoting transparency. NIST AI 100-4 lays out methods for detecting, authenticating, and labeling synthetic content, such as digital watermarking and metadata recording. These techniques can be used to embed data identifying the origin or history of audiovisual content to help verify its authenticity. The draft also discusses preventing and reducing harms from synthetic child sexual abuse material and non-consensual intimate imagery.
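As a purely illustrative sketch of metadata recording (not a method NIST specifies, and far simpler than full provenance standards), the Python snippet below embeds a small provenance record in a PNG text chunk using the Pillow library and reads it back; the “provenance” key and the record’s fields are assumptions for this example.

    # Minimal sketch (assumed field names): record provenance in PNG metadata.
    import json
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_png_as_synthetic(src_path: str, dst_path: str, generator: str) -> None:
        # Embed a provenance record in a PNG text chunk.
        record = {"synthetic": True, "generator": generator}
        info = PngInfo()
        info.add_text("provenance", json.dumps(record))
        with Image.open(src_path) as img:
            img.save(dst_path, pnginfo=info)

    def read_provenance(path: str) -> dict:
        # Read the provenance record back out, if present (PNG files only).
        with Image.open(path) as img:
            raw = img.text.get("provenance")  # .text holds PNG text chunks
        return json.loads(raw) if raw else {}

Unsigned metadata of this kind is easily stripped or altered, which is why the draft discusses detection and authentication techniques alongside labeling.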

Global Engagement on AI Standards

The draft Plan for Global Engagement on AI Standards (NIST AI 100-5) is designed to drive the worldwide development and implementation of AI-related consensus standards, as well as cooperation, coordination, and information sharing with international allies and partners, standards-developing organizations, and the private sector. The draft invites feedback on areas and topics that may be urgent for AI standardization, including mechanisms for enhancing awareness of the origin of digital content, whether authentic or synthetic, and shared practices for testing, evaluation, verification, and validation of AI systems. The draft also notes that some topics may not be ripe for standardization because stakeholders still need to establish a foundational scientific understanding of them. NIST seeks comment in particular on how to prioritize the topics for standardization work and on the activities and actions in the plan.

NIST GenAI

In addition to the four draft publications, NIST announced NIST GenAI, a new program to evaluate and measure generative AI technologies that will help inform the work of the U.S. AI Safety Institute at NIST. The program will issue a series of challenge problems designed to evaluate and measure the capabilities and limitations of generative AI technologies, with a focus on identifying strategies to promote information integrity and guide the safe and responsible use of digital content. One of the program’s goals is to help people determine whether a human or an AI produced a given text, image, video, or audio recording. Registration opens in May 2024 for participation in the pilot evaluation, which will seek to understand how human-produced content differs from synthetic content.

Next Steps

NIST’s issuance of these draft publications is a significant step toward fulfilling its obligations under the AI Executive Order. While NIST guidance and resources are generally voluntary, they can establish best practices and industry-wide expectations, which may end up reflected in contract terms. Companies developing or deploying AI technologies should review the drafts for opportunities to provide feedback and help shape the scope and nature of the tools NIST is developing. NIST has identified specific areas for feedback, but companies can comment on any aspect of the drafts. NIST is accepting comments until June 2, 2024.


Authored by Katy Milner, Mark Brennan, Ryan Thompson, and Ambia Harper.
