The U.S. Office of Management and Budget (OMB) recently published a final memorandum (AI Memo) adopting requirements on federal agencies’ oversight, procurement, and use of artificial intelligence. The AI Memo requires agencies to designate a Chief AI Officer (CAIO) and describes the CAIO’s roles and responsibilities. It also directs agencies to increase their capacity for responsible AI adoption, including support for sharing and re-using AI models, code, and data. In addition, the AI Memo requires agencies to follow minimum risk management practices when using safety-impacting AI and rights-impacting AI and lists specific categories of AI that are presumed to impact rights or safety.
The AI Memo reflects comments received on OMB’s draft memorandum released last November in response to President Biden’s October 2023 AI Executive Order. While the AI Memo speaks directly to federal agencies and their vendors, all organizations can glean useful insights into the Biden Administration’s priorities and approach to AI regulation and governance.
By May 27, 2024, most federal agencies must appoint a CAIO to coordinate AI use across the agency, promote innovation, and manage AI risks. In addition, OMB directs some agencies (mainly executive departments) to establish by the same date AI Governance Boards composed of senior officials to oversee agency AI use.
Federal agencies must also prepare and submit to OMB a compliance plan within 180 days of the AI Memo’s issuance (i.e., by September 24, 2024) and of any subsequent update to the AI Memo, and then every two years until 2036. Alternatively, an agency may submit a written determination that it does not use and does not anticipate using covered AI.
With few exceptions, each agency must inventory at least annually all of its AI use cases, submit the inventory to OMB, and publish a public version on its website. Under newly expanded requirements, AI inventories must identify whether each AI use case is safety-impacting or rights-impacting, as well as provide details about potential risks and plans to manage those risks. The AI Memo provides a non-exhaustive list of categories of AI that are presumed to be safety- or rights-impacting. When AI use cases cannot be individually inventoried for legal or policy reasons, agencies must annually report and release aggregate metrics on safety and rights impacts and risk management strategies.
Within one year, certain federal agencies (mainly executive branch agencies) must develop and publish on their websites an enterprise strategy for advancing responsible AI use and deepening AI maturity. The strategy must address current and planned uses for AI, governance, enterprise capacity for AI innovation, workforce capacity, and plans for future AI investment.
OMB encourages federal agencies to identify and remove barriers to innovation and foster internal environments that provide sufficient flexibility and resources to teams developing and deploying AI. The AI Memo provides recommendations related to IT infrastructure, including computing resources and software tools, data access and governance, cybersecurity systems, and appropriate safeguards for the use of generative AI.
OMB encourages federal agencies to prioritize hiring and developing talent in AI and AI-enabling roles to increase enterprise capacity for responsible AI innovation.
OMB directs federal agencies to actively share their custom-developed code, including models and model weights, for AI applications in active use, prioritizing code that has the greatest potential for re-use. Agencies must share that code as open-source software on a public repository, subject to applicable law, government policy, and certain other considerations (including contractual obligations and identifiable risks to agency mission, programs, or operations, or to the stability, security, or integrity of an agency’s systems or personnel). When full sharing is impossible for safety or other reasons, OMB encourages agencies to share as much of the data, code, or model as is practicable. In addition, agencies should procure custom-developed code, training and testing data, and other AI resources in a manner that allows for sharing and public release of the materials.
The AI Memo also establishes that data used to develop and test AI likely constitute a “data asset,” and thus should be released publicly as open government data assets in some cases. OMB notes that agencies should promote data interoperability when sharing AI data assets by coordinating with other agencies and using standardized data formats where feasible and appropriate.
OMB, in coordination with the White House Office of Science and Technology Policy, will establish an interagency council to harmonize the development and use of AI – including policies, best practices, exemplars, and resource sharing – across government.
The AI Memo describes minimum practices that agencies must implement by December 1, 2024, when using safety-impacting or rights-impacting AI. These terms are defined expansively. The minimum practices include:

- For all safety-impacting and rights-impacting AI: completing an AI impact assessment; testing the AI for performance in a real-world context; independently evaluating the AI; conducting ongoing monitoring and regular evaluation of risks; mitigating emerging risks; ensuring adequate human training and assessment; providing appropriate human oversight of AI-based decisions; and providing public notice and plain-language documentation.
- Additionally, for rights-impacting AI: assessing the AI’s potential impact on equity and fairness and mitigating algorithmic discrimination; consulting and incorporating feedback from affected communities; conducting ongoing monitoring for AI-enabled discrimination; notifying individuals negatively affected by the AI’s use; and maintaining human consideration and remedy processes, including opt-out options where practicable.
OMB encourages federal agencies to adopt stronger practices as appropriate. An agency’s CAIO may waive the minimum practices for a specific AI product or component.
The AI Memo also provides agencies with recommendations for responsible AI procurement. OMB recommends measures for transparency, such as obtaining adequate documentation to assess an AI model’s capabilities and limitations and the provenance of data used to train or operate the AI. The AI Memo also directs agencies to encourage competition by not entrenching incumbents, promoting interoperability, and discouraging vendors from inappropriately favoring their own products. Agencies should also negotiate contract provisions that maximize government rights to data and any improvements, and that protect government data from unauthorized disclosure and use. Finally, the AI Memo encourages agencies to include additional risk management requirements in contracts for biometric-based AI, generative AI, and particularly dual-use foundation models, including requirements for adequate testing and safeguards and for the results of internal or external testing, including red-teaming.
While the AI Memo imposes requirements on agencies, it provides useful insights for the private sector regarding the Administration’s approach to AI governance, adoption, and risk management. Private sector entities interested in contracting with the federal government for AI technologies will want to be familiar with these requirements and policies. Other organizations may also want to review their AI governance and use practices in light of the standards and requirements outlined in the AI Memo.
Concurrent with the AI Memo’s release, OMB published a Request for Information on AI procurement that will help OMB further develop procurement standards. Comments are due April 29, 2024.
If you have questions about private sector implications, our global AI team is available to help you consider how insights from OMB’s AI Memo may apply to your business.
Authored by Katy Milner, Mark Brennan, Ryan Thompson, and Ambia Harper.