The U.S. Food and Drug Administration (FDA) recently published its long-awaited draft guidance on considerations for the use of artificial intelligence (AI) to support regulatory decision-making for drugs and biologics, which provides a comprehensive, risk-based framework to help sponsors evaluate and manage AI models. The guidance applies to the nonclinical, clinical, postmarketing, and manufacturing phases of the product lifecycle, but notably excludes from its scope AI used for drug discovery or for operational efficiencies that do not impact patient safety, drug quality, or the reliability of results from a nonclinical or clinical study. By implementing the recommended credibility assessment plans and submitting credibility assessment reports as described in the guidance, drug and biologic sponsors can establish robust documentation that validates their AI models for FDA review. The guidance also underscores the importance of continuous oversight of AI through a lifecycle maintenance plan, to ensure that models remain reliable and compliant as new data, technologies, and policies emerge.
FDA invites comments on the draft guidance through April 7, 2025.
Key regulatory expectations in the draft guidance include:
The guidance introduces a seven-step, risk-based framework for assessing the credibility of AI model outputs, tailored to the model’s context of use (COU). This process includes defining the question of interest and the COU, assessing model risk, and developing and executing a plan to establish the model’s credibility.
AI models in drug development require ongoing monitoring and adjustment to maintain their credibility throughout their lifecycle, especially during the manufacturing phase. FDA recommends that sponsors implement a risk-based lifecycle maintenance plan that specifies performance metrics, monitoring frequency, and triggers for retesting. Significant changes to the model or to manufacturing processes may require re-executing parts of the credibility assessment, including retraining and retesting. FDA expects the lifecycle maintenance plan to be incorporated into the manufacturing site’s pharmaceutical quality system and summarized in the marketing application for any AI models associated with specific products or processes, ensuring regulatory compliance and ongoing oversight. Sponsors should report to FDA any changes that impact model performance or product quality.
Establishing robust, company-wide AI governance frameworks is essential for smoothly implementing these regulatory expectations and improving risk management throughout the AI model lifecycle. Supported by clear standard operating procedures (SOPs) and policies covering AI model development, risk assessment, and validation, such frameworks help pharma and biologics companies ensure that AI models are developed, validated, and monitored in compliance with FDA's new guidance, with effective risk management, transparency, and ongoing oversight. By embedding human governance and risk management at every stage, companies can more effectively meet regulatory requirements and reduce potential risks. Policies should include a threshold or risk matrix for determining when a particular technology qualifies as AI, as defined by FDA, rather than, for example, a complex decision tree, and when AI used for operational efficiencies may impact patient safety, drug quality, or the reliability of results from nonclinical or clinical studies, thereby ensuring appropriate regulatory oversight and risk mitigation.
Cross-functional teams, including experts from regulatory affairs, data science, clinical, IT, legal, and quality assurance, should collaborate to ensure that AI systems are aligned with regulatory standards and that potential issues are proactively addressed. Appointing an AI governance lead or chief AI officer helps centralize responsibility and align efforts across teams. Independent oversight mechanisms, such as advisory boards and regular audits, should be implemented to monitor AI systems throughout their lifecycle and maintain regulatory compliance.
Early engagement with FDA is a key component of this process. Sponsors should leverage programs such as the CDER Animal Model Qualification Program, the Digital Health Technologies (DHTs) Program, the Emerging Technology Program (ETP), and the Model-Informed Drug Development (MIDD) Program. These engagement opportunities allow sponsors to set clear expectations, address challenges early, and ensure their AI models are compliant, improving both regulatory submissions and ongoing AI model performance.
FDA’s new guidance provides a structured and detailed approach for sponsors to navigate the complexities of using AI in drug and biologic development. By following the seven-step risk-based framework, submitting comprehensive credibility assessment plans and reports, and implementing lifecycle maintenance strategies, sponsors can ensure that their AI models meet regulatory expectations while maintaining patient safety, drug quality, and the reliability of results. This proactive approach will help sponsors mitigate risks and support FDA in its review process, fostering continued innovation and regulatory compliance in the evolving landscape of AI-driven drug development. Furthermore, establishing a strong, company-wide AI governance structure, with dedicated AI leadership, cross-functional teams, and robust SOPs and policies, is critical to meeting these regulatory expectations, ensuring effective risk management, and advancing the responsible use of AI in drug development.
Given the complexities associated with AI’s role in drug development, early engagement with FDA is crucial to navigating the regulatory landscape effectively. Sponsors who proactively consult with FDA can set clear expectations, address potential challenges early, and help ensure the responsible deployment of AI technologies. By adhering to FDA’s framework, sponsors can not only facilitate smoother regulatory processes but also contribute to the safe and effective use of AI in advancing public health.
FDA has invited comments on the draft guidance through April 7, 2025. If you wish to submit a comment, have questions about the implications of this guidance for your company’s business, or need regulatory support for AI products, please contact any of the authors of this alert or the Hogan Lovells attorney or regulatory specialist with whom you regularly work.
Authored by Melissa Bianchi, Melissa Levine, Bert Lao, Alex Smith, and Ashley Grey.