
The U.S. Food and Drug Administration (FDA) has announced the completion of its first artificial intelligence-assisted (AI-assisted) scientific review pilot and appears to be moving quickly to implement generative artificial intelligence tools across all centers by June 30, 2025, according to FDA Commissioner Dr. Martin Makary. The broad outlines of this proposal, including the timing of the roll-out, were affirmed by Commissioner Makary in remarks at the Food and Drug Law Institute’s annual meeting in Washington, DC, on May 15.
If the initiative proceeds as planned, it will mark a pivotal shift in the regulatory landscape, introducing both efficiencies and novel questions related to the reliability and validity of the AI reviews, as well as considerations for sponsors seeking to protect proprietary data and ensure regulatory compliance in an AI-integrated environment. As part of this transformation, FDA will deploy secure generative AI systems across its internal data platforms, aiming to streamline repetitive review tasks, accelerate timelines, and reduce the administrative burden on scientific reviewers. Although this modernization effort holds substantial promise for public health, it also raises critical questions regarding regulatory transparency and reliability, data security, and sponsor confidentiality.
Following FDA’s first AI-assisted medical product review, the agency has committed to expanding the use of generative AI across all centers by June 30, 2025. Commissioner Makary emphasized the urgency of modernization, posting on X:
“Why does it take over 10 years for a new drug to come to market? Why are we not modernized with AI and other things? We’ve just completed our first AI-assisted scientific review for a product and that’s just the beginning.”
Regarded as a “game changer” by Jinzhong (Jin) Liu, Deputy Director of the Office of Drug Evaluation Sciences within FDA’s Center for Drug Evaluation and Research (CDER), the AI-assisted review enabled completion of tasks in minutes that previously took days.
FDA now plans to expand AI use beyond data review to include application completeness checks, document summarization, data extraction, and analysis for consistency across submissions.
As this transition rolls out, sponsors will need to prepare for new issues and risks associated with AI-assisted reviews, including reduced transparency during application review, potential bias, and challenges in identifying and contesting AI-generated findings.
As FDA moves forward, it will need to be mindful of the existing statutory and regulatory framework, its legal obligations in conducting application reviews, and the permissible agency uses of sponsor and patient data. The implementation of FDA’s generative AI system will be led by Jeremy Walsh, FDA’s new chief AI officer and Sridhar Mantha, who recently led the Office of Business Informatics in FDA’s Center for Drug Evaluation and Research.
As FDA integrates generative AI into the regulatory review process, sponsors should assess the emerging legal, security, and transparency risks associated with this shift. One primary area of concern relates to responding to AI-based rejections or adverse findings. FDA’s use of AI in application reviews introduces new risks to the fairness and transparency of regulatory decisions.
For example, if FDA rejects or delays action on a marketing application based on AI-generated analysis, sponsors could face difficulties in identifying, understanding, and contesting the basis for the agency’s determination.
These risks underscore the importance of AI system validation, the traceability and reproducibility of AI data assessments, model interpretability, and clear agency communication when AI is used in regulatory determinations.
Despite the agency’s efforts to make product application review faster by leveraging AI, this is novel legal terrain. It is not clear what data the AI tools are being trained on or how this machine learning will be deployed in regulatory decision-making. But there may be limits to FDA’s authority to rely on such data in the context of approving a marketing application. The agency’s legal authority to rely on non-sponsor data to reach regulatory conclusions – data that the sponsor does not own or have a right of reference to – may be limited under current law to applications submitted under section 505(b)(2) of the Federal Food, Drug, and Cosmetic Act and section 351(k) of the Public Health Service Act.
Similarly, when it comes to protecting information from public disclosure, there are also new questions. FDA is required to protect trade secrets and confidential commercial information (CCI) under the Trade Secrets Act, the Freedom of Information Act (FOIA), and the agency’s implementing regulations for those statutes. However, generative AI introduces new challenges in how FDA protects, accesses, processes, uses, and discloses trade secret information and CCI. In addition to data ownership and use issues, there are specific FOIA-related risks. Within this context, sponsors should anticipate that their data may be used and analyzed by AI systems and pay renewed attention to marking confidential submissions as “Confidential Commercial Information / Trade Secret.” Given that AI-assisted systems may generate application review summaries or indexes, sponsors should take all available opportunities offered by FDA to propose redactions to these documents before they are released through FOIA.
FDA’s proposed agency-wide deployment of generative AI by June 30, 2025, will represent a pivotal change in regulatory reviews. Although this modernization promises efficiency, it also brings novel legal, data security, and transparency challenges, which are likely to impact sponsors and the products they develop in ways that may be difficult to predict at this time. Hogan Lovells will continue to closely monitor FDA’s rollout of AI systems for application reviews and is available to assist sponsors with questions they may have about the agency’s use of these systems.
Authored by Robert Church, Mike Heyl, Jason Conaty, and Ashley Grey.