2024-2025 Global AI Trends Guide
On January 7, 2025, the U.S. Food and Drug Administration (FDA) published the draft guidance “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.” The draft guidance consolidates the recommendations FDA has provided over the years on the content a marketing application should include for a device with AI-enabled software functions, in order to facilitate FDA’s determination of the device’s safety and effectiveness for purposes of granting marketing authorization. It also includes recommendations for device manufacturers to consider during design and development of an AI-enabled device, consistent with the agency’s total product lifecycle (TPLC) approach. Throughout, the guidance reflects FDA’s continuing efforts to foster good machine learning practices and transparency to users, and it encourages sponsors to consider, starting early in the process, all aspects of developing and managing an AI-enabled device through the product’s lifecycle.
FDA seeks comments on the draft guidance through April 7, 2025.
An AI-Enabled Device Software Function (AI-DSF) is defined as a device software function that implements one or more AI models (mathematical constructs that generate an inference or prediction based on new input data) to achieve its intended purpose. In the new draft guidance, FDA recognizes the differences between terminology typically used by the AI community and terminology defined by regulation and therefore used by FDA, such as how “validation” is defined. Because this has been a source of confusion between sponsors and FDA in prior submissions, the agency recommends that sponsors reference the FDA Digital Health and Artificial Intelligence Glossary when preparing marketing applications and communicating with FDA to ensure clear communication across all parties.
FDA details its recommendations on the content of a premarket application across the following key topics, and Appendix A of the draft guidance provides a concise table summarizing the recommended organization of this information. Additionally, FDA identifies a number of areas that may well need to be addressed in a premarket submission even though they are often thought of as postmarket requirements.
1. Device Description
The overarching recommended content of the device description is consistent with current practice for medical devices with software functions. Specifically for an AI-DSF, FDA recommends explicitly stating that AI is used within the device and describing how the device uses AI to achieve its intended use. In addition to explaining the intended users, workflows, and use environments, sponsors should describe any calibration and/or configuration procedures needed to maintain performance.
2. User Interface and Labeling
As with all devices, FDA encourages sponsors to provide a holistic description of the user interface (UI) to ensure clear understanding of the device by the agency, especially if the UI is used as a risk control mechanism. The desire for transparency and clear communication is echoed in the labeling recommendations, where FDA details the types of information that should be shared with users.
3. Risk Assessment
Consistent with the Premarket Software Guidance (2023), FDA states that a comprehensive risk management file should be provided as part of a premarket application and should consider the full continuum of use of the device. The agency further notes that there may well be instances in which a performance monitoring plan should be included in the risk management file, including as a special control for a 510(k) or De Novo or as a condition of approval for a PMA. FDA also encourages sponsors to reference FDA-recognized voluntary consensus standards specific to software, such as AAMI CR34971, Guidance on the Application of ISO 14971 to Artificial Intelligence and Machine Learning.
4. Data Management
The guidance emphasizes that a clear explanation of data management practices, as well as characterization of data used for development and validation, is essential for the FDA review team to understand the characteristics of an AI-enabled device. FDA discusses the importance of data management to promote generalizability of AI models to the intended use population(s) and to identify and mitigate biases. Consistent with FDA’s recent recommendations in other forums and to individual sponsors, the guidance stresses separation and independence between development (training and tuning) and test datasets (e.g., collection of data from completely different clinical sites) as well as the need to show that validation data sufficiently represents the device’s intended use (target) population and proposed indications for use. For the latter, FDA references several other recently published guidances (and draft guidances).
5. Model Description and Development
FDA explains that the description of the model and its development process go beyond the general device description to provide specific information about the technical characteristics of the model and the methods used in development. This includes, for example, details on how the model was trained and how any thresholds were determined.
6. Validation
FDA recommends that AI-enabled device manufacturers demonstrate users’ ability to interact with and understand the device in addition to ensuring the device itself meets relevant performance specifications. The draft guidance describes considerations for identifying the appropriate methods for performance testing of a particular device, emphasizing that sponsors should consider both standalone testing and human-device team performance evaluations for overall device validation. The agency reiterates that protocols and statistical analysis plans for validation should be pre-specified to ensure robustness and reliability. Finally, focusing on ensuring adequate device performance across the intended use population, FDA stresses the importance of appropriate subgroup analysis as a tool not only to support generalizability but also to identify – and subsequently inform users of – potential limitations.
7. Device Performance Monitoring
Under the Quality System Regulation (QSR), manufacturers have always been required to monitor device performance after release to the field; however, FDA notes that AI-enabled devices are uniquely susceptible to changing or degrading over time since AI models can be particularly sensitive to changes in data inputs. Accordingly, FDA encourages sponsors of devices with AI-DSF to proactively monitor, identify, and address performance changes post-commercialization, as well as any other device modifications that may impact performance. While it is not typical to assess QSR compliance as part of a 510(k) notice, FDA notes that providing monitoring details may be helpful in certain situations and that such details may well be required for de novo and PMA filings. The draft guidance encourages obtaining FDA feedback should a sponsor elect to incorporate proactive performance monitoring as a means of risk control and/or to support their substantial equivalence argument.
8. Cybersecurity
As with any device that includes software, the agency recommends that sponsors follow the cybersecurity recommendations outlined in Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions (2023), and it offers additional comments on AI-specific risks to consider in developing cybersecurity controls. The emphasis on cybersecurity is not new, and greater scrutiny by FDA on cybersecurity throughout the TPLC is anticipated.
9. Public Submission Summary
Consistent with its effort to increase transparency, FDA provides recommendations on the content of public-facing submission summaries, such as 510(k) summaries. Specifically, Appendix F provides an example of a 510(k) summary with a Model Card.
The draft guidance is expected to promote consistency across marketing applications for devices with AI-DSF, as well as in sponsor interactions with the agency, and reflects FDA’s desire to collaborate with industry in this rapidly evolving area (as discussed at the recent FDA Digital Health Advisory Committee meeting, which we summarized online here). While technology will always outpace regulation, a trend perhaps most distinctly seen in the software space, this guidance – one of several AI-focused guidance documents that FDA has issued in the waning days of the Biden Administration – provides a measure of additional clarity on agency expectations for AI-DSF device submissions. While certain specifics may well be challenged by stakeholders, the content of the guidance closely reflects the recommendations FDA has been issuing – formally, informally, and through review of individual submissions – since the mainstream integration of AI into medical devices, primarily aggregating previously disparate comments rather than introducing novel requirements or expectations.
Hogan Lovells has been assisting clients in navigating the FDA regulatory process for AI-enabled devices throughout its evolution over the last decade. If you have questions or would like us to help you submit comments on the draft guidance, please contact one of the authors of this alert or the Hogan Lovells attorney with whom you normally work.
Authored by Jodi K. Scott, Kelliann Payne, Suzanne Levy Friedman, and Eriko Yoshimaru