
JPM 2025: Regulation of artificial intelligence: Navigating a new frontier in health care

Pathology slide review: Biomedical analyst in a detailed image, identifying disease patterns.

Artificial intelligence (AI) is transforming the health care landscape, from diagnostics to drug development and clinical trials. As AI technologies rapidly evolve, global regulators face the complex task of regulating AI to ensure patient safety while still fostering innovation. In this article, we consider how existing regulations impact the use of AI in the health care space, in relation to (i) AI as a medical device, (ii) ensuring the appropriate use of health data, and (iii) the use of AI in clinical trials. We also compare the attitudes and approaches of three leading health care regulators – the U.S. Food and Drug Administration (FDA), the UK Medicines and Healthcare products Regulatory Agency (MHRA), and the European Medicines Agency (EMA) – to AI, and consider the extent to which they are seeking to adapt and/or supplement existing regimes to deal with its challenges.

The annual J.P. Morgan Healthcare Conference (JPM) provides a unique opportunity to make connections among life sciences and health care emerging companies, pharmaceutical & biotechnology firms, digital health companies, investors, and advisors. The article below is part of our JPM 2025 series that aims to help keep you informed ahead of the conference on the most important global regulatory, transactional, and IP legal issues emerging today.

The overall regulatory approach 

In many cases, existing frameworks do and will continue to regulate specific uses of AI within the lifecycle of a medicinal product or medical device. The MHRA summarized this point well in its April 2024 publication "Impact of AI on the regulation of medical products", where it stated that "many of the changes our customers make will not impact on how we regulate. The questions that we as the regulator need to ask to determine whether a product is safe do not change when the nature of the evidence we consider changes". Existing regulatory regimes that pharmaceutical companies and medical device manufacturers work within will continue to be paramount, even as these companies increasingly apply AI solutions to their development and commercialization efforts.

However, the extent to which regulators are comfortable that existing regulatory regimes provide sufficient coverage for the specific safety risks created by artificial intelligence varies. In broad strokes, the approach of each regulator is as follows: 

  • FDA's approach is pragmatic, working within its existing statutory authority: the agency has been gradually adapting how it regulates AI technologies, particularly in health care, where AI tools are used in diagnostics, treatment planning, as therapeutics, and in other applications. FDA's approach to AI regulation is evolving, with a continued focus on ensuring safety, effectiveness, and transparency while maintaining flexibility to accommodate the rapidly advancing field of AI technology. In October 2024, FDA leadership published "FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine" in JAMA, presenting its view of the agency's regulation of AI historically and into the future; we summarized that view online here.
  • In comparison, the EU is taking a relatively prescriptive approach that seeks to be balanced and ethical, prioritizing innovation, patient safety, and data protection. Key elements include requiring AI tools in general to comply with strict standards under frameworks like the EU AI Act, alongside sector-specific rules such as the Medical Device Regulation (MDR) and Directive 2001/83/EC on the Community code relating to medicinal products for human use. The EU's industry-agnostic approach promotes transparency and reliability; upholds principles like fairness, accountability, and respect for fundamental rights; seeks to ensure patient trust in AI systems; and protects sensitive health data under the General Data Protection Regulation (GDPR), emphasizing secure and lawful data usage for all applications within the regulation's scope. Additional sector-level regulations and guidance give further shape to specific applications. The EU also encourages research, development, and deployment of AI technologies in health care through funding and partnerships, such as the Horizon Europe program. This approach aims to harness AI's potential for improving health care outcomes while minimizing risks to individuals and society. To foster safety, the EU has already adopted targeted legislation on regulatory and civil liability: prohibitions, obligations, and penalties under the EU AI Act, and no-fault product liability for software/AI applications under the new Product Liability Directive (Directive (EU) 2024/2853 on liability for defective products).
  • The MHRA has taken a relatively light-touch and "pro-innovation" approach so far, as set out in its AI regulatory strategy, which we wrote about online here.

Notwithstanding the approach of each regulator so far, the full potential and numerous applications of AI are only just beginning to be explored and understood. Regulators across industries and across the globe are monitoring developments in AI closely and will need to be agile in modifying their approach as AI becomes more embedded within the industry and specific risks begin to crystallize. In the context of health care, three areas of particular focus for regulators at present include:

  1. AI as a medical device

One of the most developed areas of regulation for AI in the health care space is in relation to software as a medical device (SaMD) and artificial intelligence as a medical device (AIaMD). The approach of regulators to AIaMD may provide a blueprint for future regulation in other areas. 

When AI or software is used for a medical purpose (i.e., for the diagnosis or treatment of a disease or condition), it is likely to be regulated as a medical device. The overarching principles of the regulation of medical devices are similar, but not identical, across the US, EU, and UK. Medical devices are classified according to the level of risk they pose to patients, with higher risk medical devices subject to a greater level of regulatory scrutiny throughout their lifecycle. Although the regulators' approach to SaMD and AIaMD is still developing, there are commonalities beginning to emerge: 

  • SaMD and AIaMD are generally treated as medium-to-high risk 

Under the existing U.S. regime (the Federal Food, Drug, and Cosmetic Act (FDCA)), SaMD and AIaMD may be authorized via a number of different regulatory pathways. Premarket approval by FDA is required for high-risk AIaMD, whereas lower-risk but novel AIaMD can be authorized through FDA's streamlined De Novo pathway, and moderate-to-low-risk software may be cleared via a 510(k) notification. Additionally, a number of software functions are entirely excluded from regulation, and a good number of others fall under enforcement discretion, meaning FDA could regulate but has elected not to due to the low risk of the software application. As of the date of this article, FDA has not yet cleared or approved any product that uses adaptive AI, although the agency has cleared or approved hundreds of medical devices that incorporate locked algorithms. FDA's decision-making process about which products to regulate is nuanced: the agency is mindful of concerns around overregulation, especially when it comes to software, and works hard to balance innovation with regulatory oversight.

The EU's Medical Devices Regulation (EU) 2017/745 (EU MDR) is now fully in force, following a phased introduction which ended in May 2024. Under EU MDR, SaMD and AIaMD fall into the medium-to-high risk classifications (Class IIa, Class IIb, or Class III). EU MDR also introduced new post-market surveillance requirements for all medical devices, increasing manufacturers' obligations to monitor and report on the devices they place on the market. In addition to EU MDR, manufacturers of medical devices may become providers of AI systems under the EU AI Act. The EU AI Act allows a single conformity assessment under both EU MDR and the EU AI Act (provided that the assessment body is designated under both pieces of legislation). However, the EU AI Act has its own classification rules: where an AI system is intended to be used as a safety component of a medical device or in vitro diagnostic medical device, or is itself such a product, it is automatically high-risk (in the sense of the regulation) if the device is subject to third-party conformity assessment. EU MDR and EU AI Act classifications can therefore diverge; an AI system used in a medical device may be high-risk under the EU AI Act even where its EU MDR classification is lower. For high-risk AI systems, the special provisions of Section 2 of Chapter III of the EU AI Act apply, including requirements for risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity.
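
To illustrate how the two regimes can diverge, the following is a minimal sketch assuming a deliberately simplified reading of the classification rule described above. The class, attribute, and function names are our own hypothetical constructs; this is not legal advice or a complete statement of the EU AI Act's rules.

```python
from dataclasses import dataclass

@dataclass
class DeviceAISystem:
    """Simplified view of an AI system associated with a medical device."""
    is_safety_component: bool     # AI serves as a safety component of the device
    is_product_itself: bool       # the AI system is itself the regulated product
    third_party_assessment: bool  # device requires notified-body conformity assessment
    eu_mdr_class: str             # EU MDR risk class, e.g. "I", "IIa", "IIb", "III"

def high_risk_under_ai_act(s: DeviceAISystem) -> bool:
    """Illustrative restatement of the rule described above: high-risk where the
    AI is a safety component of the device (or is itself the product) AND the
    device is subject to third-party conformity assessment."""
    return (s.is_safety_component or s.is_product_itself) and s.third_party_assessment

# Hypothetical example: a Class IIa diagnostic SaMD assessed by a notified body
samd = DeviceAISystem(is_safety_component=False, is_product_itself=True,
                      third_party_assessment=True, eu_mdr_class="IIa")
print(high_risk_under_ai_act(samd))  # True -> high-risk under the EU AI Act,
                                     # despite a mid-tier EU MDR classification
```

The example mirrors the point made above: an AI system can attract the EU AI Act's high-risk obligations even where its EU MDR classification sits in the middle of the risk scale.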

Conversely, at present, medical devices in Great Britain (England, Scotland and Wales1) are regulated under the Medical Device Regulations 2002 ("UK MDR"). UK MDR is based on the predecessor regime to EU MDR, i.e., the repealed Directive 93/42/EEC, and was drafted at a time when the use of software for medical purposes was limited and artificial intelligence was just a theoretical possibility. As a result, many SaMD and AIaMD products currently fall into Class I: the lowest-risk category for medical devices in the UK. Class I devices may be placed on the market by the manufacturer, without review by any third party, once the manufacturer has conducted an assessment of conformity with the requirements of UK MDR. The MHRA acknowledges that the pace of development of SaMD and AIaMD has outstripped the scope of existing regulation. The MHRA is in the process of updating UK MDR, and the reforms are expected to closely follow EU MDR. As a result, many manufacturers of SaMD and AIaMD in Great Britain can expect to see their products up-classified. In addition, given that the updated UK MDR will include post-market surveillance requirements similar to EU MDR's, SaMD and AIaMD manufacturers in Great Britain will also find themselves under increased obligations in terms of monitoring and reporting safety issues relating to their devices. Beyond reforms to UK MDR, the MHRA has announced that it will reform the UK's "Yellow Card" reporting scheme for adverse events to include all incident types, including those involving SaMD and AIaMD.

  • Regulators are developing specific strategies for regulating AIaMD

One of the differentiators of AIaMD is its adaptivity and changeability; AI systems, particularly those using machine learning, evolve over time as they process new data. Regulators are actively trying to meet this challenge with a number of strategies: 

  • Regulatory sandboxes: A regulatory sandbox permits industry members to test ideas outside of normal regulatory processes, in collaboration with the regulator. For example, the MHRA recently announced the "AI Airlock," a regulatory sandbox that we summarized online here, which will enable five manufacturers with promising innovative AIaMD products to work with the MHRA to identify challenges in the regulation of AIaMD and to develop new regulatory strategies.
  • Engagement with industry: FDA is working with industry stakeholders through collaborative programs to improve its understanding of AI and machine learning (ML) in medical devices. These include AI/ML working groups such as the Medical Device Innovation Consortium (MDIC), which studies how AI and ML should be regulated, and public-private partnerships with academic institutions, tech companies, and health care providers to gather input on AI regulation and identify potential areas for further guidance.
  • Change control: Acknowledging the adaptability of AIaMD products, the FDA, MHRA, and Health Canada have developed the concept of a predetermined change control plan (PCCP); for example, we summarized FDA's PCCP regulatory paradigm online here. A PCCP establishes the guardrails for future defined changes to software. Provided the SaMD/AIaMD continues to develop within these guardrails, it remains authorized within the scope of its original approval. This removes the need to re-authorize the SaMD/AIaMD after every development, although changes outside the PCCP will still require reassessment.
  • Emphasis on guidance: The FDA and MHRA have each released guidance intended to supplement the existing regulatory frameworks with specific guidelines covering SaMD and AIaMD. The publication of new, often principles-based, guidance will help regulators to respond in an agile manner to future developments in AI, especially when compared to the use of traditional legislative mechanisms.
  • Additional legislation: Regulators recognize that new legislation may be required to address the risks associated with the emergence of AI. The EU has already adopted the EU AI Act. The MHRA's Software and AI as a Medical Device Change Programme - Roadmap identifies a number of problems with existing regulation, along with the MHRA's proposed solutions and next steps; these include the publication of guidance, supplemented by additional legislation where necessary (for instance, the reforms to UK MDR). FDA recognizes that it may need additional statutory authority to regulate AI as it continues to evolve, and there are state-led initiatives to enact legislation that may impact the use of AI in SaMD.
  2. Ensuring the appropriate use of health data

AI models are generally trained on large datasets which, for health care applications, are likely to include personal health data. Such health data will often constitute sensitive personal data for the purposes of the UK and EU GDPR, HIPAA, and an array of other consumer protection laws in the U.S. Health care regulators are working directly with data protection regulators (including the Information Commissioner's Office in the UK and the Office for Civil Rights in the U.S.) to address concerns around patient privacy.

Simultaneously, regulators are conscious that AI models trained on biased datasets can perpetuate health disparities. For example, algorithms trained predominantly on data from one demographic may not generalize to diverse patient populations.

Another collaboration between the FDA, MHRA, and Health Canada resulted in the publication of ten guiding principles for Good Machine Learning Practice (GMLP). Several of these principles focus on data quality assurance and data management, including a requirement that clinical study participants and data sets are representative of the intended patient population. 
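
By way of illustration only, the sketch below shows one simple form such a representativeness check could take, comparing the demographic mix of a training dataset against the intended patient population. The subgroup names, population shares, and the 0.5 ratio threshold are our own hypothetical assumptions, not part of the GMLP principles.

```python
# Illustrative sketch: flag demographic subgroups that are materially
# under-represented in a training dataset relative to the intended patient
# population. All figures, names, and thresholds below are hypothetical.

TARGET_POPULATION = {"age_65_plus": 0.30, "female": 0.51, "subgroup_a": 0.13}

def representativeness_report(dataset_shares, target=TARGET_POPULATION,
                              min_ratio=0.5):
    """Compare each subgroup's share of the dataset against its share of the
    intended population; flag any subgroup whose dataset share falls below
    min_ratio times its population share."""
    report = {}
    for group, pop_share in target.items():
        data_share = dataset_shares.get(group, 0.0)
        if data_share >= min_ratio * pop_share:
            report[group] = "OK"
        else:
            report[group] = (f"UNDER-REPRESENTED: {data_share:.0%} of dataset "
                             f"vs {pop_share:.0%} of population")
    return report

# Hypothetical training-set composition
print(representativeness_report({"age_65_plus": 0.10,
                                 "female": 0.48,
                                 "subgroup_a": 0.02}))
```

In practice, of course, a representativeness assessment would be far richer than a single threshold, but the sketch captures the basic comparison the GMLP principle calls for.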

  3. Use of AI in clinical trials

The integration of AI into clinical trials creates numerous possibilities, including identifying potential trial sites, recruiting subjects, increasing adherence and retention, analyzing data, designing trials, and supporting decision-making. For example:

  • The capacity of AI tools to analyze vast datasets is being used to optimize trial design, identify patient subgroups, and highlight potential new indications for approved medicinal products.
  • AI is being used to create "synthetic" datasets, which take large volumes of real-world data and create an artificial dataset that is representative of real patients. Synthetic data could, in some circumstances, remove the need for a control group in a clinical trial by creating a so-called "digital twin" for the trial participants. Removing the need for a traditional placebo arm would mean fewer patients miss out on potentially life-improving medication when participating in a trial. For rare and orphan diseases, where the costs of conducting a traditional clinical trial are high and there are often difficulties in recruiting sufficient patient numbers, the use of synthetic data could tackle barriers to entry by improving the quantity and quality of trial data while also reducing costs.
  • AI software can also be used to increase participant adherence and retention through smartphone reminders or alerts, e-tracking medication tools, or non-adherence alerts sent to electronic platforms. More advanced software may be capable of identifying facial or voice "digital biomarkers" to remotely track adherence or, at the furthest extreme, of identifying instances in which a patient's condition is improving or progressing coupled with evidence of noncompliance with treatment protocols.

Relative to SaMD/AIaMD, the regulatory approach to the application of AI to clinical trials is less developed. However, the EMA, FDA, and MHRA have each been working to establish frameworks that address the integration of AI and machine learning technologies in clinical research. The current indication is that the approach will be broadly consistent with that taken for SaMD/AIaMD, e.g., introducing specific guidance on the applications of AI to clinical trials and on regulators' expectations as to how synthetic datasets or AI-driven decisions can meet evidentiary standards. For example:

  • The EMA's reflection paper on the use of AI outlines considerations for the development and validation of AI-based technologies used in health care, including clinical trials. The guidance emphasizes the importance of transparency, data quality, risk management, and ethical considerations when deploying AI systems; for each application in a clinical trial, the regulatory impact should be assessed. The GDPR also plays a crucial role in regulating how personal data is handled within clinical trials that utilize AI, and compliance with it ensures that patient privacy and data protection are maintained while using AI technologies.
  • FDA has published a comprehensive discussion paper on the pivotal role of AI and ML in drug development and clinical trials, exploring the various ways in which AI and ML can streamline processes, from optimizing participant selection to revolutionizing data management and analysis.
  • Last year, FDA discussed an example in which the agency provided feedback on a sponsor's proposal to use digital twins in clinical trials of an investigational drug.2 The sponsor had proposed using digital twins to generate "patient prognostic scores" from baseline variables to predict potential placebo outcomes, thereby enabling a reduction of placebo sample sizes in the sponsor's phase 2 and phase 3 clinical trials. FDA evaluated the proposal by applying a risk-based credibility assessment framework consisting of two factors (a simplified illustrative sketch appears after this list):
    • Model influence: The influence of the AI model on the decision being made. This considers whether other evidence is available, aside from the AI model, on which to base the decision, and how heavily the AI model is weighted in that decision.
    • Decision consequence: This considers the severity of the negative consequences if the AI model produces an incorrect result.
  • In the UK, the Health Research Authority (which regulates health and social care research) is leading two projects intended to streamline the review of: (1) AI and data-driven research, by modernizing the technology platform used to make applications for approvals; and (2) research using confidential patient information without consent.
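
As a simplified illustration of the two-factor framework described above, the sketch below (our own construction, assuming hypothetical three-level scales for each factor) combines model influence and decision consequence into a qualitative risk tier; in a risk-based approach, higher tiers would call for more extensive credibility evidence.

```python
# Illustrative sketch of a two-factor, risk-based credibility assessment in
# the spirit of the framework described above. The three-level scales and the
# matrix that combines them are hypothetical simplifications, not FDA policy.

RISK_MATRIX = {
    ("low", "low"): "low",       ("low", "medium"): "low",       ("low", "high"): "medium",
    ("medium", "low"): "low",    ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",   ("high", "medium"): "high",     ("high", "high"): "high",
}

def model_risk(model_influence: str, decision_consequence: str) -> str:
    """Combine the two factors into a qualitative risk tier. Higher model
    influence (little corroborating evidence, heavy weighting in the decision)
    and more severe consequences of an incorrect output both raise the tier."""
    return RISK_MATRIX[(model_influence, decision_consequence)]

# Hypothetical reading of the digital twin example: the model materially
# shapes the size of the placebo arm (medium influence), and an error would
# bias the trial's efficacy estimate (high consequence).
print(model_risk("medium", "high"))  # -> "high": more extensive credibility
                                     # evidence would be expected
```

The design choice here mirrors the logic of the framework: the less a decision can be corroborated without the model, and the worse the fallout from a wrong output, the more validation evidence a sponsor should expect to provide.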

Conclusions

We are only just beginning to understand the potential opportunities and applications of AI and machine learning within health care. These technologies have numerous transformative possibilities, from increasing the speed of access to, and lowering the cost of delivering, treatments for rare and orphan diseases, to improving efficiency within our broader health care systems. In this context, regulators must encourage innovation while always protecting patient safety. As with all things health care, there are differences in the approaches of the FDA, the MHRA, and, in particular, the EU. Further, specific legislation governing AI, such as the EU AI Act, will have a role to play in health care regulation. Close collaboration between industry and regulators, and the continuous development of principles-based guidance rather than reliance on inflexible regulatory frameworks, will be key to keeping pace with developments in AI technology.

Authored by Penny Powell, Jodi Scott, Matthias Schweiger, and Bea Watts.

  1. Following Brexit, pursuant to the Windsor Framework, Northern Ireland continues to be subject to EU regulation. Medical devices put on the market in Northern Ireland are subject to EU MDR.
  2. Qi Liu, Artificial Intelligence/Machine Learning: The New Frontier of Drug Development & Regulation, REDI ANNUAL CONFERENCE (May 30, 2024), https://sbiaevents.com/redi2024.
