2024-2025 Global AI Trends Guide
Artificial intelligence (AI) is transforming the health care landscape, from diagnostics to drug development and clinical trials. As AI technologies rapidly evolve, global regulators face the complex task of regulating AI to ensure patient safety while still fostering innovation. In this article, we consider how existing regulations impact the use of AI in the health care space, in relation to (i) AI as a medical device, (ii) ensuring the appropriate use of health data, and (iii) the use of AI in clinical trials. We also compare the attitudes and approaches of three leading health care regulators – the U.S. Food and Drug Administration (FDA), the UK Medicines and Healthcare products Regulatory Agency (MHRA), and the European Medicines Agency (EMA) – to AI, and consider the extent to which they are seeking to adapt and/or supplement existing regimes to deal with its challenges.
The annual J.P. Morgan Healthcare Conference (JPM) provides a unique opportunity to make connections among life sciences and health care emerging companies, pharmaceutical & biotechnology firms, digital health companies, investors, and advisors. The article below is part of our JPM 2025 series that aims to help keep you informed ahead of the conference on the most important global regulatory, transactional, and IP legal issues emerging today.
In many cases, existing frameworks do and will continue to regulate specific uses of AI within the lifecycle of a medicinal product or medical device. The MHRA summarized this point well in its April 2024 publication "Impact of AI on the regulation of medical products", where it stated that "many of the changes our customers make will not impact on how we regulate. The questions that we as the regulator need to ask to determine whether a product is safe do not change when the nature of the evidence we consider changes". Existing regulatory regimes that pharmaceutical companies and medical device manufacturers work within will continue to be paramount, even as these companies increasingly apply AI solutions to their development and commercialization efforts.
However, the extent to which regulators are comfortable that existing regulatory regimes provide sufficient coverage for the specific safety risks created by artificial intelligence varies. In broad strokes, the approach of each regulator is as follows:
Notwithstanding the approach of each regulator so far, the full potential and numerous applications of AI are only just beginning to be explored and understood. Regulators across industries and across the globe are monitoring developments in AI closely and will need to be agile in modifying their approach as AI becomes more embedded within the industry and specific risks begin to crystallize. In the context of health care, three areas of particular focus for regulators at present are:
One of the most developed areas of regulation for AI in the health care space is in relation to software as a medical device (SaMD) and artificial intelligence as a medical device (AIaMD). The approach of regulators to AIaMD may provide a blueprint for future regulation in other areas.
When AI or software is used for a medical purpose (e.g., for the diagnosis or treatment of a disease or condition), it is likely to be regulated as a medical device. The overarching principles of the regulation of medical devices are similar, but not identical, across the US, EU, and UK. Medical devices are classified according to the level of risk they pose to patients, with higher-risk medical devices subject to a greater level of regulatory scrutiny throughout their lifecycle. Although the regulators' approach to SaMD and AIaMD is still developing, commonalities are beginning to emerge:
Under the existing U.S. regime (the Federal Food, Drug, and Cosmetic Act (FDCA)), SaMD and AIaMD may be approved via a number of different regulatory pathways. Pre-market approval by FDA is required for high-risk AIaMD, whereas lower-risk but novel AIaMD can be authorized through FDA's streamlined de novo pathway, and moderate- to low-risk software may be cleared via a 510(k) notification. Additionally, a number of software functions are entirely excluded from regulation, and a good number are subject to enforcement discretion, meaning FDA could regulate but has elected not to, due to the low risk of the software application. As of the date of this article, FDA has not yet cleared or approved any product that uses adaptive AI, although the agency has cleared or approved hundreds of medical devices that incorporate locked algorithms. FDA's decision-making process about which products to regulate is nuanced. FDA is mindful of concerns around overregulation, especially when it comes to software, and works hard to balance innovation with regulatory oversight.
The EU's Medical Devices Regulation (EU) 2017/745 (EU MDR) is now fully in force, following a phased introduction which ended in May 2024. Under EU MDR, SaMD and AIaMD fall into the medium-to-high risk classifications (Class IIa, Class IIb or Class III). EU MDR also introduced new post-market surveillance requirements for all medical devices, which increase the obligations on manufacturers to monitor and/or report on the devices which they put onto the market. In addition to EU MDR, manufacturers of medical devices may become providers of AI systems under the EU's AI Act. The EU AI Act allows a single conformity assessment under EU MDR and the EU AI Act (provided that the assessment body is designated under both pieces of legislation). However, it is important to note that the EU AI Act has its own classification rules: where an AI system is intended to be used as a safety component of a medical device or in vitro diagnostic medical device, or is itself such a product, it is automatically classified as high-risk (in the sense of the EU AI Act) if the product is subject to third-party conformity assessment. Classification under EU MDR and the EU AI Act can therefore differ, i.e., an AI system used in a medical device will generally be considered high-risk under the EU AI Act, while the classification under EU MDR will not always be equivalent. For high-risk AI systems, the special provisions of Chapter III, Section 2 of the EU AI Act apply, including requirements for risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness and cybersecurity.
Conversely, at present, medical devices in Great Britain (England, Scotland and Wales) are regulated under the Medical Devices Regulations 2002 ("UK MDR"). UK MDR is based on the predecessor regime to EU MDR, i.e. the repealed Directive 93/42/EEC, and was drafted at a time when the use of software for medical purposes was limited and artificial intelligence was just a theoretical possibility. As a result, many SaMD and AIaMD products currently fall into Class I: the lowest-risk category for medical devices in the UK. Class I devices may be placed on the market by the manufacturer without review by any third party, once the manufacturer has conducted an assessment of conformity with the requirements of UK MDR. The MHRA acknowledges that the pace of development of SaMD and AIaMD has outstripped the scope of existing regulation. The MHRA is in the process of updating UK MDR, and the reforms are expected to closely follow EU MDR. As a result, many manufacturers of SaMD and AIaMD in Great Britain can expect to see their products up-classified. In addition, given that the updated UK MDR will include similar post-market surveillance requirements to EU MDR, SaMD and AIaMD manufacturers in Great Britain will also find themselves under increased obligations in terms of the monitoring and reporting of safety issues relating to their devices. Beyond reforms to UK MDR, the MHRA has announced that it will reform the UK's "Yellow Card" reporting scheme for adverse events to include all incident types, including those involving SaMD and AIaMD.
One of the differentiators of AIaMD is its adaptivity and changeability; AI systems, particularly those using machine learning, evolve over time as they process new data. Regulators are actively trying to meet this challenge with a number of strategies:
AI models are generally trained on large datasets which, for health care applications, are likely to include personal health data. Such health data will often constitute special category (sensitive) personal data for the purposes of the UK and EU GDPR, protected health information under HIPAA, and regulated data under an array of other consumer protection laws in the U.S. Health care regulators are working directly with data protection regulators (including the Information Commissioner's Office in the UK and the Office for Civil Rights in the U.S.) to address concerns around patient privacy.
Simultaneously, regulators are conscious that AI models trained on biased datasets can perpetuate health disparities. For example, algorithms trained predominantly on data obtained from one demographic may not perform reliably across diverse patient populations.
Another collaboration between the FDA, MHRA, and Health Canada resulted in the publication of ten guiding principles for Good Machine Learning Practice (GMLP). Several of these principles focus on data quality assurance and data management, including a requirement that clinical study participants and data sets are representative of the intended patient population.
The integration of AI into clinical trials creates numerous possibilities, including identifying potential trial sites, recruiting subjects, increasing adherence and retention, analyzing data, and improving trial design and decision-making. For example:
Relative to SaMD/AIaMD, the regulatory approach to the application of AI to clinical trials is less developed. However, the EMA, FDA and MHRA have each been working to establish frameworks that address the integration of AI and machine learning technologies in clinical research. The current indication is that their approach will be broadly consistent with that taken for SaMD/AIaMD, e.g., introducing specific guidance on the applications of AI to clinical trials and setting out their expectations as to how synthetic datasets or AI-driven decisions can meet evidentiary standards. For example:
We are only just beginning to understand the potential opportunities and applications of AI and machine learning within health care. These new technologies have numerous transformative possibilities, from increasing the speed of access to, and lowering the cost of delivering, treatments for rare and orphan diseases, to improving efficiency within our broader health care systems. In this context, regulators must encourage innovation whilst always protecting patient safety. As with all things health care, there are differences in the approach of the FDA, the MHRA and, particularly, the EU. Further, specific legislation governing AI, such as the EU AI Act, will have a role to play in health care regulation. Close collaboration between industry and regulators, and the continuous development of principles-based guidance rather than reliance on inflexible regulatory frameworks, will be key to keeping pace with developments in AI technology.
Authored by Penny Powell, Jodi Scott, Matthias Schweiger, and Bea Watts.