2024-2025 Global AI Trends Guide
In the EU, the AI Act has been approved, carrying significant implications for medical device, technology, and pharmaceutical firms. In the US, there is no comprehensive federal AI legislation, but at least 45 states have introduced bills that would regulate AI, and 31 have enacted such legislation. Meanwhile in the UK, the government has adopted an informal, cross-sector, principles- and outcomes-based framework for regulating AI. Taken together, these divergent approaches require international firms to prepare for a patchwork of AI regulatory guidelines and enforcement. Speaking at the J.P. Morgan Healthcare Conference this year, Hogan Lovells regulatory partners Jodi Scott, Penny Powell, and Dr. Matthias Schweiger compared emerging international AI regulatory concepts for which companies must be prepared, including clinical trials regulations, corporate governance and compliance policies, the EU’s AI Act, the UK’s AI Regulatory Strategy, new FDA AI guidance documents, and other trends summarized below.
Kicking off the panelists’ Monday afternoon conversation with a summary of AI regulations in the EU, Dr. Matthias Schweiger, partner in the Hogan Lovells Litigation, Arbitration, and Employment practice, provided background on how the European Union was among the first regulatory bodies to formalize governmental rules applicable to artificial intelligence. From the beginning, companies operating in the EU should assess whether the AI Act may apply to them, he explained. Dr. Schweiger speculated that because the regulations have become particularly granular, it may be difficult for regulatory bodies to roll back portions of those rules. This could put the EU at a disadvantage in a competitive environment, particularly if strong economies adopt less stringent regulations. On the other hand, companies planning to invest in AI in the EU benefit from a degree of legal certainty, Dr. Schweiger said.
The UK has taken a markedly different approach, continued Penny Powell, partner in the Hogan Lovells Strategic Operations, Agreements and Regulation practice, describing how the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) is pursuing a pro-innovation, principles-based approach as opposed to the EU’s risk-based model. Still, 2025 is expected to be a significant year of change for the UK’s medical device and AI regulations, as new legislation should lead to increased harmonization of the rules between the UK and the EU, Powell predicted. Indeed, the MHRA published a strategic approach to AI and the steps it is taking to implement the UK Government's AI White Paper, as Powell summarized online here.
Summarizing AI regulations in the US, Jodi Scott, partner in the Hogan Lovells Medical Device & Technology Regulatory practice, suggested that the Food and Drug Administration (FDA) may need greater authority from Congress to regulate AI, even though the agency “has what authority it needs” to authorize AI-enabled devices, hundreds of which have already received FDA approval.
Scott predicted that the agency will approve even more AI-enabled devices in the near future. Indeed, Scott pointed out, FDA has been “very busy” trying to help companies understand what is expected of them under US law, including releasing a draft guidance on AI-enabled medical devices just last week, which she summarized online here. Scott also provided an analysis of a recent JAMA article that described FDA officials’ concerns with the use of AI in medical product development, clinical research, and clinical care (more on that online here). Comparing the US approach to individual AI product approvals with the EU’s, Dr. Schweiger said that although the EU may have been a frontrunner in enacting AI regulations, it may fall behind on getting specific AI-enabled medicinal products to market if the capacity for the necessary third-party conformity assessments does not become available in time.
The panelists agreed that one area where AI is being utilized in a novel way across the globe is the use of “digital twins” in clinical trials. Scott cited how FDA previously outlined the appropriate use of digital twins in the development of drug and biological products, describing digital twins as in silico representations or replicas of an individual that can dynamically reflect molecular and physiological status over time. FDA has explained that digital twins can aid clinical trial analysis by providing “a comprehensive, longitudinal, and computationally generated clinical record that describes what may have happened to that specific participant if they had received a placebo.” Similarly, the UK’s MHRA has said it wants to increase the use of digital twins, and the EU has also published a paper on the use of AI in clinical trials.
Further demonstrating the evolving nature of the AI regulatory paradigms in the UK, Powell outlined how the MHRA has announced the five technologies selected to participate in the “AI Airlock Pilot” for AI-powered medical devices, as summarized online here. Powell described how the pilot is a regulatory “sandbox” for manufacturers to work together with the MHRA, National Health Service, and other approved bodies to better understand the risks associated with AI devices and the regulatory challenges faced by manufacturers when bringing these products to market under the current medical device framework.
Shifting the conversation to the contracting space, Powell observed a recent trend toward greater diversity in the warranties provided in AI-related deals. However, many firms still rely on traditional compliance warranties that she said “may not be sufficient to future-proof” those arrangements from a risk mitigation standpoint.
Dr. Schweiger agreed that these contract concerns apply in the EU, highlighting how the AI Act requires a significant amount of technical documentation, as well as access to data, to satisfy the concerns of regulatory bodies. Similarly, Powell emphasized the importance of good governance and compliance with AI rules in the UK. “As the world shrinks, companies will want to consider how to set themselves up to satisfy the rules of all government regulators,” Scott advised.
The panelists also considered whether the high level of granularity within the emerging AI regulations, while providing clear guidance on what is required to comply with the evolving laws, may come at the cost of restricting innovation. Powell noted that the UK’s new Chief Technology Officer is tasked with finding ways to enhance digital transformation and to support the AI industry as a whole.
“The next few years are going to be exciting,” the panelists agreed.
The annual J.P. Morgan Healthcare Conference (JPM) provides a unique opportunity to make connections among life sciences and health care emerging companies, pharmaceutical & biotechnology firms, digital health companies, med tech sponsors, investors, and advisors. The article above is part of our JPM 2025 “Fireside Chat” series of presentations, through which our team of attorneys spoke with stakeholders at the conference about the most critical global health care issues emerging today.
Authored by Jodi Scott, Penny Powell, and Matthias Schweiger.