Access to first-in-class medicinal products that can save and transform lives is crucial for patients in the UK – and the earlier the access, the more significant the impact. Equally vital, however, is that such products are reliably evaluated, before and after marketing authorisation is granted, for safety and efficacy in a manner that is fair, robust, and transparent. With this critical balance in mind, this article explores the opportunities and challenges presented by the UK's bold, pro-innovation approach to streamlining the regulatory processes governing patient access to medicinal products through the integration of artificial intelligence.
The annual J.P. Morgan Healthcare Conference (JPM) provides a unique opportunity for emerging life sciences and health care companies, pharmaceutical and biotechnology firms, investors, and advisors to connect. The article below is part of our JPM 2025 series, which aims to keep you informed ahead of the conference on the most important global regulatory, transactional, and IP legal issues emerging today.
The new Labour government has taken a pro-innovation stance with respect to artificial intelligence. In its Life Sciences Strategy, published before the election, it outlined the potential role of a new Regulatory Innovation Office (RIO), which would ensure that regulators, such as the Medicines and Healthcare products Regulatory Agency (MHRA), are accountable for driving innovation, and it promised to introduce targets for regulatory approval timelines. Since its election victory, the Labour government has launched the new RIO. One of the RIO's four key initial focuses is the use of AI in healthcare, in order to "revolutionise healthcare delivery so doctors can diagnose illnesses faster and improve patient care". However, the RIO has a broader mandate, with the long-term goal of working directly with regulators to help them build the capacity to speed up approvals, cut red tape, and generally streamline their regulatory processes.
This approach is well aligned with the MHRA's own plans for the innovative use of AI to optimise regulatory processes, as discussed in our previous article, UK MHRA Publishes AI Regulatory Strategy ("MHRA AI Strategy"). The MHRA AI Strategy was released in response to the previous government's AI White Paper. However, given the Labour government's pro-innovation approach to AI, and the accountability that the RIO is likely to apply to the MHRA's regulatory approvals process, it is highly likely that the MHRA AI Strategy will remain relevant. Moreover, although the MHRA's Business Plan for 2024/25 does not expressly refer to the integration of AI into regulatory processes, several of its high-level targets are consistent with a move towards AI optimisation, such as i) delivering innovative pathways to transformative medicines, ii) launching new digital tools to improve the delivery of regulatory services, and iii) improving UK regulatory frameworks in lockstep with evolving science and technology, thereby improving processes and removing unnecessary burdens.
Finally, the MHRA has recently cemented its pro-AI ambitions with respect to its regulatory processes in its recent Data Strategy, in which a key theme is exploring how to "Safely and responsibly harness the potential of artificial intelligence and advanced analytics throughout the product lifecycle", and to "improve our operational performance by harnessing the potential of data to enhance the timeliness, transparency, and predictability of our decision-making".
The MHRA has a proven track record of successfully using AI tools to improve patient outcomes. For example, the MHRA successfully used AI in its vigilance systems to increase the efficiency and effectiveness of adverse event monitoring for the COVID-19 vaccine. The free text of adverse event reports was coded to structured fields for signal detection – in total, more than 100,000 reports were used to train the system, with overarching rules implemented to ensure adequate controls.
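The free-text coding step described above can be illustrated with a deliberately simplified sketch. This is not the MHRA's actual system (which is not public): the keyword dictionary and term names below are hypothetical stand-ins for a controlled vocabulary such as MedDRA and a trained natural language processing model.

```python
# Illustrative only: a toy coder mapping free-text adverse event reports
# to structured reaction terms. The dictionary is hypothetical; a real
# system would use a controlled vocabulary and a trained NLP model.
REACTION_TERMS = {
    "headache": "Headache",
    "sore arm": "Injection site pain",
    "fever": "Pyrexia",
    "tired": "Fatigue",
}

def code_report(free_text):
    """Return the structured reaction terms found in a free-text report."""
    text = free_text.lower()
    return sorted({term for kw, term in REACTION_TERMS.items() if kw in text})

print(code_report("Patient reported a mild fever and a sore arm the next day."))
# → ['Injection site pain', 'Pyrexia']
```

Once reports are coded to structured fields in this way, they can be counted and compared across products, which is what makes downstream statistical signal detection possible.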
Building on this use case, the MHRA's Business Plan for 2024/25 refers to the delivery of its SafetyConnect programme by Q4 of 2025. This programme will assist the MHRA in its market surveillance duties by helping it to detect and act on safety issues. It could extend the use of AI as a tool for enhancing the MHRA's vigilance systems, in furtherance of the MHRA's goals under its recent Data Strategy to explore how natural language processing tools can enhance pharmacovigilance systems and operations, and to evaluate how novel analytical methodologies can detect adverse event signals. This use case could detect patterns of adverse events, including interactions between different drugs in patients, far more quickly than traditional systems that do not integrate AI.
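Adverse event signal detection of this kind commonly rests on disproportionality statistics such as the proportional reporting ratio (PRR), which compares how often an event is reported for one product against all others. The sketch below shows the standard PRR calculation on a 2x2 contingency table of spontaneous reports; the counts are hypothetical, and the threshold mentioned is a commonly cited screening heuristic, not an MHRA-confirmed parameter.

```python
def proportional_reporting_ratio(a, b, c, d):
    """PRR from a 2x2 contingency table of spontaneous reports.

    a: reports of the event for the drug of interest
    b: reports of other events for the drug of interest
    c: reports of the event for all other drugs
    d: reports of other events for all other drugs
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: 20 event reports out of 500 for the drug of
# interest, versus 100 out of 10,000 for the comparator set.
prr = proportional_reporting_ratio(20, 480, 100, 9900)
print(round(prr, 2))  # → 4.0; a commonly used screening threshold is PRR >= 2
```

A PRR well above the screening threshold does not establish causation; it simply flags a drug-event pair for human assessors to investigate, which is consistent with AI assisting rather than replacing the MHRA's vigilance work.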
Another promising use case relates to the processing of applications for marketing authorisations for new medicinal products. Currently, when the MHRA initially assesses an application for marketing authorisation, its human assessors evaluate the consistency, completeness and quality of the data provided under the application. However, under the MHRA AI Strategy, the MHRA is exploring how it can accelerate this process via the introduction of supervised machine learning – this form of AI would learn from labelled data and make classifications and predictions based on learned patterns and rules. The idea is that AI would score, or provide recommendations with respect to, each of the MHRA's criteria, thereby reducing the need for human input at this early stage of the application process. Further, the new MHRA Data Strategy suggests that AI could also assist in the later stages of the application process, by analysing real-world data (such as electronic health records and registry data) to generate real-world evidence, in order to reduce the ambiguity that the MHRA faces in evaluating the risk/benefit profile of new medicinal products. Ultimately, this would increase the speed of delivery of safe new medicinal products to patients, and unlock further capacity within the MHRA to focus on its other strategic priorities, such as engaging with patients, healthcare professionals and industry members.
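The supervised machine learning idea described above (learning from labelled past applications in order to classify new ones) can be sketched minimally with a nearest-centroid classifier. The feature names, data and labels below are hypothetical; a real system would use far richer features and a properly validated model, with human assessors reviewing anything the tool flags.

```python
# Illustrative sketch of supervised learning on labelled past
# applications. Features and data are hypothetical.

def centroid(rows):
    """Mean feature vector of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def classify(x, complete_centroid, deficient_centroid):
    """Nearest-centroid classification by squared Euclidean distance."""
    d_pos = sum((a - b) ** 2 for a, b in zip(x, complete_centroid))
    d_neg = sum((a - b) ** 2 for a, b in zip(x, deficient_centroid))
    return "likely complete" if d_pos < d_neg else "needs human review"

# Hypothetical features: [data completeness, internal consistency, quality]
complete_apps = [[0.9, 0.8, 0.9], [0.85, 0.9, 0.8]]
deficient_apps = [[0.4, 0.5, 0.3], [0.5, 0.4, 0.45]]

pos, neg = centroid(complete_apps), centroid(deficient_apps)
print(classify([0.88, 0.85, 0.9], pos, neg))  # → likely complete
```

The point of the sketch is the workflow, not the model: labelled historical outcomes train the tool, and its output is a recommendation that routes applications either forward or back to a human assessor.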
Third, both the MHRA AI Strategy and the MHRA Data Strategy propose that AI could assist in the screening and prevention of the fraudulent sale of medicinal products online. The MHRA has provided early evidence that this approach has real-world value, for example, by working in conjunction with eBay's own AI algorithm to block over 500,000 unregulated medicinal products from being marketed to the public. Given that the MHRA seized more than 15.5 million doses of illegally traded medicines in 2023 alone, this use case could offer a substantial benefit to both public safety and public trust in marketed medicinal products in an increasingly digital ecosystem.
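A rule-based version of such screening can be sketched as follows. Real systems, such as the eBay collaboration described above, rely on trained models operating at platform scale; the watch list here is purely illustrative of the underlying idea of flagging listings that mention prescription-only medicines.

```python
# Illustrative only: flag online listings that mention watched medicine
# names. The watch list is a small illustrative sample; a production
# system would use trained models, not keyword matching.
WATCH_LIST = {"diazepam", "zopiclone", "tramadol"}

def flag_listing(title):
    """Return True if the listing title mentions a watched medicine name."""
    words = {w.strip(".,!?").lower() for w in title.split()}
    return not WATCH_LIST.isdisjoint(words)

print(flag_listing("Genuine Diazepam 10mg, fast shipping"))  # → True
print(flag_listing("Herbal sleep tea, 20 bags"))             # → False
```

Flagged listings would then be queued for takedown or human review rather than removed automatically, mirroring the human-in-the-loop approach discussed throughout this article.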
The MHRA's main priority will always be to promote the safety, quality and efficacy of the medicinal products, medical devices, and blood components that it regulates. The use cases above demonstrate the ways in which AI could bolster the MHRA's existing work. However, the MHRA is expected to proceed with caution when embracing new technologies, especially where there is the potential for consequences to patient safety. The MHRA and the RIO must ensure that any AI-integrated regulatory tools are carefully designed with the inherent challenges that AI poses in mind.
To the extent that complex algorithms are either making, or even merely shaping, time-critical decisions, such as those on marketing authorisation applications, the illegal online marketing of drugs, or surveillance of potential adverse events, it is crucial that the decision-making process is sufficiently transparent. In particular, if the reasons underpinning a decision are lost to the 'black box' nature of the AI tools integrated within the regulatory processes, decisions could ultimately be subject to judicial review on the ground of procedural impropriety, since the MHRA, as a public body, has a duty to provide reasons for its decisions. Such decisions could also be reviewed on the ground of irrationality, given that the MHRA would be unable to explain which relevant considerations it has, and which irrelevant considerations it has not, taken into account when deciding, for example, not to approve a medicinal product.
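One way to mitigate the 'black box' problem is to favour tools whose outputs carry their reasons with them. The sketch below shows a transparent, linear scoring approach in which each criterion's weighted contribution is reported alongside the overall score, so that a decision can always be explained criterion by criterion; the criteria and weights are hypothetical, not the MHRA's.

```python
# Hypothetical criteria and weights: a transparent linear score whose
# per-criterion contributions can be cited as the reasons for a decision.
WEIGHTS = {"data_completeness": 0.5, "consistency": 0.3, "quality": 0.2}

def explainable_score(criteria):
    """Return (overall score, per-criterion contributions)."""
    contributions = {k: WEIGHTS[k] * criteria[k] for k in WEIGHTS}
    return round(sum(contributions.values()), 3), contributions

score, reasons = explainable_score(
    {"data_completeness": 0.9, "consistency": 0.8, "quality": 0.7}
)
print(score)  # → 0.83
for criterion, contribution in reasons.items():
    print(f"{criterion}: {contribution:.2f}")
```

A regulator using a tool of this shape could state, for any outcome, exactly which considerations drove it and by how much, which speaks directly to the procedural-impropriety and irrationality risks outlined above.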
It is imperative to ensure that the underlying data used to train and validate any AI tools is accurate and representative of different patient groups. This should be at the forefront of any AI integration policy, as the MHRA AI Strategy acknowledges that AI potentially holds an inherent bias against certain groups, particularly women, disadvantaged socio-economic groups, and ethnic minorities. As a public authority, the MHRA is subject to the public sector equality duty: it must address this potential bias when training and validating its AI tools, and be able to evidence, in the real-world application of those tools, that it has properly accounted for its equality duties. Failing to do so could seriously impede its safety and efficacy evaluations of medicinal products, resulting in disproportionate harm to certain patient groups.
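A basic safeguard here is to measure a tool's performance separately for each patient subgroup before deployment, since an aggregate accuracy figure can conceal a tool that fails disproportionately for one group. The sketch below compares error rates across two hypothetical subgroups; all names and data are illustrative.

```python
# Illustrative bias check: compare a tool's error rate across patient
# subgroups. All data below are hypothetical.
def error_rate(predictions, actuals):
    """Fraction of predictions that disagree with the actual outcomes."""
    errors = sum(p != a for p, a in zip(predictions, actuals))
    return errors / len(actuals)

def subgroup_gap(results_by_group):
    """Per-group error rates, and the largest gap between any two groups."""
    rates = {g: error_rate(p, a) for g, (p, a) in results_by_group.items()}
    return rates, max(rates.values()) - min(rates.values())

results = {
    "group_a": ([1, 0, 1, 1], [1, 0, 1, 1]),  # no errors
    "group_b": ([1, 1, 0, 0], [1, 0, 0, 1]),  # two errors
}
rates, gap = subgroup_gap(results)
print(rates)  # → {'group_a': 0.0, 'group_b': 0.5}
print(gap)    # → 0.5; a large gap warrants investigation before deployment
```

Evidence of checks like this, carried out before and during real-world use, is exactly the kind of material a regulator could point to when demonstrating compliance with its equality duties.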
Ultimately, however, a pivotal and tangible challenge that could undermine the success of integrating AI tools into regulatory processes lies in whether the MHRA and the RIO receive adequate resources and funding. This investment will be vital to surmounting the significant challenges and unlocking the extraordinary potential of AI to revolutionise safe and timely access to medicinal products across the UK.
Authored by Penny Powell, James Furneaux, and Bea Watts.