The European Medicines Agency (EMA) has finally joined the discussion on artificial intelligence (AI) and machine learning (ML), releasing a draft reflection paper on the use of these technologies throughout the medicinal product life cycle. Based on the premise that the deployment of AI/ML tools poses new risks in the development and use of medicines, the paper highlights the need for a “human-centric” and risk-based approach. Stakeholders have also been invited to provide input in advance of EMA’s issuance of final guidance.
During the summer, the European Medicines Agency finally joined the discussion on AI by publishing a draft reflection paper setting out considerations on the use of AI/ML throughout the medicinal product life cycle. The reflection paper seeks to outline the scientific principles to be considered at each stage of the medicinal product life cycle when AI and ML tools are used. The EMA highlights the need to follow a “human-centric approach” in the development and deployment of AI/ML in this context. It also clarifies that companies should ensure that AI/ML tools comply with existing applicable legal requirements, respect ethical principles and safeguard fundamental rights.
Stakeholders are invited to provide input on the draft reflection paper and identify risks and opportunities concerning the use of AI/ML in the pharmaceutical sector by 31 December 2023. The EMA intends to analyse the feedback received in order to finalize the reflection paper and develop additional guidance for stakeholders. In the meantime, the application of AI/ML in medicinal product development and use will be the subject of a workshop organized by the Heads of Medicines Agencies (HMA) and the EMA on 20 and 21 November 2023.
The EMA’s reflection paper is based on the premise that the deployment of AI/ML tools poses new risks in the development and use of medicines. Against this backdrop, the reflection paper recommends following a risk-based approach in the development, deployment and performance monitoring of AI/ML systems used in the life cycle of medicines. The EMA encourages Marketing Authorization (MA) applicants to conduct a regulatory impact assessment and risk analysis of all AI/ML applications and to engage in early interactions with regulators where existing guidance does not appear to be applicable. This would allow developers of medicines to identify the risks that need to be managed throughout the life cycle of AI/ML tools. The level of risk an AI/ML tool poses may vary depending on its context of use, its technical features, the phase of the medicinal product life cycle in which it is deployed and the degree of impact it has on the procedure to which it is applied.
In practice, this means that Marketing Authorization Holders (MAHs) and MA applicants will bear the burden of ensuring that the risks associated with the deployment of AI/ML are assessed and managed on an ongoing and systematic basis throughout the life cycle of the AI/ML tool. Specifically, MAHs and MA applicants will be responsible for ensuring that the AI/ML tools, the datasets used to train them and the data processing activities involved meet the ethical, technical, scientific and regulatory standards described in good practice (GxP) standards and current EMA scientific guidelines. Where an MAH or MA applicant foresees that the use of AI/ML could impact the benefit-risk ratio of a medicinal product, early interaction with regulatory authorities is recommended.
According to the reflection paper, the use of AI/ML in the drug discovery phase does not present a high risk from a regulatory perspective. However, principles for non-clinical development should be applied to AI/ML tools where the results of studies conducted during this stage will be used for regulatory review purposes.
At this stage of the medicinal product life cycle, the scope of application of Standard Operating Procedures (SOPs) must expand to cover AI/ML tools used in pre-clinical studies. Pre-clinical data generated using AI/ML applications that forms part of the evidence used to assess the benefit-risk ratio of a medicinal product should be analysed in accordance with a pre-determined analysis plan. Principles of Good Laboratory Practice (GLP) outlined in Organization for Economic Co-operation and Development (OECD) guidance, including those specific to computerised systems and data integrity, should be considered by MA applicants, where applicable.
AI/ML models used in clinical trials would be subject to the requirements set out in the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) E6 guideline for Good Clinical Practice (GCP). Where AI/ML models are developed for use in clinical trials, a broad range of information concerning those models would be considered to form part of the clinical trial protocol and, as such, would be subject to comprehensive review and assessment by the relevant regulatory authority at the MA or clinical trial application stage.
Where AI/ML-enabled medical devices or in vitro diagnostic medical devices are used in the course of clinical trials, additional requirements may apply to ensure the protection of the rights, safety and wellbeing of trial subjects, the integrity of data and the generalisability of trial results.
The use of AI/ML models in individualized treatment, such as patient selection or dosing, would be considered high-risk from both a regulatory and a patient safety perspective. Special attention should be given where treatment dosing is determined on an individual basis using AI/ML tools; in this case, the EMA proposes that MAHs provide guidance to prescribers and outline alternative treatment strategies in case of technical failure of the AI/ML tools.
Where AI/ML tools are deployed to draft, compile, translate or review medicinal product information documents, the EMA recommends that MA applicants implement quality review mechanisms prior to submission for regulatory review to ensure factual and syntactical accuracy.
Given that the use of AI/ML in various aspects of medicinal product manufacturing is expected to increase, model development, performance assessment and life-cycle management of AI/ML tools should comply with quality risk management principles (namely ICH Q8, Q9 and Q10).
Where AI/ML is used for post-authorization activities (e.g., post-authorization efficacy and safety studies) or pharmacovigilance activities (e.g., adverse event report management), current good pharmacovigilance practices should be observed. The MAH would be responsible for validating, monitoring and documenting the performance of AI/ML models and for including AI/ML operations in the pharmacovigilance system. Where the MA is conditional on post-authorization studies and the use of AI/ML tools is foreseen in such studies, the use of AI/ML should be discussed and agreed upon with regulatory authorities during the regulatory review phase of the MA application.
MA applicants must document the sources of the data used to train AI/ML algorithms, as well as the processes used to collect those data, in accordance with GxP requirements, allowing for traceability of data. One of the main reasons underpinning this obligation is to ensure that training datasets are free of human bias.
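For illustration only, the Python sketch below shows one way a developer might log dataset provenance to support such traceability. The field names, log format and helper function are our own assumptions rather than an EMA-prescribed schema.

```python
# Minimal sketch of dataset provenance logging to support traceability.
# The record structure is illustrative, not an EMA-mandated format.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DatasetRecord:
    name: str                 # human-readable dataset identifier
    source: str               # where the data came from (registry, trial, vendor)
    collection_process: str   # how the data was collected
    sha256: str               # content hash so later audits can verify integrity
    registered_at: str        # UTC timestamp of registration

def register_dataset(name: str, source: str, collection_process: str,
                     raw_bytes: bytes,
                     log_path: str = "provenance_log.jsonl") -> DatasetRecord:
    """Append an audit-trail entry for a training dataset."""
    record = DatasetRecord(
        name=name,
        source=source,
        collection_process=collection_process,
        sha256=hashlib.sha256(raw_bytes).hexdigest(),
        registered_at=datetime.now(timezone.utc).isoformat(),
    )
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")
    return record
```

An append-only log of this kind allows a reviewer to trace every training dataset back to its source and verify, via the hash, that the data have not been altered since registration.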
The EMA clarifies that “validation” does not carry the same meaning in the context of AI/ML as it does in medicinal product development: in ML, validation “refers to the data used to inform the selection of model architecture and hyperparameter tuning”. The EMA recommends conducting “an early train-test split” of data.
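In practice, an “early train-test split” might be implemented along the following lines. This Python sketch uses the scikit-learn library, and the split proportions and stratification are our own illustrative choices; the reflection paper does not prescribe any particular tooling.

```python
# Illustrative "early train-test split": the test set is carved out first
# and left untouched; a validation split (in the ML sense) is then taken
# from the remaining data for model selection and hyperparameter tuning.
from sklearn.model_selection import train_test_split

def early_split(X, y, test_size=0.2, val_size=0.2, seed=42):
    # Hold out the test set immediately, before any modelling decisions.
    X_rest, X_test, y_rest, y_test = train_test_split(
        X, y, test_size=test_size, random_state=seed, stratify=y)
    # "Validation" data in the ML sense: used only to choose the model
    # architecture and tune hyperparameters, never for final reporting.
    X_train, X_val, y_train, y_val = train_test_split(
        X_rest, y_rest, test_size=val_size, random_state=seed, stratify=y_rest)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```

Carving out the test set before any modelling decisions are made is what keeps the final performance estimate independent of the tuning process.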
MA applicants and MAHs must ensure that AI/ML models are generalizable and robust, especially in situations where models cannot be updated during deployment. They must also ensure that documentation regarding model development is traceable and allows for secondary assessment of development practices.
The EMA underscores the importance of choosing appropriate metrics for the performance assessment of AI/ML models. Pre-determining thresholds for those metrics, taking into account the context of use of the model, increases the credibility of the reported model performance.
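By way of example, the Python sketch below shows how pre-determined thresholds might be applied when evaluating a classifier. The metrics selected and the threshold values are hypothetical; in practice they would be justified by the context of use and agreed before evaluation.

```python
# Sketch of pre-specified acceptance thresholds for performance metrics.
# Metric choices and threshold values are hypothetical examples.
from sklearn.metrics import roc_auc_score, recall_score, precision_score

ACCEPTANCE_THRESHOLDS = {   # fixed before evaluation, not after
    "roc_auc": 0.85,
    "sensitivity": 0.90,    # recall on the positive class
    "precision": 0.80,
}

def evaluate_against_plan(y_true, y_pred_labels, y_pred_scores):
    """Compute metrics and compare each against its pre-set threshold."""
    results = {
        "roc_auc": roc_auc_score(y_true, y_pred_scores),
        "sensitivity": recall_score(y_true, y_pred_labels),
        "precision": precision_score(y_true, y_pred_labels),
    }
    # A metric "passes" only if it meets the pre-determined threshold.
    verdict = {m: results[m] >= t for m, t in ACCEPTANCE_THRESHOLDS.items()}
    return results, verdict
```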
Although transparent AI/ML models are the preferred option in medicinal product development, the EMA would allow the use of so-called “black box” models, which are inherently less transparent and interpretable, if evidence shows that transparent models underperform or are less robust. Where black box models are used, details on model architecture, training metrics, validation and test results, and a monitoring and risk management plan addressing the lack of transparency should be provided. The use of black box models should be discussed in the context of the EMA’s qualification or scientific advice procedures.
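To give a concrete sense of what such a documentation package might contain, the short Python sketch below assembles a minimal “model dossier”; every key and value is a hypothetical example, not a format prescribed by the EMA.

```python
# Minimal sketch of the documentation items the reflection paper describes
# for black-box models. All keys and values are illustrative placeholders.
black_box_model_dossier = {
    "model_architecture": "gradient-boosted trees, 500 estimators, depth 6",
    "training_metrics": {"log_loss": 0.31, "boosting_rounds": 500},
    "validation_results": {"roc_auc": 0.88},   # ML-sense validation split
    "test_results": {"roc_auc": 0.86, "sensitivity": 0.91},
    "monitoring_plan": "monthly performance review against pre-set thresholds",
    "risk_management_plan": "fallback to clinician judgment on model failure",
}
```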
As with AI/ML development, the deployment of AI/ML tools should follow a risk-based approach. AI/ML models that are considered high-risk would require re-evaluation of their performance where the software or hardware supporting the model undergoes significant changes. Monitoring mechanisms that facilitate early detection of model degradation must be established, and thresholds for acceptable model performance must be determined. A risk management plan outlining potential risks of model failure and related monitoring and mitigation strategies must be drawn up, particularly where AI/ML models are used without human oversight.
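A monitoring mechanism of the kind described could be as simple as tracking performance over a rolling window and raising an alert when it falls below a pre-set threshold, as in the Python sketch below. The window size, threshold and use of accuracy as the metric are our own illustrative assumptions.

```python
# Sketch of a simple degradation monitor: performance is tracked over a
# rolling window and an alert fires when it drops below a pre-set
# threshold. Window size and threshold values are hypothetical.
from collections import deque

class DegradationMonitor:
    def __init__(self, threshold: float = 0.85, window: int = 200):
        self.threshold = threshold            # minimum acceptable accuracy
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, ground_truth) -> bool:
        """Log one labelled outcome; return True if an alert should fire."""
        self.outcomes.append(1 if prediction == ground_truth else 0)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                      # not enough data yet
        rolling_accuracy = sum(self.outcomes) / len(self.outcomes)
        return rolling_accuracy < self.threshold
```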
MA applicants and MAH bear the responsibility of ensuring that the processing of personal data by AI/ML models is conducted in accordance with applicable data protection legislation and in line with principles outlined therein. EMA recommends conducting a risk assessment of the AI/ML system, including a necessity and proportionality assessment.
Ethical principles outlined in the guidelines for trustworthy AI and presented in the Assessment List for Trustworthy Artificial Intelligence for self-assessment (ALTAI) must be observed at all stages of the medicinal product life cycle. These include human agency and oversight, privacy and data governance, transparency, accountability, and diversity, non-discrimination and fairness.
In a recently published LinkedIn post, Emer Cooke, the Executive Director of EMA, urged interested stakeholders to share their opinion on the draft reflection paper by 31 December 2023. She also underscored the need for the application of AI and ML in medicinal product development and deployment to be in line with ethical principles and respect human rights.
The timing of publication of the draft reflection paper is particularly interesting: the proposed AI Act, set to be the first piece of legislation regulating AI systems, is currently in inter-institutional trilogue negotiations between the European Parliament and the Council of the European Union and is expected to be adopted in the near future. The draft reflection paper appears to mirror the risk-based approach and the human-centric focus that the proposed AI Act envisages.
Our team will continue to monitor the developments around the use of AI and ML throughout the medicinal product life cycle. Please contact the authors or the Hogan Lovells attorneys with whom you regularly work with any questions.
Authored by Anastasia Vernikou and Fabien Roy.