AI and machine learning technologies are being widely adopted by (re)insurers to pursue new business opportunities. However, it is challenging for (re)insurers to understand the level of risk they will be exposed to when developing their own AI solutions, partnering with external parties or offering new AI-related policies to their customers, partly due to the complexity of the legal, regulatory and commercial landscape surrounding AI and machine learning. This article provides practical guidance on how (re)insurers can navigate this uncertain territory, develop their strategic plans and manage their legal and reputational risks.
The insurance and reinsurance industry has seen a rapid increase in the use of AI and machine learning technology in recent years. A survey published by the Bank of England and the Financial Conduct Authority in October 2022 suggests that 72% of UK financial services firms are developing or deploying machine learning applications.
(Re)insurers have been thinking about AI strategically: they are increasingly embedding AI in their day-to-day operations to increase efficiency, enhance decision-making, reduce costs, gain insights from data and improve customer experience, or starting to offer new insurance products or policies that protect clients against AI-related claims. For instance, Munich Re recently launched aiSelf, a cover for companies that implement self-developed AI solutions, designed to protect them from potential financial losses resulting from AI underperformance.
While AI and machine learning offer new opportunities to (re)insurers and help to transform the financial services sector, it is challenging for (re)insurance companies to understand and measure the actual risk of AI adoption, particularly given the complexity of the legal landscape around evolving AI regulations and existing insurance regulations (and, more challenging still, the intersection of the two).
In that regard, this article provides guidance to help (re)insurance companies assess the risks of AI adoption, outlines some risk mitigation measures, and summarises the current legal landscape in the EU and the UK so that companies can make strategic business decisions and manage their risks.
As a starting point, stakeholders at (re)insurers should evaluate and monitor the extent to which ethical considerations have been taken into account when developing their own AI solutions or outsourcing to third-party providers. By taking the following ethical factors into account, (re)insurers can contribute to the responsible deployment of AI solutions, helping to protect the interests of customers, build trust and reduce their exposure to enforcement risk.
In parallel, (re)insurers will need to assess the various risks of designing or building AI systems, or of introducing new AI-related policies to their customers, and critically evaluate whether the benefits of AI outweigh those risks in their specific context. Some of the risks include:
Considering the risks outlined above, (re)insurers are encouraged to assess the potential impact on their own or their clients' business operations when developing or adopting AI systems. The assessment should also factor in the criticality of the specific operations concerned, the sensitivity of the data involved, the financial or reputational consequences of any disruption or breach, and the value of the relevant service agreements with clients.
Please refer to our article “AI regulation in financial services in the EU and the UK: Governance and risk-management” for a high-level overview of the AI legal landscape in the EU and the UK. This article, however, focuses on the insurance-specific elements of the EU AI Act and the existing laws and regulations that may apply to (re)insurers developing or using AI systems.
One of the biggest challenges for (re)insurers may be understanding how emerging AI regulations such as the AI Act affect their existing or upcoming AI strategies, and knowing which existing insurance laws and regulations will apply to them, their partners or their third-party providers in the context of AI. In particular, where they utilise AI solutions from third-party vendors, their exposure may depend on the third party's compliance with applicable laws and regulations.
The latest draft of the EU AI Act, dated June 2023, for example, indicates that “AI systems intended to be used for making decisions or materially influencing decisions on the eligibility of natural persons for health and life insurance” will be high risk if “they pose a significant risk of harm to the health, safety or fundamental rights of natural persons”. If a (re)insurer's AI systems satisfy these conditions, it will be subject to more stringent regulatory requirements, such as establishing a risk management system, using high-quality data, conducting a risk assessment, ensuring human oversight as well as an appropriate level of accuracy, robustness, safety and cybersecurity, and conducting a conformity assessment.
However, the AI Act has implications beyond health and life insurance policies. If (re)insurers use AI systems to (i) evaluate the creditworthiness of natural persons or establish their credit score (with the exception of AI systems used for the purpose of detecting financial fraud); or (ii) make inferences about the personal characteristics of natural persons on the basis of biometric or biometrics-based data (with some exceptions), the obligations for high-risk AI systems may apply to them. Additionally, (re)insurers are prohibited from putting into service or using AI systems for social scoring (defined as evaluating or classifying natural persons based on their social behaviour, socio-economic status or known or predicted personal or personality characteristics) where this is detrimental or unfair to individuals.
For AI systems that do not fall under the above conditions and constitute low-risk AI systems, some limited obligations, including transparency requirements, may still apply to (re)insurers.
The UK government is not planning to introduce AI-specific legislation or to put AI principles on a statutory footing, at least in the near future. It remains to be seen how the UK government and financial regulators will shape this evolving area of law for (re)insurers.
Beyond AI-specific regulation, (re)insurers may also need to comply with existing laws and regulations triggered by their use of AI systems or their insurance-related activities. Depending on the facts of each case, potentially relevant regimes include the GDPR, the Solvency II Directive, the Insurance Distribution Directive, AML and CTF regulations, outsourcing regulations, cybersecurity laws and the UK Consumer Duty obligations.
If (re)insurers decide to adopt, operate or embed AI solutions in their business operations, they should consider implementing the following risk mitigation measures in order to better manage their risks:
Stakeholders within the (re)insurance industry must conduct a thorough evaluation of their AI use cases, risk tolerance, business requirements, anticipated benefits and the evolving regulatory landscape before adopting AI solutions or introducing new AI-related policies. Should (re)insurers opt to embrace new opportunities, they should begin formulating a plan or strategy, which may involve substantial revisions to their current practices, to align their AI systems with upcoming AI regulations such as the AI Act as well as existing legal frameworks. Additionally, they should put risk-mitigation measures in place to effectively manage any potential legal, regulatory or financial risks.
Authored by Daniel Lee and John Salmon