News

Artificial intelligence in the insurance sector: fundamental rights impact assessments

01 April 2024

The new AI Act establishes an obligation for the deployers of certain high-risk AI systems to conduct a “fundamental rights impact assessment” (“FRIA”). This will have a significant impact on insurance companies that use AI systems for risk assessment and pricing of life and health insurance products, as well as for creditworthiness purposes.

Brief intro to the AI Act

Recently, the EU Parliament approved the Artificial Intelligence Regulation (“AI Act”). This regulation (expected to receive final approval around April) aims to regulate the development, deployment and use of AI systems within the EU. It categorizes these systems into different risk levels, imposing stricter obligations on higher-risk AI systems. The AI Act also prohibits certain uses of AI (e.g., systems designed to manipulate behaviour or exploit vulnerabilities).

Insurance companies making use of AI systems will have to comply with several obligations. For instance, they will need to have an AI governance program in place, affix the CE mark, comply with transparency obligations, among other requirements, and, in some cases, conduct a FRIA.

What is a FRIA?

A FRIA is an assessment that deployers* must conduct before deploying certain high-risk AI systems for the first time. The aim of the FRIA is for the deployer to identify the specific risks to the fundamental rights of the people affected and the measures to be taken if those risks materialize. This assessment must include the following elements (a structured sketch follows the list):

  • a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose and a description of the period of time during which each high-risk AI system is intended to be used;
  • the categories of natural persons and groups likely to be affected by its use in the specific context;
  • the specific risks of harm likely to impact the categories of persons or groups of persons identified pursuant to the previous bullet, taking into account the information given by the provider;
  • a description of the implementation of human oversight measures, according to the instructions for use, and the measures to be taken where those risks materialise, including the arrangements for internal governance and complaint mechanisms.
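
For illustration only, those elements could be captured in a structured internal template, which would also make it easier to keep the assessment up to date and, later on, to complete the authority’s questionnaire. A minimal sketch in Python; the class and field names are hypothetical, not a format prescribed by the AI Act:

    from dataclasses import dataclass, field

    @dataclass
    class FRIARecord:
        """Hypothetical internal template mirroring the elements a FRIA
        must include under the AI Act (illustrative, not prescribed)."""
        # Deployer processes in which the high-risk AI system will be used,
        # in line with its intended purpose, and the intended period of use.
        deployer_processes: str
        period_of_use: str
        # Categories of natural persons and groups likely to be affected.
        affected_categories: list[str] = field(default_factory=list)
        # Specific risks of harm to those categories, taking into account
        # the information given by the provider of the system.
        risks_of_harm: list[str] = field(default_factory=list)
        # Human oversight measures (per the instructions for use) and the
        # measures to be taken where risks materialise, including internal
        # governance and complaint mechanisms.
        human_oversight: list[str] = field(default_factory=list)
        governance_and_complaints: list[str] = field(default_factory=list)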

Who has to perform a FRIA and when?

The FRIA has to be performed by the deployers of certain high-risk AI systems, including, among others: (i) AI systems that evaluate the creditworthiness of natural persons or establish their credit score (except for systems used to detect financial fraud); and (ii) AI systems used for risk assessment and pricing in relation to natural persons for life and health insurance.
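
As a rough illustration of that scoping rule, a deployer’s intake process could flag these categories automatically. This is a hypothetical sketch: the use-case labels and the function name are our own, and the fraud-detection carve-out is simplified; the legal test remains the one in the AI Act itself.

    # Use cases that trigger a FRIA as described above
    # (labels are illustrative, not legal categories).
    FRIA_TRIGGERS = {
        "creditworthiness_evaluation",    # (i) creditworthiness / credit scoring
        "credit_scoring",
        "life_insurance_risk_pricing",    # (ii) life and health insurance pricing
        "health_insurance_risk_pricing",
    }

    # Carve-out: systems used for detecting financial fraud are out of scope.
    EXEMPTIONS = {"financial_fraud_detection"}

    def fria_required(use_case: str) -> bool:
        """Return True if the use case falls under the FRIA obligation
        as described above (simplified sketch, not legal advice)."""
        return use_case in FRIA_TRIGGERS and use_case not in EXEMPTIONS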

Therefore, the AI Act will have a major impact on the insurance sector, as companies operating in this area may use these kinds of systems in their daily activities. There is no doubt that AI can be very helpful for calculating life and health insurance premiums, but these companies must also weigh the fundamental rights of individuals. In fact, the AI Act names banking and insurance entities as examples of companies that should carry out a FRIA before implementing this kind of AI system.

Although the FRIA needs to be performed only before deploying the system for the first time, the deployer needs to update any element that changes or is no longer up to date. Also, in similar cases, the deployer can rely on previously conducted FRIAs or on existing impact assessments carried out by the provider of the system. In addition, the FRIA could be part of, and complement, a data protection impact assessment (“DPIA”) under Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation).

Does this assessment have to be notified to any authority?

Yes, the deployer has to notify the market surveillance authority of its results (in Spain, the Statute of the Spanish Artificial Intelligence Supervisory Agency has already been approved). Along with this, a questionnaire will have to be completed through an automated tool to be developed by the AI authority.

How should FRIAs be carried out in practice?

Depending on how AI obligations are structured within the company, there are several options. The one that may make the most sense for insurance companies is to carry out the FRIA together with the DPIA, as there may be many synergies to leverage. This way, the data protection officer and the privacy team can also be involved.

In addition, insurance companies already have procedures in place to carry out DPIAs. Integrating FRIAs into the same process could be less problematic and require fewer resources.
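
To make that integration concrete, the combined procedure can be as simple as adding one gating step to the existing DPIA workflow. A hypothetical sketch, where run_dpia() and run_fria() stand in for the company’s existing and new assessment procedures:

    def run_combined_assessment(system, run_dpia, run_fria):
        """Hypothetical combined workflow: reuse the existing DPIA
        procedure and add the FRIA as a complementary step."""
        dpia_report = run_dpia(system)        # existing privacy process
        fria_report = None
        if fria_required(system.use_case):    # intake check sketched above
            # The FRIA can complement the DPIA, so pass its findings along.
            fria_report = run_fria(system, dpia_report)
        return dpia_report, fria_report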

Finally, FRIAs should be aligned with the insurance company’s AI governance program. Very often, the risks for individuals (e.g., the existence of biases or discrimination) will already be covered by that program.

When should insurance companies start carrying out FRIAs?

Even though the “formal” obligation will only become applicable in a couple of years, the sooner the FRIA process is ready, the better. This way, the implementation of the AI Act will be smoother and the company will be in a position to demonstrate compliance.

Next steps

  • Insurance companies should start creating the internal procedure to implement and validate FRIAs (potentially integrating it with the DPIA process).
  • The FRIA process should also be aligned with the content of the AI governance program.
  • Insurance companies should identify the uses of AI systems that would require a prior FRIA.

 

* A deployer is the natural or legal person using an AI system under its authority for a professional activity; it may be different from the developer or distributor of the system.

Authored by Gonzalo F. Gállego, Juan Ramón Robles, and 

Contacts

Gonzalo F. Gállego
Partner, Madrid

Juan Ramón Robles
Senior Associate, Madrid


