The role that artificial intelligence (“AI”) plays in our everyday lives continues to grow. As a result, so do proposals for how to regulate AI and its relationship with products and end users.
In part 1 of this article, we take a look at the mandatory requirements that businesses involved in the supply of AI products can begin to expect in the EU and the UK. We follow up in part 2 with a comparative analysis for France, the Netherlands and the US, as well as some ideas of what businesses can start to do to prepare for (the inevitable) regulation of AI, sooner rather than later.
The European Commission first released its proposal for a Regulation on Artificial Intelligence (the “AI Act”) on 21 April 2021. It is intended to be the first legislation setting out harmonised rules for the development, placing on the market, and use of AI in the European Union. The exact requirements (which mainly revolve around data quality, transparency, human oversight and accountability) depend on the risk classification of the AI in question, which ranges from high to low and minimal risk, while a number of AI uses are prohibited outright. Given that the AI Act is expected to be a landmark piece of EU legislation with extraterritorial scope, accompanied by hard-hitting penalties (including potential fines of up to €30 million or 6% of worldwide annual turnover), we have been keeping a close eye on developments.
The latest development occurred on 11 May 2023, with Members of the European Parliament (“MEPs”) voting in committee in favour of certain proposed amendments to the original text of the AI Act. Some of the key amendments include:
General AI principles: New provisions containing “general” AI principles have been introduced. These are intended to apply to all AI systems, irrespective of whether they are “high-risk”, thereby significantly expanding the scope of application of the AI Act. At the same time, MEPs expanded the classification of high-risk uses to include those that may result in harm to people’s health, safety, fundamental rights or the environment. Particularly interesting is the addition to the high-risk list of AI used in the recommender systems of social media platforms with more than 45 million users (i.e. those designated as very large online platforms under the EU’s Digital Services Act).
Prohibited AI practices: As part of the amendments, MEPs substantially amended the “unacceptable risk / prohibited list” to include intrusive and discriminatory uses of AI systems. Such bans now extend to a number of uses of biometric data, including indiscriminate scraping of biometric data from social media to create facial recognition databases.
Foundation models: While past versions of the AI Act have predominantly focused on 'high-risk' AI systems, MEPs introduced a new framework for all foundation models. This framework, which would, among other things, require providers of foundation models to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law, would particularly impact providers and users of generative AI. Such providers would also need to assess and mitigate risks, comply with design, information and environmental requirements and register in the applicable EU database, while generative foundation models would also have to comply with additional transparency requirements.
User obligations: 'Users' of AI systems are now referred to as 'deployers' (a welcome change, given that the previous term somewhat confusingly was not intended to capture the ‘end user’). Under this change, 'deployers' become subject to an expanded range of obligations, such as the duty to undertake a wide-ranging AI impact assessment, while end user rights are boosted: end users now have the right to receive an explanation of decisions made by high-risk AI systems.
The next step, plenary adoption, is currently scheduled to take place in June 2023. Following this, the proposal will enter the last stage of the legislative process: 'trilogue' negotiations between the European Parliament, the Council of the European Union and the European Commission on the final form of the AI Act.
However, even if these timelines are adhered to, the traction that AI regulation has been gaining in recent times may mean that the EU’s AI Act will not be the first legislation in this area. Before looking at the developments occurring in this sphere in the UK, let’s consider why those involved in the supply of products need to have AI regulation on their radar in the first place.
The uses of AI are endless. Taking inspiration from a report issued by the UK’s Office for Product Safety and Standards last year, we see AI in the product development space as having the potential to lead to:
Safer product design: AI can be used to train algorithms that develop only safe, compliant products and solutions.
Enhanced consumer safety and satisfaction: Data collected with the support of AI can allow manufacturers to incorporate a consumer’s personal characteristics and preferences into the design process of a product, which can help anticipate how the product will be used and ensure it is designed accordingly.
Safer product assembly: AI tools such as visual recognition can assist with conducting quality inspections along the supply chain, ensuring all of the parts and components being assembled are safe - leaving little room for human error.
Prevention of mass product recalls: Enhanced data collection via AI during industrial assembly can enable problems which are not easy to identify through manual inspections to be detected, thereby allowing issue-detection before products are sold.
Predictive maintenance: AI can provide manufacturers with critical information which allows them to plan ahead and forecast when equipment may fail so that repairs can be scheduled on time.
Safer consumer use: AI in customer services can also contribute to product safety through the use of virtual assistants answering consumer queries and providing recommendations on safe product usage.
Protection against cyber-attacks: AI can be leveraged to detect, analyse and prevent cyber-attacks that may affect consumer safety or privacy.
On the other hand, the use of AI carries risks. In the products space, these could include:
Products not performing as intended: Product safety challenges may result from poor decisions or errors made in the design and development phase. A lack of “good” data can also produce discriminatory results, particularly impacting vulnerable groups.
AI systems lacking transparency and explainability: A consumer may not know or understand when an AI system is in use and taking decisions, or how such decisions are being taken. This lack of understanding can in turn affect the ability of those who have suffered harm to claim compensation, given the difficulty of proving how the harm came about. It is a particular concern because product safety has traditionally envisaged risks to the physical health and safety of end users, while AI products also pose risks of immaterial harm (such as psychological harm) or indirect harm arising from cyber security vulnerabilities.
Cyber security vulnerabilities being exploited: AI systems can be hacked and/or lose connectivity, which may result in safety risks; for example, if a connected fire alarm loses connectivity, the consumer may not be warned if a fire occurs.
Currently, there is no overarching piece of legislation regulating AI in the UK. Instead, different regulatory bodies (e.g. the Medicines and Healthcare products Regulatory Agency, the Information Commissioner’s Office etc.) oversee AI use across different sectors, and where relevant, provide guidance on the same.
In September 2021, however, the UK government announced a 10-year plan, described as the ‘National AI Strategy’. The National AI Strategy aims to invest in and plan for the long-term needs of the AI ecosystem, support the transition to an AI-enabled economy and ensure that the UK gets the national and international governance of AI technologies right.
More recently, on 29 March 2023, the UK Government published its long-anticipated artificial intelligence white paper. Branding its proposed approach to AI regulation as “world leading” in a bid to “turbocharge growth”, the white paper provides a cross-sectoral, principles-based framework to increase public trust in AI and develop capabilities in AI technology. The five principles intended to underpin the UK’s regulatory framework are:
1. Safety, security and robustness;
2. Appropriate transparency and explainability;
3. Fairness;
4. Accountability and governance; and
5. Contestability and redress.
The UK Government has said it would avoid "heavy-handed legislation" that could stifle innovation, which means that, in the first instance at least, these principles will not be enforced through legislation. Instead, responsibility will be given to existing regulators to decide on "tailored, context-specific approaches" that best suit their sectors. The consultation accompanying the white paper is open until 21 June 2023.
However, this does not mean that no legislation in this arena is envisaged. For example:
On 4 May 2023, the Competition and Markets Authority (the “CMA”) announced a review of competition and consumer protection considerations in the development and use of AI foundation models. One of the intentions behind the review is to help produce “guiding principles” that protect consumers and support healthy competition as these technologies develop. A report on the findings is scheduled to be published in September 2023; whether this will result in legislative proposals remains to be seen.
The UK has recently had a particular focus on IoT devices, following the passage of the Product Security and Telecommunications Infrastructure Act in December 2022 and the announcement that the Product Security and Telecommunications Infrastructure (Product Security) Regime will come into effect on 29 April 2024. While IoT devices and AI of course differ, the UK’s willingness to take a stance as a world leader in this space (being the first country in the world to introduce minimum security standards for all consumer products with internet connectivity) may mean that a similar focus on AI should be expected in the near future.
Our Global Products Law practice is fully across all aspects of AI regulation, product safety, compliance and potential liability risks. In part 2 of this article, we look to developments in France, the Netherlands and the US and share our thoughts around what businesses can do to get ahead of the curve to prepare for the regulation of AI around the world.
Authored by Valerie Kenyon, Magdalena Bakowska, and Vicki Kooner.