The EU AI Act is on the verge of becoming the first law to specifically govern the use of artificial intelligence at a European level. In light of the extensive set of new obligations imposed on companies, and considering the broad extra-territorial scope of the regulation, businesses in all sectors need to prepare adequately for the new law. Part 1 of our analysis detailed the AI Act’s core concepts and requirements and forecast its impact on businesses. In this part 2, we walk you through more detailed considerations on how to implement the specific new requirements of the AI Act, and the steps companies should consider for building appropriate AI governance.
Since part 1 of this series, the European Parliament's committees on Internal Market and Consumer Protection ("IMCO") and on Civil Liberties, Justice and Home Affairs ("LIBE") endorsed the AI Act on 13 February 2024. On 13 March 2024, the European Parliament finally adopted the consolidated AI Act. Before the AI Act can be signed and published in the Official Journal of the European Union to take effect, the Council of the EU will have to formally adopt it at ministerial level as well.
With this update in mind, and having provided an analysis of the AI Act’s core concepts in part 1, we will continue in this part with our analysis of the AI Act’s impact on businesses and how they can best prepare for compliance.
To ensure compliance and avoid regulatory risks, it is essential for businesses to swiftly and meticulously evaluate the impact of the AI Act – and to prepare for the transformation that AI will bring to every aspect of business operations. In this respect, preparations for the AI Act should be seen as part of the overall governance measures that a company must take to address the development and deployment of AI.
While the AI Act predominantly focuses on high-risk use cases, with supplementary rules for general purpose AI ("GPAI"), it would be wrong to focus solely on these types of systems. Companies should generally evaluate the use of any AI within their organization, and compliance also needs to be considered from the perspective of the existing legal framework, with privacy, IP, consumer protection and anti-discrimination laws applying to a broader range of systems.
Equally, the rapid development of AI technologies and their use in every organization underlines the need for robust governance structures in any company.
In this article, we will therefore examine how AI governance can be approached in practice.
To ensure and be able to demonstrate compliance with the new requirements under the AI Act, the potential implications of the regulation will need to form part of every company’s overall AI governance program.
AI governance forms part of the digital governance that needs to be implemented within every business. It requires an interdisciplinary approach taking into account various factors, including legal, ethical, risk-management, strategic, practical and other considerations, and it overlaps with, and is closely intertwined with, the organization’s data governance program.
We have outlined below a four-step approach for building a robust AI governance program:
First of all, organizations should determine and document, in a central AI inventory and repository, which AI systems and models they have developed and/or deploy or use. The inventory should include information such as the nature of the technology, intended purposes, types of outputs generated by the system, relevant data processed, and any third-party vendors involved.
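To make the shape of such an inventory concrete, the record fields described above could be sketched as a simple data structure. This is purely illustrative: the field names, the example system and the vendor name are our own assumptions, not prescribed by the AI Act.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIInventoryEntry:
    """One record in a central AI inventory (illustrative field names only)."""
    system_name: str
    technology: str                 # nature of the technology
    intended_purposes: List[str]    # documented intended purposes
    output_types: List[str]         # types of outputs generated by the system
    data_processed: List[str]       # categories of relevant data processed
    vendors: List[str] = field(default_factory=list)  # third-party vendors involved

# Hypothetical example entry
entry = AIInventoryEntry(
    system_name="CV screening assistant",
    technology="machine-learning classifier",
    intended_purposes=["pre-selection of job applicants"],
    output_types=["ranking score"],
    data_processed=["applicant CVs"],
    vendors=["ExampleVendor GmbH"],
)
```

In practice such records would live in a dedicated inventory tool or register rather than in code; the point is that each system, its intended purposes and its data flows are captured in one structured place that can be reviewed and updated.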
There should be a clear allocation of responsibilities for the oversight and management of the usage of each AI system during its lifecycle.
The documentation should also consider the territorial scope of application, the intended use cases and interfaces to other systems, and the relevant business relationships with AI suppliers or deployers (including documentation of contractual arrangements).
The necessary mapping and preparation also has a jurisdictional component, and requires identifying the applicable legislation and regulatory guidance in the relevant territorial and material scope of application.
The AI mapping and inventory is a living repository, as AI systems, business relationships and use case scenarios constantly evolve over time. The AI Act, like other laws, classifies high-risk systems according to their intended use, so it is necessary to track the exact use of AI technology (which often, and not only in the case of GPAI, allows for a broad range of different uses within a company). Therefore, it is essential to implement appropriate procedures within a company to review and update the inventory on a regular basis.
Even if a company does not use AI systems, it should be aware that its contract partners might do so. Accordingly, it should map out its business’s sensitivities and include appropriate safeguards within its vendor and business partner due diligence, to identify whether partners are using, or planning to use, AI and, if so, what guardrails they have implemented or are planning to implement.
The next steps include:
The Applicability & Impact Analysis involves assessing how the legal and regulatory landscape for AI in the specific jurisdiction and industry (including the AI Act, sector-specific laws, and general laws, such as in the area of IP/trade secrets and data protection) applies to and impacts your company's specific business.
Within the scope of the AI Act, this requires an appropriate classification of AI systems and scoping of intended use cases. To the extent the AI Act is applicable, businesses should assess which risk category and set of obligations their specific uses of AI fall into. As we laid out in part 1, the AI Act categorizes AI systems according to the risks of their capabilities and utilization, and allocates respective sets of obligations, with the risk categories being:
- Unacceptable risk: AI practices that are prohibited outright;
- High risk: AI systems subject to the core obligations of the AI Act, including conformity assessments and ongoing monitoring;
- Limited risk: AI systems subject to specific transparency obligations; and
- Minimal risk: AI systems not subject to specific obligations under the AI Act.
An additional set of obligations applies to providers of GPAI (in particular, where the AI triggers systemic risks).
The Compliance Gap and Risk Analysis is an essential step in identifying and managing business risks.
Based on the findings from step 2, businesses should determine their overall strategy for integrating the requirements and necessary compliance steps into their broader goals and values as well as existing structures and procedures, and for building an effective governance program that ensures compliance with legal requirements while supporting the business objectives.
Steps to consider are:
Determining organizational structure, roles and responsibilities. One of the current challenges for businesses is how to determine the appropriate organizational structure for building adequate AI governance within their organizations. The diverse nature of the topics that emerge when offering, deploying or using AI requires an interdisciplinary and cross-functional approach coordinated across the organization. This implies developing the overall organizational framework, assigning new roles to existing positions or creating entirely new functions, assigning clear responsibilities to these roles, establishing reporting and coordination mechanisms, and setting up cooperation for each stage of the AI lifecycle and the different responsibilities connected to these stages.
Developing policy framework, standards and procedures. This includes determining the policies, standards, and procedures required within an organization not only to ensure, and be able to demonstrate, compliance with the legal requirements under applicable laws, including the AI Act, but also to achieve the relevant business objectives, and to protect company interests and assets. This policy framework needs to be developed in light of various legal, ethical and other considerations, and be aligned with company policies and requirements in various other areas, such as data governance, data protection, intellectual property, protection of trade secrets, competition law, IT and cybersecurity, and risk management.
Implementing technical measures and organizational procedures. This step requires establishing technical and organizational measures to ensure effective execution of the respective governance framework within the organization. In particular, the AI Act obliges providers of high-risk AI systems, for example, to implement technical measures concerning traceability, transparency, human oversight, data governance, cybersecurity and robustness. Organizational procedures must also be established. For high-risk AI systems, these procedures are required in particular in the context of risk management, technical documentation, quality management, conformity assessment, testing and monitoring, incident reporting and registration.
Furthermore, the governance program must be implemented internally in an effective, binding and enforceable manner. This requires, inter alia, management buy-in (tone from the top), mechanisms for ensuring the binding nature of policies, standards and procedures, appropriate training and human oversight, ensuring AI literacy, and regular monitoring and controls (including potential sanctioning of misconduct within the company).
In particular, appropriate training forms an essential component of an effective internal compliance program. The AI Act requires businesses to establish “AI literacy” among staff and other persons dealing with the operation and use of AI systems on their behalf. AI literacy is understood to refer to the skills, knowledge and understanding that allow providers, deployers and affected persons to make informed decisions relating to the development and deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.
Building and maintaining compliance documentation. To be able to demonstrate compliance with applicable legal requirements, international standards and internal policies, the AI governance program requires appropriate documentation of compliance measures and standards implemented within the organization. The AI Act, in particular, specifies certain documentation to be established and maintained by businesses falling under the scope of the Regulation.
In general, components of appropriate documentation can entail, for example: technical documentation of the AI systems concerned, records of risk and conformity assessments, logs and monitoring records, training and AI literacy records, and documentation of internal policies, standards and procedures.
First, maintaining an effective governance program means implementing regular audits and controls to review, update and improve it, including the effectiveness of policies, standards and procedures.
Second, the regulatory landscape must be continuously monitored; new laws, jurisprudence, codes, regulatory guidance and good practice standards are emerging on all fronts and may bring new requirements and obligations that require companies to adapt and refine their governance systems.
Last but not least, businesses should be aware of the fact that at international level, the EU institutions will continue to work with multinational organizations, including the Council of Europe (Committee on Artificial Intelligence), the EU-US Trade and Technology Council (TTC), the G7 (Code of Conduct on Artificial Intelligence), the Organisation for Economic Co-operation and Development ("OECD") (Recommendation on AI), the G20 (AI Principles), and the UN (AI Advisory Body), to promote the development and adoption of rules beyond the EU that will have to be aligned with the requirements of the AI Act.
Authored by Leopold von Gerlach, Martin Pflüger, Nicole Saurin, Stefan Schuppert, Jasper Siems, and Dan Whitehead.