AI in German Employment – Navigating the AI Act, GDPR, and National Legislation

Employers around the world are increasingly using artificial intelligence (AI) to optimize many facets of their business operations, ranging from screening job applications and assigning tasks in real time to evaluating employee performance and making decisions about promotions and terminations. While AI offers compelling advantages, particularly in terms of efficiency and cost-effectiveness, employers, especially those in Germany, must navigate an increasingly complex regulatory landscape that encompasses emerging international standards and national legislation.

Setting The Scene: The Upcoming European AI Act

The regulation of AI used within the European Union (EU) begins at the supranational level. Following extensive negotiations, European bodies have reached an agreement on the first comprehensive legal framework for AI, with significant implications for companies operating in the EU. As final technical details are being ironed out, it is prudent for companies operating in Germany to acquaint themselves with the forthcoming legal parameters outlined in the AI Act, particularly concerning the use of AI in employment contexts.

Focusing on the “AI system”

The AI Act centers on the concept of the “AI system”, defined as

  • a machine-based system
  • designed to operate with varying levels of autonomy and
  • that may exhibit adaptiveness after deployment and
  • that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (Art. 3 Para. (1) AI Act).

The definition deliberately narrows the scope of the AI Act compared to the EU Commission’s earlier proposal, which covered almost all software systems. This refined definition aims to capture the essential attributes of AI systems, such as the capability for inference, operation without human intervention, and self-learning capabilities. These attributes distinguish AI systems from traditional software, which typically follows predetermined rules defined by humans (Recital 6 AI Act).

Appropriate risk assessment – The risk-based approach

Considering the diverse areas of application of AI and the varying levels of intensity and associated risks, the AI Act adopts a risk-based approach, categorizing AI systems as presenting “unacceptable”, “high”, or “low/minimal” risks (Recital 14 AI Act). In accordance with this approach, AI systems posing an unacceptable risk are explicitly prohibited by law (Art. 5 AI Act). Employers need to be aware of AI systems that are likely to pose unacceptable risks, such as those that use manipulative techniques or social scoring methods, as non-compliance may result in significant fines.

In the context of employment, most AI software is likely to fall into the high-risk category. Employers should consult the annex to the AI Act for specific guidance, as legislators have outlined certain use cases that are considered high-risk. These include:

  • AI systems intended to be used for recruitment or selection of natural persons, notably to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates;
  • AI intended to be used to make decisions affecting terms of work-related relationships, promotion, and termination of work-related contractual relationships, to allocate tasks based on individual behavior or personal traits or characteristics and to monitor and evaluate performance and behavior of persons in such relationships.

Far-reaching obligations when using high-risk AI systems

While providers of high-risk AI systems are bound by comprehensive conformity obligations, employers utilizing such systems within the employment context must adhere to the obligations outlined for deployers in the AI Act (Art. 29 AI Act). This entails implementing technical and organizational measures to comply with the instructions for use provided by AI system providers. Moreover, employers are required to assign human oversight to a competent individual, to inform workers’ representatives and affected individuals of the use of high-risk AI systems prior to putting such systems into service, and, without prejudice to other legal obligations, to inform affected individuals if AI is used in decision-making processes. In addition, affected individuals may also be entitled to request from the employer an explanation of decisions assisted by AI systems.

Data Protection Limits of AI-Based Decision-Making

Employers contemplating the use of AI in the context of decision-making must also take into account the provisions of the General Data Protection Regulation (GDPR). According to the GDPR, employees have a “right” not to be subject to a decision based solely on automated processing which produces legal effects concerning them or similarly significantly affects them (Art. 22 Para. (1) GDPR). This right is recognized as a prohibition in principle, which, among other things, prohibits the automatic rejection of job applications, AI-driven task assignment, and the automatic termination of employment relationships without human intervention.

Preparatory processes in the realm of the GDPR

The prohibition of automatic decision-making, however, does not preclude employers from utilizing AI in the preparation phase of the decision-making. To comply with the GDPR, employers must ensure a sufficient level of human intervention throughout the decision-making process. In this regard, it is highly recommended that employers critically assess any AI-generated output before reaching a final decision.

Deviating from this principle, and contrary to prevailing legal commentaries, the European Court of Justice (CJEU) recently issued a landmark decision that significantly extends the scope of Art. 22 GDPR to include seemingly preparatory measures that are carried out automatically (CJEU, C-634/21). The CJEU reasoned that a pure credit score, automatically generated by a credit information agency, qualifies as automatic decision-making under the GDPR. This is because, in the given case, the agency’s contractual partners relied heavily on the score provided by the agency to make decisions on loan applications. The CJEU’s expansive interpretation of the prohibition of automatic decision-making is likely to impact all sectors relying on AI for decision-making processes. However, given the diversity of decisions within the employment context, it remains uncertain to what extent the courts will regard mere preparatory measures as automatic decision-making in said context.

AI-based decisions in exceptional cases

The CJEU’s extension of the scope of Art. 22 GDPR highlights the statutory exceptions to the prohibition of automatic decision-making. According to the GDPR, automatic decision-making may be permitted in three scenarios: (i) if the decision is necessary for entering into, or performance of, the employment contract; (ii) if the decision is authorized by national law; or (iii) if the decision is based on the employee’s explicit consent. In the absence of specific German national law authorizing automatic decision-making in particular cases, practical considerations center on the “necessity” test. In the absence of established case law, best practice dictates that automatic rejection of job applications may only be justified in extreme circumstances, while automatic allocation of tasks may be permissible, depending on the individual circumstances of each case.

Works Council Involvement When Implementing AI Systems

Under German law, the implementation of AI systems in the employment context also requires compliance with specific AI regulations outlined in the Works Constitution Act. The Works Constitution Act essentially mandates the involvement of the works council in, among other things, social and personnel matters. This includes the works council’s right of co-determination regarding the introduction and use of technical devices designed to monitor the employees’ behavior or performance, which regularly extends to the implementation of AI systems.

In the field of AI, the works council has additional rights, including the right to seek expert advice where the works council has to assess the introduction or use of AI to carry out its tasks. Employers are advised to acknowledge the works council’s rights from the outset to facilitate the implementation process, not only from a legal perspective but also from a technical one. Collaboration with experts can provide crucial insight into AI capabilities and challenges.

Conclusion: Preparing For AI On Multiple Fronts

Companies considering the (further) integration of AI into their employment practices must navigate a complex regulatory landscape spanning EU regulations such as the AI Act and the GDPR, as well as national legislation. While the practical impact of these rules has yet to be fully determined, it is prudent for employers to review their procedures, to embrace emerging guidance, and to take proactive steps to ensure compliance with the evolving regulatory framework.

Authored by Dr. Justus Frank, Maître en droit, LL.M.
