The EU AI Act: Prohibited practices and AI literacy requirements take effect

On 2 February 2025, the first provisions of the EU’s groundbreaking AI Act started to apply. These provisions prohibit a range of AI-related practices and impose a duty on companies to build AI literacy within their organisations through appropriate training and awareness programmes.

This marks the first of four key milestones in the Act’s implementation, with the next application date falling on 2 August 2025, when requirements on providers of general-purpose AI models start to apply.

Prohibited AI practices

Article 5 of the AI Act prohibits, in the majority of cases, the sale, deployment and use of AI systems for certain purposes. These practices are considered to pose such a significant risk of harm to the fundamental rights or safety of individuals that legislators decided they should no longer be permitted.

There are eight prohibitions in total:

  1. Subliminal, manipulative and deceptive systems. The deployment of subliminal techniques, or of manipulative and deceptive techniques, that have the objective or effect of materially distorting the behaviour of individuals, causing (or being reasonably likely to cause) significant harm.
  2. Exploiting vulnerabilities. The use of an AI system that exploits vulnerabilities of individuals due to their age, disability or specific social or economic circumstances, with the objective or effect of materially distorting the behaviour of those individuals, causing (or being reasonably likely to cause) significant harm.
  3. Social scoring. The evaluation of individuals over time based on their social behaviour or personality, with the social score leading to detrimental treatment that is unjustified or disproportionate, or unrelated to the context in which the data was originally collected.
  4. Crime prediction. The so-called ‘Minority Report’ prohibition, which applies to systems that seek to evaluate and predict the risk of an individual committing a future criminal offence, based on their personality or their wider profile.
  5. Facial recognition databases. Using AI systems to perform the untargeted scraping of facial images from the internet or CCTV, in order to create or expand facial recognition databases.
  6. Emotion recognition. The use of an AI system to infer the emotions of an individual in a workplace or educational environment.
  7. Biometric categorisation. Meaning AI systems used to assign individuals to specific categories on the basis of their biometric data, in order to infer certain protected characteristics (e.g. ethnicity, sexual orientation or religion).
  8. Facial recognition for crime prevention. Specifically, the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purposes of law enforcement, subject to certain exceptions.

The prohibitions apply to any organisation that makes those systems available to other organisations or end-users in the EU (i.e. as a provider) or uses the AI systems for any of the practices outlined above (i.e. as a deployer). Equally, given the wide extra-territorial scope of the Act, AI systems that are wholly developed and operated from outside the EU are still in scope if their outputs impact EU-based individuals.

AI literacy

While the application of the prohibited practices is relatively narrow, the AI literacy obligation in Article 4 of the Act is potentially much broader.

It arguably requires all providers and deployers of any AI systems (irrespective of their risk classification level) to take measures to ensure, to their best extent, a sufficient level of AI literacy. This standard applies to any of the organisation’s staff and other personnel dealing with the operation and use of AI systems on its behalf, which suggests that third-party vendors and contractors could also be in scope.

AI literacy is defined to include the skills, knowledge and understanding required to make an informed deployment of AI systems and to gain awareness of the opportunities, risks and potential harms that these systems can cause.

Enforcement and penalties

Supervision and enforcement rest with the individual EU Member States. Each Member State is responsible for designating its own national authorities to oversee compliance, investigate potential breaches, and impose penalties where necessary. For companies found to be in breach of the prohibitions on AI systems, the consequences can be severe. The AI Act sets out significant financial penalties, with administrative fines of up to €35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. That said, these enforcement powers will only become applicable from 2 August 2025, six months after the prohibitions took effect.

What steps should organisations take?

To comply with the prohibitions and AI literacy requirements, organisations should take the following steps:

  • Produce an inventory of the AI systems that the organisation develops and deploys.
  • Perform an applicability assessment to determine whether any of those AI systems fall within the scope of the prohibitions. See here for Hogan Lovells’ self-assessment tool.
  • Develop a governance framework to identify new use cases that could be prohibited.
  • Develop and roll out an organisation-wide training and awareness programme for AI, including an enhanced programme for those more closely involved in AI activities within the organisation.
  • Confirm that relevant third-party vendors and contractors are taking suitable steps to ensure an appropriate level of AI literacy.

Authored by Dan Whitehead, Jasper Siems, and Juan Ramon Robles.
