News

EU Commission seeks stakeholder contributions to shape guidelines on GPAI models


On April 22, 2025, the AI Office within the EU Commission launched a multi-stakeholder consultation to assist in the preparation of guidelines aimed at clarifying the scope of the AI Act's rules for providers of GPAI models. The guidelines are expected to clarify what constitutes a GPAI model, define "placing on the market," and explain the European AI Office's role in compliance. 

The requirements that apply to providers of GPAI models are currently subject to a range of interpretative challenges. This consultation is therefore a critical opportunity for companies involved in the development of AI to shape the future scope of this framework. Clarification is particularly needed on which types of models should be considered general-purpose, who should be considered a provider of these models, and in what circumstances the fine-tuning of existing GPAI models by downstream providers could bring them within the scope of the requirements. 

What is a GPAI model under the AI Act? 

General Purpose AI (GPAI) models are advanced AI models designed to competently perform a wide array of distinct tasks. These models are trained on vast datasets using large-scale self-supervision techniques, which endow them with significant versatility and generality. A well-known example is the GPT family of models underlying ChatGPT, trained on hundreds of billions of words to generate human-like responses. Unlike AI systems, which integrate AI models with additional software and hardware components to facilitate user interaction, GPAI models serve as the core intelligence that requires integration to be operational. 

The AI Act imposes several obligations on providers of GPAI models. All providers must prepare and maintain comprehensive technical documentation about the model, including details of its training and testing processes, and provide transparency information to AI system integrators. They must also comply with EU copyright rules and publicly disclose a summary of the model's training content. Non-EU providers are required to appoint an authorized representative within the EU to ensure compliance. For GPAI models with systemic risk, additional obligations apply. Providers must assess and mitigate potential systemic risks at the EU level, conduct adversarial testing to identify such risks, and ensure adequate cybersecurity protection. They are also required to report any serious incidents to the relevant authorities without undue delay. 

What is the purpose of the EU Commission's guidelines? 

The EU Commission is working towards establishing clear rules for GPAI models and is inviting stakeholders to share their experiences and insights. This targeted consultation will feed into comprehensive guidelines accompanying the AI Act, designed to make the Act more accessible and practical for all involved parties. 

The guidelines will provide detailed clarifications on: 

  • what constitutes a general-purpose AI model; 
  • which entities qualify as providers in different contexts; 
  • what actions fall under the “placing on the market” definition; 
  • how the European AI Office will assist stakeholders with compliance; 
  • how signing the Code of Practice, once approved by the AI Office and the AI Board, can reduce the administrative burden for providers and act as a compliance benchmark. 

While the guidelines will not be legally binding, they are intended to offer crucial insight into how the Commission – responsible for supervision and enforcement under the AI Act – will interpret and apply the rules. 

The Commission is extending an invitation to a broad range of stakeholders, including providers of GPAI models, downstream AI system providers, civil society, academia, experts, and public authorities, to participate in this consultation. 

In addition to this initiative, the Commission will soon launch another targeted consultation focusing on the classification of AI systems as high-risk, further supporting stakeholders in navigating the AI Act's requirements. 

Stakeholders can submit their contributions by May 22nd, 2025. 

What are the key points already explored by the EU Commission? 

Through its consultation, the European Commission has already expressed its position on certain elements that deserve particular attention, which may prompt stakeholders to respond with their own views. For example: 

  • the criteria for determining what constitutes a GPAI model: a model is considered a GPAI model if it can generate text or images and has been trained with more than 10^22 floating-point operations (FLOP), a measure of the amount of compute used in training; even a model that cannot generate text or images may still be classified as a GPAI model if it demonstrates a similar level of generality and capability; 
  • the methodology for estimating the amount of compute used: the AI Office suggests two methods for estimating the computational resources used in training or modifying an AI model: a hardware-based approach, which involves measuring the usage of Graphics Processing Units (GPUs), and an architecture-based approach, which involves calculating the expected number of floating-point operations (FLOP) from the model's design; 
  • the definition and criteria for determining which entities qualify as providers in different contexts: Article 3(3) of the AI Act defines a provider as “a natural or legal person, public authority, agency or other body that develops (…) a general-purpose AI model developed and places it on the market”. The Commission clarifies which entity is subject to the obligations for providers of GPAI models through various illustrative examples. For example, if Entity A has a general-purpose AI model developed for it by Entity B and Entity A places that model on the market, then Entity A is the provider.
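To make the compute criterion above concrete, the following is a minimal sketch of an architecture-based estimate. It assumes the widely used 6·N·D approximation (training compute ≈ 6 FLOP × parameters × training tokens); this heuristic is an illustrative assumption, not the AI Office's prescribed methodology, and all names below are hypothetical.

```python
# Illustrative sketch only: compares an architecture-based training-compute
# estimate against the 10^22 FLOP criterion discussed in the consultation.
# The 6*N*D rule of thumb is an assumption, not the AI Office's method.

GPAI_FLOP_THRESHOLD = 1e22  # training-compute criterion from the consultation


def estimate_training_flop(num_parameters: float, num_training_tokens: float) -> float:
    """Architecture-based estimate: ~6 FLOP per parameter per training token."""
    return 6 * num_parameters * num_training_tokens


def exceeds_gpai_threshold(num_parameters: float, num_training_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds 10^22 FLOP."""
    return estimate_training_flop(num_parameters, num_training_tokens) >= GPAI_FLOP_THRESHOLD


# Example: a 7-billion-parameter model trained on 2 trillion tokens
flop = estimate_training_flop(7e9, 2e12)
print(f"{flop:.1e}")                      # 8.4e+22
print(exceeds_gpai_threshold(7e9, 2e12))  # True
```

Under this rough heuristic, even mid-sized open models trained on trillion-token datasets would clear the 10^22 FLOP criterion, which illustrates why stakeholders may want to comment on where the threshold is set.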

Why should all stakeholders take part? 

The pace of AI regulation is about to pick up significantly. Both the guidelines and the final Code of Practice are expected to be released before August 2025, marking a major milestone in the EU’s evolving approach to governing artificial intelligence. 

This consultation represents a vital opportunity for professionals in the AI sector to influence the development of practical and economically viable regulations. By participating, stakeholders can ensure that the guidelines reflect real-world applications and challenges, ultimately leading to more effective and sensible policy implementation. 

On April 23, 2025, the European Data Protection Board (“EDPB”) reaffirmed in its annual report that it actively participates in cross-regulatory cooperation and contributes to consultations such as this one, highlighting both the need to articulate the GDPR with the AI Act and the “coopetition” between regulators. Stakeholders' participation is all the more important to bring a global vision to the European Commission. 

 

Authored by Etienne Drouard, Dan Whitehead, Rémy Schlich, and Sarina Singh.
