
On April 22, 2025, the AI Office within the EU Commission launched a multi-stakeholder consultation to assist in the preparation of guidelines clarifying the scope of the AI Act's rules for providers of general-purpose AI (GPAI) models. The guidelines are expected to clarify what constitutes a GPAI model, define "placing on the market," and explain the European AI Office's role in compliance.
The application of the requirements that apply to providers of GPAI models is currently subject to a range of interpretative challenges. This consultation is therefore a critical opportunity for companies involved in the development of AI to shape the future scope of this framework. Clarification is particularly vital on questions such as: which types of models should be considered general-purpose; who should be considered a provider of these models; and in what circumstances the fine-tuning of existing GPAI models by downstream providers could bring those providers within the scope of the requirements.
General-purpose AI (GPAI) models are advanced AI models designed to competently perform a wide array of distinct tasks. These models are trained on vast datasets using large-scale self-supervision techniques, which give them significant versatility and generality. A well-known example is the GPT family of models underlying ChatGPT, trained on hundreds of billions of words to generate human-like responses. Unlike AI systems, which integrate AI models with additional software and hardware components to facilitate user interaction, GPAI models serve as the core intelligence that requires integration to become operational.
The AI Act imposes several obligations on providers of GPAI models. All providers must prepare and maintain comprehensive technical documentation about the model, including details of its training and testing processes, and provide transparency information to AI system integrators. They must also comply with EU copyright rules and publicly disclose a summary of the model's training content. Non-EU providers are required to appoint an authorized representative within the EU to ensure compliance. For GPAI models with systemic risk, additional obligations apply. Providers must assess and mitigate potential systemic risks at the EU level, conduct adversarial testing to identify such risks, and ensure adequate cybersecurity protection. They are also required to report any serious incidents to the relevant authorities without undue delay.
The EU Commission is working towards establishing clear rules for GPAI models and is inviting stakeholders to share their experiences and insights. This initiative is part of a targeted consultation aimed at developing comprehensive guidelines to accompany the AI Act, designed to make the Act more accessible and practical for all involved parties.
The guidelines will provide detailed clarifications on, among other things: what constitutes a GPAI model; when a model is "placed on the market"; who qualifies as a provider; and the role of the European AI Office in supervising compliance.
While the guidelines will not be legally binding, they are intended to offer crucial insight into how the Commission – responsible for supervision and enforcement under the AI Act – will interpret and apply the rules.
The Commission is extending an invitation to a broad range of stakeholders, including providers of GPAI models, downstream AI system providers, civil society, academia, experts, and public authorities, to participate in this consultation.
In addition to this initiative, the Commission will soon launch another targeted consultation focusing on the classification of AI systems as high-risk, further supporting stakeholders in navigating the AI Act's requirements.
Stakeholders can submit their contributions by May 22, 2025.
Through its consultation, the European Commission has already expressed its viewpoint on certain elements that deserve particular attention, which may itself encourage stakeholders to participate and provide their opinions on these points.
The pace of AI regulation is about to pick up significantly. Both the guidelines and the final Code of Practice are expected to be released before August 2025, marking a major milestone in the EU’s evolving approach to governing artificial intelligence.
This consultation represents a vital opportunity for professionals in the AI sector to influence the development of practical and economically viable regulations. By participating, stakeholders can ensure that the guidelines reflect real-world applications and challenges, ultimately leading to more effective and sensible policy implementation.
On April 23, 2025, the European Data Protection Board ("EDPB") reaffirmed in its annual report that it actively participates in cross-regulatory cooperation and contributes to consultations such as this one, highlighting both the need to articulate the GDPR with the AI Act and the "coopetition" between regulators. Stakeholder participation is all the more important to bring a global vision to the European Commission.
Authored by Etienne Drouard, Dan Whitehead, Rémy Schlich, and Sarina Singh.