
Finding a European consensus on the regulation of artificial intelligence (AI) does not start with the adoption of laws. It results from their common interpretation and their articulation within a broader digital regulatory framework, in the context of a fierce global race for innovation and power. Every stone contributes to a building, but some also serve as foundations for what comes next. These latest guidelines from the CNIL illustrate a more pragmatic approach than other regulatory positions addressing the many issues raised by AI. Much will depend on the consensuses still to be built, but there remains room for ambition and action.
On February 7, 2025, following a public consultation, the French Data Protection Authority (CNIL) issued two new sets of guidelines on (i) informing data subjects and (ii) respecting their rights in the context of AI models.
These new guidelines, together with the guidance and recommendations on AI the CNIL has already published, demonstrate its practical approach to balancing innovation with the protection of individuals' rights and compliance with applicable regulations.
The CNIL's approach also reflects a divergence at the EU level, which will require arbitration and clear positions from the European institutions and other EU regulators of digital services.
The CNIL's guidelines on informing data subjects about AI data processing emphasize the importance of transparency and accountability. Following the public consultation, the CNIL has refined its approach to balance the need for detailed information with practical considerations, particularly concerning indirect data collection and web scraping.
One of the key adjustments made after the public consultation concerns the level of detail to be provided in the information notice about the sources of training data containing personal data. Where data is collected from numerous public sources, the CNIL recommends an information notice that indicates only the categories or typical sources of training data, rather than listing each source individually. For example, if an AI model is trained on data from numerous news websites, the CNIL advises that the notice state that data is sourced from “online news outlets” rather than naming each of them. The CNIL recommends a similar approach for data scraping: mentioning only that data may be collected from “social media sites”, without listing every platform individually. This method preserves transparency while acknowledging the practical limits of exhaustive disclosure.
The CNIL's guidelines also place a strong emphasis on respecting and facilitating the exercise of data subjects’ rights. Under the GDPR, these rights include the rights to access, rectify and erase personal data, as well as the right to object to certain processing of that data.
As part of the key recommendations, the CNIL guidelines highlight the importance of establishing clear mechanisms for responding to data subjects’ requests for rectification. For instance, if a data subject requests the correction of inaccurate personal data used to train or to run an AI model, the developer should implement a verification process and update the data promptly. The CNIL suggests using version control systems to track changes and ensure that rectifications are applied consistently across datasets.
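The CNIL's suggestion of version control for rectifications could be realized in many ways; as a purely illustrative sketch (all class names, fields, and values below are hypothetical, not taken from the guidelines), a rectification workflow might pair each correction with an append-only audit entry:

```python
import hashlib
import json
from datetime import datetime, timezone

class VersionedDataset:
    """Hypothetical sketch: keep an auditable history of rectifications
    so corrections can be traced and applied consistently across datasets."""

    def __init__(self, records):
        self.records = dict(records)  # record_id -> personal data fields
        self.history = []             # append-only audit log

    def _checksum(self):
        # Deterministic fingerprint of the dataset state after each change
        payload = json.dumps(self.records, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def rectify(self, record_id, field, new_value, verified_by):
        """Apply a rectification after a (simulated) verification step,
        recording who verified it and what changed."""
        old_value = self.records[record_id][field]
        self.records[record_id][field] = new_value
        self.history.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "record": record_id,
            "field": field,
            "old": old_value,
            "new": new_value,
            "verified_by": verified_by,
            "checksum": self._checksum(),
        })

# Illustrative data only
ds = VersionedDataset({"u42": {"name": "J. Dupont", "city": "Lyon"}})
ds.rectify("u42", "city", "Paris", verified_by="dpo@example.com")
print(ds.records["u42"]["city"])  # rectified value
```

A real deployment would of course need access controls and propagation of the correction to derived datasets; the point of the audit log is simply to make each rectification traceable.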
The CNIL also recommends using filtering techniques to manage data outputs without requiring a complete re-training of the AI model. This recommendation for ‘outputs’ seems much more practical than applying the rights of access or rectification to training data (‘inputs’). For example, if a data subject requests the removal of their data from an AI system, filters can be applied to ensure that the data does not appear in future outputs. This approach allows compliance with data subjects’ rights without resource-intensive re-training.
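To make the idea concrete, an output filter can be as simple as a post-processing step that redacts the personal data of individuals who have exercised their rights. The following is a minimal sketch under that assumption; the blocklist names are invented for illustration and nothing here reflects an actual CNIL-prescribed implementation:

```python
import re

# Hypothetical register of data subjects who requested erasure/objection;
# in practice this would be maintained through the rights-request workflow.
ERASURE_BLOCKLIST = {"Jean Dupont", "Marie Curie-Martin"}

def filter_output(text, blocklist=ERASURE_BLOCKLIST):
    """Redact blocklisted personal data from model outputs, so the
    underlying model does not need to be re-trained."""
    for name in blocklist:
        text = re.sub(re.escape(name), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

print(filter_output("According to Jean Dupont, the report is final."))
# -> "According to [REDACTED], the report is final."
```

A production filter would need to handle name variants, transliterations, and indirect identifiers, but the design choice is the same: intervene at the output layer rather than in the trained model weights.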
Indeed, the CNIL acknowledges that the exercise of data subjects’ rights must be balanced against operational realities. Where re-training an AI model to accommodate a data subject's request would be disproportionately burdensome, the CNIL suggests considering alternative measures such as data anonymization or pseudonymization. It gives examples of scenarios where such measures can be implemented effectively at the ‘input’ or even ‘scraping’ stage, protecting data subjects' rights while minimizing operational disruption.
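Pseudonymization at the ‘input’ stage typically means replacing direct identifiers with keyed tokens before the data ever reaches training. The sketch below is one common pattern (keyed hashing); the key, field names, and records are hypothetical and not drawn from the guidelines:

```python
import hmac
import hashlib

# Hypothetical secret key: under GDPR-style pseudonymization it must be
# stored separately from the pseudonymized dataset.
SECRET_KEY = b"rotate-me-and-store-separately"

def pseudonymize(record, direct_identifiers=("name", "email")):
    """Replace direct identifiers with keyed hashes before training.
    Unlike anonymization, re-identification remains possible for the
    key holder, which is what GDPR calls pseudonymization."""
    out = dict(record)
    for field in direct_identifiers:
        if field in out:
            digest = hmac.new(SECRET_KEY, out[field].encode(),
                              hashlib.sha256).hexdigest()
            out[field] = f"pseud_{digest[:12]}"
    return out

raw = {"name": "Jean Dupont", "email": "jean@example.com", "age": 41}
safe = pseudonymize(raw)
print(safe["age"])       # non-identifying attributes are kept: 41
print(safe["name"][:6])  # identifiers become opaque tokens: "pseud_"
```

Keyed hashing keeps tokens deterministic, so the same individual maps to the same pseudonym across datasets, which preserves analytical utility while removing direct identifiers from the training pipeline.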
The timing of the CNIL's guidance reflects a shrewd communication strategy, coming just before the AI Action Summit held in Paris (February 10-13, 2025) and the massive private investments announced by the French government for the coming years, both to develop AI infrastructure (mainly data centers) and tools and to support EU-based companies developing AI-based services and products.
The CNIL nonetheless recognizes the need to balance data protection with the practical realities of AI development. It has shown flexibility both in its earlier endorsement of legitimate interest as a workable legal basis and in these newly published guidelines, which address the substantial feedback and concerns expressed during the consultation by industry professionals, researchers, and civil society organizations, to ensure that data protection measures are both effective and feasible.
This contrasts with the general approach at the EU level. The European AI Act, while essential for establishing a governance framework for AI, has been criticized for its complexity, for example. The contrast between the CNIL's flexible approach and the more rigid stance of the EU institutions (Parliament and Commission) underscores the need for a balanced, coordinated regulatory environment that supports both data protection – among other EU values – and innovation.
This also highlights the urgent need for harmonization among EU data protection authorities. A unified EU approach could provide a consistent regulatory environment that supports innovation, balancing the need for clear guidelines with the flexibility required to adapt to the rapidly evolving AI landscape. Ultimately, the success of AI and privacy regulation in Europe will depend on finding this balance, ensuring that individuals’ rights are protected without impeding the progress of AI technologies. As always with AI and innovation, articulating complexity with timing is of the essence.
Authored by Etienne Drouard, Julie Schwartz, Rémy Schlich, and Sarina Singh.
CNIL’s new guidelines on AI models and individuals’ rights: https://www.cnil.fr/en/ai-and-gdpr-cnil-publishes-new-recommendations-support-responsible-innovation