RSF withdraws from negotiations on European AI Act’s Code of Practice

The third version of the Code of Practice for the European Union (EU) regulation on artificial intelligence, the AI Act, remains largely insufficient. After six months of fruitless participation in the negotiations, Reporters Without Borders (RSF) condemns the absence of safeguards for the right to information and the tech industry’s overwhelming influence over the process. The organisation is ending its contribution to drafting the text and withdrawing from the negotiating table.

After three months of negotiations under increasing pressure from tech giants, on 11 March 2025, the European AI Office published the third working version of the AI Act's Code of Practice. The verdict is unequivocal: issues concerning the information space have gradually been removed from what is supposed to be a self-regulation tool for AI developers, designed to demonstrate their adherence to the principles outlined in the AI Act, which came into force on 1 August 2024.

It is vital that the Code of Practice focus on safeguarding trustworthy information, yet these protections are lacking: the right to information is not even mentioned in the text. Nor does the text address the risks that the unregulated development of AI poses to trustworthy information, such as deepfakes, the proliferation of AI-generated fake news sites, and the disinformation embedded in chatbots. The protection of fundamental rights, such as dealing with the systemic risks AI could pose to democratic elections, has been relegated to an appendix, and its consideration is optional.

Without prospects for addressing these fundamental issues before the final version of the text, scheduled for May, RSF, which had participated in drafting the Code, announced its withdrawal from negotiations.

“RSF has decided to slam the door on an exercise now doomed to fail. The Code does not contain a single concrete provision to combat the proven dangers AI poses to accessing reliable information. Democratic issues cannot be sidelined to an appendix, as they currently are in this text. Defending the right to information is not optional, and it is inexcusable that a European text neglects it to this extent, even provisionally. We have not been heard, and we will not play the role of ‘useful idiots.’ Instead of encouraging a flawed self-regulation effort backed by the European Commission, institutions must guarantee the democratic regulation of the tech that will profoundly reshape the future of journalism.”

Thibaut Bruttin
RSF Director General

Since December 2024, RSF has advocated by every means possible for the Code to adequately protect the right to reliable information: public communications, participation in meetings, direct engagement with the chairs of working groups, and contributions to the text. In February 2025, during consultations on the second version of the text, the NGO denounced the absence of concrete measures to protect journalism and access to reliable information in the new generative AI ecosystem. The third version of the text does not fill this enormous gap. This omission is all the harder to understand given the growing role of generative AI systems in spreading disinformation.

For RSF, it was clear that the Code's purpose was to address gaps in AI regulation and to oblige AI developers to mitigate the systemic risks their systems pose to the information space. That objective, however, has evidently been abandoned by those drafting the text.

Updated on 02.04.2025