The AI Act’s Code of Practice is Europe’s last opportunity to protect journalism from the tech industry’s interests

Reporters Without Borders (RSF) urges the European Commission to ensure that the General-Purpose Artificial Intelligence (AI) Code of Practice includes specific provisions to protect journalism and reliable information. The NGO is concerned that this regulation may be weakened to benefit the interests of the tech industry, which plays an outsized role in shaping the legislation.

The European Commission released its draft of the General-Purpose AI Code of Practice, the code through which AI providers can demonstrate their compliance with the AI Act, in mid-November. RSF — which is involved in the drafting process — warns that the current version of the text has several weak points regarding the protection of journalistic information. The NGO has put forward proposals to address these gaps.

“The recent explosion of AI-driven disinformation clearly demonstrates that this technology is under-regulated. The General-Purpose AI Code of Practice is one of the last opportunities to safeguard our right to trustworthy information, currently threatened by the short-term interests of a few tech industry players. The European Commission must not allow AI providers — who already have a privileged role in drafting the Code — to continue hijacking the legislative process. RSF demands that the Code include concrete protections for journalism and reliable information.”

Arthur Grimonpont
Head of RSF’s Global Challenges Desk

The risk of weak laws

The tech industry has been making considerable efforts to dilute European AI regulation. In 2023, when the EU AI Act was being prepared, the NGO Corporate Europe Observatory revealed that 78% of the meetings on AI involving senior European Commission officials were with industry representatives. This led to the adoption of weaker legislation in March 2024, which RSF criticised for overlooking serious risks to the circulation of reliable information.

The ongoing drafting of the Code of Practice presents an opportunity to reassess the role of journalism and reliable information in the regulation of general-purpose AI models, the complex algorithms that underpin AI systems such as chatbots and image generators.

The drafting process for the Code will extend until May 2025 and involves nearly 1,000 stakeholders. However, not all participants are on an equal footing: AI providers are invited to drafting workshops, while other stakeholders, such as academics and civil society organisations, are only invited to give feedback on later versions of the text and to participate in working groups — where AI providers are also present.

RSF’s recommendations for strengthening the General-Purpose AI Code of Practice

Here are RSF’s main recommendations for improving the General-Purpose AI Code of Practice:

  • Systemic risk taxonomy: RSF proposes revising the definition of systemic risks related to AI. The NGO advocates reframing the risk of “persuasion and manipulation” in broader, clearer terms, focusing on violations of the right to reliable information and the protection of quality journalism. RSF also recommends adding identity theft targeting public figures — including journalists — to the list of systemic risks, as it directly threatens both the reputation of these individuals and the integrity of democratic processes, such as elections.
  • Risk assessment and mitigation: RSF calls for stronger requirements for evaluating the security of AI models used in systems that play a structural role in the production and dissemination of journalistic information. The NGO stresses the importance of dedicating more human and financial resources to ensure reliable risk assessments, as current AI evaluations are severely limited by the industry’s meagre allocation of resources to security. Additionally, RSF urges strict safeguards to prevent the unauthorised use of AI models in high-risk contexts, particularly regarding access to journalistic information.
  • Copyright: RSF recommends improving transparency around the use of copyrighted works for training AI models, and ensuring more effective means for authors and rights holders — including journalists and media organisations — to negotiate and retain their rights.