Paris AI Summit: RSF releases seven recommendations to ensure AI declaration translates into concrete action

The final declaration of the Paris AI Action Summit, titled “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet,” gathered 62 signatories. Reporters Without Borders (RSF) is pleased that the final version of this text includes a reference to information integrity — but warns that the declaration stops short of committing to concrete measures.
With 32 more signatories than the Bletchley Declaration on AI — signed in 2023 after the UK AI Safety Summit — the Paris declaration could have served as a blueprint for AI governance that respects the right to information. The inclusion of risks to information integrity — a concept entirely absent from the declaration made at the 2024 AI Seoul Summit — shows an increased awareness of risks posed by AI, particularly the spread of disinformation. Yet the final declaration falls short by failing to offer practical steps to regulate the industry, and the technical solutions proposed overlook the urgent need for robust regulation.
“It is essential that the Paris AI declaration acts as more than just a principled petition. The final text affirms the signatory states’ willingness to ‘address the risks’ AI poses to the integrity of information while encouraging innovation — but the AI Action Summit must yield concrete measures for stronger regulation. In addition, the media must participate in AI governance and responsibly leverage tools that advance journalism while respecting ethical standards.”
Throughout the summit, numerous news reports stressed the urgent need for stricter regulation to ensure that AI is safely deployed in the information space. A study by the British Broadcasting Corporation (BBC) showed that chatbots are incapable of accurately summarising news articles. In France, over 40 media outlets have taken legal action to block an AI-operated website that reportedly publishes over 6,000 articles stolen from French publishers every day, according to a joint investigation by the daily Libération and the news site Next. Abundant evidence demonstrates the range of risks the generative AI industry poses to journalism.
The seven RSF policy recommendations to ensure the AI industry respects journalism and the right to reliable information
Hold AI developers accountable
- AI service developers must be held accountable for any harm their products cause to citizens’ right to reliable information or to journalists’ ability to work freely. In this regard, the European Union’s removal of its AI liability directive sends a deeply regrettable signal.
- Developers using data produced by media outlets must obtain consent and compensate both creators of journalistic content and press publishers. They should also negotiate collectively with media outlets and allow creators to opt in or out of having their work used.
Preserve journalism within AI systems
- AI systems designed to produce or disseminate information must rely on reputable journalistic sources and reproduce their content without compromising its integrity. These systems should link back to the original sources of the content they display and include mechanisms to flag errors in the AI’s summary. If a technology — such as Apple Intelligence — does not meet these standards, it should not be allowed on the market for this use.
- These AI systems must also promote a pluralistic, diverse presentation of information by selecting sources based on objective criteria for quality and independence, such as the standards set by the RSF Journalism Trust Initiative.
Regulate deepfakes
- A strict legal framework is needed to combat the proliferation of harmful deepfakes, with criminal penalties for the intentional publication of falsified content intended to manipulate information or undermine the credibility of journalists or media outlets.
- Platforms that distribute news content should be required to prioritize authentic material in their recommendation algorithms and trace the origin of AI-generated content. To this end, governments should encourage — and, if necessary, subsidise — the integration of authentication standards in AI tools for journalism, similar to the recent successful trial by Agence France-Presse (AFP) certifying the origin of its photos.
Foster innovative initiatives to protect the right to reliable information
- Public funds should be invested in new projects that uphold the right to reliable information, to overcome the challenges facing democratic societies. The media industry must be a part of this process, and public funds should be allocated to developing ethical technologies that respect journalistic values, as seen with the Spinoza Project, an AI tool for journalists developed by RSF and the French media alliance l’Alliance de la presse d’information générale (l’Alliance).