EU’s Artificial Intelligence Act must safeguard right to reliable news and information, says RSF

Reporters Without Borders (RSF) calls on the European Union to strengthen safeguards for the right to reliable news and information in its forthcoming Artificial Intelligence Act (AI Act), now in the final stage of negotiation.

The European Commission, European Parliament and Council of the European Union are due to meet today (24 October) for trilogue negotiations on the AI Act's final form.

The challenge is historic. The three parties must agree on an initial regulatory framework for artificial intelligence systems, with the aim of ensuring that these systems serve society rather than harm it. To this end, the AI Act classifies AI systems according to the risks they could pose to democracy.

But there is a flaw. AI systems producing or distributing news and information are not currently considered “high risk.” If they were, they would be subject to stricter requirements before they could be placed on the market. This would provide the European public’s right to reliable news and information with stronger safeguards.

“RSF supports the goal of the classification proposed in the AI Act, but we urge negotiators to go further. All systems intended to produce news and information, or to interfere with the flow of news and information, should be considered 'high risk' and should be subject to stringent evaluation criteria before they can be placed on the market.”

Vincent Berthier

Head of RSF’s Tech Desk

 

Recommendations for an AI sector that respects the right to information

To ensure that artificial intelligence systems provide greater safeguards for the European public’s right to reliable news and information, RSF proposes incorporating the following elements into the AI Act: 

  • The databases used to train algorithms must respect the requirements of pluralism and accuracy and must not include content that is false, misleading or deceptive, or constitutes propaganda.
  • Content generated by large language models still in the supervised training phase must be verified by media and information professionals instead of simply being evaluated on the basis of its plausibility.
  • Chatbots used by the general public or professionals to obtain information must be programmed not to answer questions for which they have no answer. They must also systematically invite users to consult the sources used to produce their content.
  • The content produced by chatbots used to obtain information must be based on sufficiently diverse sources to guarantee pluralism.