RSF is concerned about the Russian media regulator’s experiments with using AI for censorship

Concerned by reports that Roskomnadzor, the Russian telecommunications and mass media regulator, is experimenting with the use of artificial intelligence (AI) in controlling and censoring online information, Reporters Without Borders (RSF) reiterates its call for AI use to be regulated.

According to the independent investigative website Istories and the Russian pro-government business daily Kommersant, Roskomnadzor is conducting tests with AI to see how it can refine its control of news and information on Runet (the Russian Internet).

Confirming the reports to the Russian state news agency TASS, Roskomnadzor said the aim of the experiments was to see what role “neural networks” and large language models such as ChatGPT could play in controlling information that the Russian authorities regard as “illegal.”

RSF is concerned that these experiments could open the way to new online censorship methods that are reinforced by AI. There is an urgent need to regulate the commercialization of AI technologies so that they are not used to stifle the free flow of reliable news and information, RSF says.

“Roskomnadzor’s censorship is already very aggressive. Assisted by AI, it could become unstoppable. These experiments should be seen as an additional threat to the public’s access in Russia to freely and independently reported information, which is already only possible by roundabout means. Whether in Russia or elsewhere, AI use must be regulated so that it cannot be exploited by press freedom predators.”

Vincent Berthier
Head of RSF’s Tech Desk

The current performance of AI systems suggests that their use by censoring governments could lead to censorship techniques far more sophisticated than today’s systems, which simply search the Internet and social media for prohibited keywords.

Capable of analysing texts, images, videos and metadata, these AI technologies could, for example, enable regulators to identify content that has been deliberately crafted to evade censorship, whether through associations of ideas or the use of a particular vocabulary register, and then suppress it.
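To make that gap concrete, here is a deliberately simplified Python sketch. Everything in it is invented for illustration: the keyword list, the sample post and the synonym table are hypothetical, and the lookup table merely stands in for what would in practice be a trained language model. It shows why a plain keyword filter misses a paraphrase while a meaning-aware check can still catch it.

```python
# Illustrative sketch only: contrasts naive keyword blocking with a
# (hypothetical) meaning-based check. All terms below are invented.

BLOCKED_KEYWORDS = {"protest", "mobilisation"}

def keyword_filter(text: str) -> bool:
    """Flags text only if a blocked keyword appears literally."""
    words = set(text.lower().split())
    return bool(words & BLOCKED_KEYWORDS)

# A toy synonym table stands in for a language model that maps
# paraphrases and euphemisms back to the underlying idea.
SYNONYMS = {
    "rally": "protest",
    "gathering": "protest",
    "call-up": "mobilisation",
}

def semantic_filter(text: str) -> bool:
    """Flags text whose words map back to a blocked idea."""
    normalised = {SYNONYMS.get(w, w) for w in text.lower().split()}
    return bool(normalised & BLOCKED_KEYWORDS)

post = "join the gathering on saturday"
print(keyword_filter(post))   # False: no blocked keyword appears literally
print(semantic_filter(post))  # True: the paraphrase maps back to a blocked idea
```

The point of the sketch is only the asymmetry it demonstrates: rewording defeats keyword lists, but not a system that reasons about meaning, which is why AI-assisted censorship would be harder to evade.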

RSF calls for concrete action with regard to the commercialization of AI systems, which currently provide no safeguards for the right to information. To prevent the use of AI for censorship, democratic governments must require the designers of AI systems to exercise the utmost due diligence in identifying state-sector clients, auditing their use of the AI technology provided, and terminating their access if abuses are detected.

Updated on 27.05.2024