Standard tools for distinguishing “deepfakes” from real content must be created quickly, says RSF

The development of artificial intelligence has made it possible to quickly create extremely convincing fake representations of well-known persons or journalists. To prevent such “deepfakes” from being used to mislead the public and manipulate the information arena, Reporters Without Borders (RSF) says the development of tools for systematically recognising AI-generated content must be speeded up.

In September, AI was used to synthesise the voice of Slovak journalist Monika Tódová in a fake audio clip that created the illusion of a conversation in which she and the leader of the Progressive Slovakia party were organising electoral fraud. The public had no way to clearly identify the recording as an AI-produced deepfake circulated for the purpose of political destabilisation.

In RSF’s view, there is an urgent need to be able to distinguish AI-generated synthetic products from authentic content. RSF is therefore encouraging the development of international technical standards that operate at the level of content metadata, that is, the information about a piece of content’s creation that travels with it.
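To make this concrete, here is a minimal sketch, in Python, of the kind of creation record such metadata could carry. The field names are loosely modelled on the “actions” assertion of the C2PA standard discussed below, and are simplified for illustration rather than a definitive schema; the tool name shown is hypothetical.

    # Illustrative provenance record that could accompany an image file.
    # Field names are simplified and loosely modelled on C2PA assertions;
    # this is a sketch, not the exact schema used by any tool.
    provenance_record = {
        "claim_generator": "ExampleNewsroomCMS/1.0",  # hypothetical publishing tool
        "assertions": [
            {
                "label": "c2pa.actions",
                "data": {
                    "actions": [
                        {
                            # Declares the asset was produced by a generative
                            # model rather than captured by a camera.
                            "action": "c2pa.created",
                            "digitalSourceType": (
                                "http://cv.iptc.org/newscodes/digitalsourcetype/"
                                "trainedAlgorithmicMedia"
                            ),
                        }
                    ]
                },
            }
        ],
    }

Because a real manifest of this kind is cryptographically signed, altering such a record after the fact would invalidate the signature, which is what makes the approach tamper-evident.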

“Deepfakes poison reliable news and information. Used in the public information arena without safeguards, they not only deceive the public but also undermine its trust in online news reporting. In RSF’s view, technical standards must be developed and widely enforced that make it possible to identify whether content, a photograph, for example, has been created or modified by AI or whether it is authentic. This information is fundamental to ensuring that false AI-generated images do not contaminate representations of reality.”

Vincent Berthier

Head of RSF’s Tech Desk

Governments must play a role

Non-governmental initiatives have already been launched with the aim of developing such standards. They include those by the company OpenOrigins and by the Coalition for Content Provenance and Authenticity (C2PA), which has brought together media outlets such as the BBC and tech companies such as Adobe and Microsoft. But none of these techniques has so far been universally adopted. Democratic governments must encourage the deployment of international, interoperable standards and require their integration into AI systems.
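As an illustration of how such provenance metadata can be read by a machine, the sketch below, in Python, heuristically checks whether a JPEG file carries a C2PA manifest. It assumes the embedding described in the C2PA specification, where the manifest store travels as JUMBF boxes inside JPEG APP11 segments, and it only detects the manifest’s presence; it is a sketch, not a verifier.

    import struct
    import sys

    def has_c2pa_manifest(path: str) -> bool:
        # Heuristic: C2PA embeds its manifest store as JUMBF boxes inside
        # JPEG APP11 (0xFFEB) segments, labelled "c2pa". We only look for
        # that label; we do NOT verify the manifest's signatures.
        with open(path, "rb") as f:
            data = f.read()
        if data[:2] != b"\xff\xd8":        # missing SOI marker: not a JPEG
            return False
        i = 2
        while i + 4 <= len(data):
            if data[i] != 0xFF:            # lost marker sync; give up
                break
            marker = data[i + 1]
            if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
                i += 2                     # standalone markers have no length
                continue
            if marker == 0xDA:             # SOS: APP segments are all behind us
                break
            (seg_len,) = struct.unpack(">H", data[i + 2:i + 4])
            segment = data[i + 4:i + 2 + seg_len]
            if marker == 0xEB and b"c2pa" in segment:
                return True
            i += 2 + seg_len
        return False

    if __name__ == "__main__":
        print(has_c2pa_manifest(sys.argv[1]))

A full implementation would go further and validate the manifest’s signature and certificate chain before trusting any of its claims, which is precisely the work the standards bodies named above are trying to make interoperable.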

In Europe, the proposed AI Act offers the ideal framework for ensuring that AI is put at the service of the right to information. This proposed legislation should treat AI systems that produce deepfakes as high-risk and should require AI designers to integrate standards for authenticating the content generated by their systems.

The deployment of these standards would enable all those who play a role in processing and circulating information, including journalists, to assess each piece of content at its true value. This principle must also apply to large platforms. There should be no question of allowing deepfakes to circulate when they threaten access to reliable news and information.

Updated on 21.11.2023