Tech giants commit to identifying AI-generated content - at last, a step forward for the right to online information

The joint undertaking by several tech giants to identify AI-generated content on their platforms is a necessary and positive step, Reporters Without Borders (RSF) says, but it urges them to go further by ensuring the traceability of all online content and by helping to promote the visibility of reliable content.

The announcement by Google, Microsoft, TikTok, Meta and other tech giants on 16 February is a significant step forward, and one that RSF had been calling for. In a year in which half of the planet's population will be called on to vote, these companies announced that they had signed a collective pledge to take measures to combat the risks that artificial intelligence (AI) poses to elections.

In particular, these companies have undertaken to identify content generated by AI. Meta led the way by giving a very clear public commitment on 6 February that it would label AI-generated images that users post on its social media – Facebook, Instagram and Threads – in the coming months.

To do this, the companies are going to use several standards for authenticating content that are already integrated into image-generating AI systems such as those developed by OpenAI, Adobe, Midjourney and Google. In addition, they say they are continuing to look for efficient ways to identify audio and video content.
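
By way of illustration, here is what the simplest possible provenance check might look like. This is a minimal Python sketch, not any platform's actual implementation: it assumes the generating tool embedded the IPTC "digital source type" term trainedAlgorithmicMedia (the label the IPTC standard defines for media created entirely by AI) in the image's metadata, and simply scans the raw file for it. A real verifier would parse and cryptographically validate a full C2PA manifest, and metadata of this kind can in any case be stripped or forged.

    from pathlib import Path

    # IPTC's standard term for media created entirely by generative AI.
    # XMP/C2PA-aware generators embed it in the image's metadata.
    AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"

    def looks_ai_generated(path: str) -> bool:
        """Naively scan a file's raw bytes for the AI-source label.
        A miss proves nothing: metadata is easily stripped or forged."""
        return AI_SOURCE_MARKER in Path(path).read_bytes()

    print(looks_ai_generated("example.jpg"))  # True if the label is present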

RSF welcomes these undertakings, which contribute to the public's right to information by guaranteeing better access to the sources of the content produced. But it calls on the leading platforms now to go further, and also to specify in the metadata whether content, particularly photographic content, was taken by a camera or a smartphone and, above all, whether it came from a media outlet that credited its source. This is vital information for the public.
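
To give a concrete idea of what such camera metadata looks like today, here is a minimal Python sketch using the Pillow imaging library to read the basic EXIF fields that a camera or smartphone writes into a photo. The choice of fields is purely illustrative, and the absence of EXIF data proves nothing on its own, since metadata is routinely stripped when images are re-shared.

    from PIL import Image, ExifTags

    def camera_provenance(path: str) -> dict:
        """Return the basic EXIF fields, if any, that indicate a photo
        came from a physical camera or smartphone."""
        exif = Image.open(path).getexif()
        wanted = {"Make", "Model", "Software", "DateTime"}
        return {ExifTags.TAGS[tag]: value
                for tag, value in exif.items()
                if ExifTags.TAGS.get(tag) in wanted}

    # e.g. {'Make': 'Apple', 'Model': 'iPhone 14', 'DateTime': '2024:02:16 ...'}
    print(camera_provenance("photo.jpg"))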

“Generative AI is leading the online information arena towards an era of suspicion, one in which being able to identify the source of content has become a major issue. The undertaking by platforms to identify AI-generated content is a major step forward for the public's right to reliable news and information. To take full advantage of these authentication standards, the next step is to make it just as easy to be informed of the source of content that is not AI-generated, and of content that comes from a journalistic news source. RSF calls on the platforms to generalise the use of these technologies and so facilitate the traceability of online content.”

Vincent Berthier

Head of RSF’s Tech Desk

Given the influence over access to information exercised by the platforms that signed this agreement, they clearly have an essential role to play in enabling the traceability of content across all of their online arenas.

As for the media themselves, the traceability of their content is the sixth principle of the RSF-initiated Paris Charter on AI and Journalism, which states: “Media outlets should, whenever possible, use state-of-the-art tools that guarantee the authenticity and provenance of published content, providing reliable details about its origin and any subsequent changes it may have undergone. Any content not meeting these authenticity standards should be regarded as potentially misleading and should undergo thorough verification.”

This offers a two-fold advantage. It guarantees transparency for the public about how information is produced, and provides journalists with better protection against the growing use of deepfakes. The British media reported on 7 February that a deepfake version of TV news presenter Mary Nightingale had been used in a bogus TV news report made for advertising purposes.

Updated on 22.02.2024