United States: RSF supports the "COPIED Act" but calls for stronger measures to protect journalistic content
A bill aimed at regulating the use of journalistic content by AI developers has been introduced in the U.S. Senate. Reporters Without Borders (RSF) welcomes this first step towards recognizing media rights in the face of AI but urges lawmakers to address several weaknesses in the text.
On July 11, 2024, in Washington, DC, a bipartisan group of senators introduced a bill to protect journalists and artists against the unauthorized use of their works by AI models. The bill, titled the "Content Origin Protection and Integrity from Edited and Deepfaked Media Act" (COPIED Act), also aims to facilitate the authentication of AI-generated content through the development of appropriate technical standards.
“The bilateral partnerships concluded in recent months between media outlets and AI providers are neither a desirable nor a viable solution. They threaten the independence and pluralism of journalism, as well as the sustainability of outlets excluded from these negotiations. It is essential to develop a protection regime covering all journalistic content, and this bill is a first step in that direction. However, the text needs to be strengthened in several key areas, particularly regarding authenticity standards. RSF calls on American legislators to take our recommendations into account so they can pass a groundbreaking law that safeguards journalistic content as AI evolves.”
Towards better protection of journalistic content in the U.S.
Currently, under the fair use doctrine, journalistic content can be used to train AI models without any permission or compensation. The COPIED Act, supported by several media industry players, including the major trade associations the News/Media Alliance and the National Newspaper Association, would be a significant advancement in recognizing the rights of content owners.
The bill directs the National Institute of Standards and Technology (NIST) to develop guidelines and technical standards for attaching information about the origin of any text, image, audio, or video to that content. AI providers would only be able to use this labeled content with the explicit consent of its owners. This provision acknowledges media outlets’ right to set the legal and financial terms for the reuse of their content, a fundamental principle that RSF is currently advocating for within the European Union (EU).
Insufficient transparency requirements for AI developers
Unlike the EU AI Act, the American bill does not require AI providers to publish a detailed summary of the data used to train their models. Apart from content tagged with origin information under this bill, creators and owners have no reliable way of knowing whether their works were used to train AI models. RSF urges lawmakers to close this gap by extending the protection regime to all journalistic works already covered by copyright and by requiring AI developers to transparently disclose the protected content they have used.
Optional labeling of synthetic content
The COPIED Act requires developers and deployers of AI systems used to generate synthetic content to offer their users the option of attaching information about the content’s origin. The bill prohibits the removal, alteration, or falsification of these origin details.
RSF welcomes the initiative to develop standards certifying the synthetic origin of AI-generated or modified content, but stresses that this labeling must be mandatory, not optional, as it is under the EU AI Act. RSF also believes this provision should be accompanied by criminal penalties for the intentional creation and publication of deepfakes that harm an individual or entity.