Brazilian AI bill: to pioneer information rights, the Senate must stand firm against tech companies

Instead of approving a groundbreaking bill to protect the information landscape in the age of artificial intelligence (AI), the Brazilian Senate has changed course, opting for a consultation phase dominated by private sector stakeholders. Reporters Without Borders (RSF) calls on the upper house of Congress to exercise the utmost vigilance as the bill – which would be the first in the world to recognize the vital role of the right to information in AI regulation – must be strengthened, not weakened, and passed without delay. 

In mid-June 2024, the Brazilian Senate was poised to vote on a bill establishing rules to protect the right to information in the development and use of AI systems. The vote was ultimately postponed to July so that the Chamber of Deputies could gather opinions from various organisations, notably private sector entities that have already been consulted, such as Meta, Google, Amazon and Microsoft. RSF fears this last-minute consultation could weaken the concrete guarantees in the current version of the bill that protect the right to reliable information.

"With this bill, Brazil would be the first country in the world to recognize the importance of the right to information in AI regulation. This issue, essential for the proper functioning of our democracies, is too often neglected – and sometimes deliberately omitted – in initiatives aimed at regulating this technology. We urge the Brazilian Senate to resist corporate pressure and strengthen the guarantees to protect the informational space.

Arthur Grimonpont
Head of the AI and Global Challenges Desk at RSF

A Bill Enshrining the Right to Information

Several articles in the current text align with RSF's recommendations on AI and information, and must be maintained:

  • AI and the right to information: As recommended by RSF, the current version of the text classifies as high-risk the AI systems that play a central role in "the production, curation, dissemination, recommendation, and distribution of large-scale and significantly automated content," subjecting them to strict requirements. It also expands the high-risk category to include other systems that could threaten the quality of online information and thereby degrade democracy and pluralism. Maintaining these guarantees is especially crucial given the absence of Brazilian laws governing online information.
  • Requirements for general-purpose AI models: The designers of these systems – such as OpenAI, the creator of ChatGPT – would have to demonstrate, through tests and analyses, "the identification, reduction, and mitigation of reasonably foreseeable risks to fundamental rights," as well as to "information integrity, the democratic process, and the spread of misinformation."
  • Rights and remuneration for content creators: The current version of the bill requires AI developers to disclose the data sources used to train their models and to respect copyright laws for any commercial use, including content that is not an exact reproduction. Intellectual property owners would be free to oppose or consent to the use of their work and, if applicable, negotiate their remuneration individually or collectively with AI developers. RSF advocates prioritising collective negotiation, which helps ensure fair compensation for content creators.
     

A Text with Room for Improvement

While the Brazilian AI bill already represents a significant advance in bolstering the right to information, RSF recommends incorporating the following principles to further combat mis- and disinformation:

  • Impose a requirement to amplify reliable sources, identified as such by certified standards like the Journalism Trust Initiative (JTI), on any AI model that plays a role in disseminating information;
  • Create a liability framework for the creation and dissemination of deepfakes that applies to both companies developing AI systems and their users;
  • Ensure the independent evaluation of high-risk and general-purpose AI systems, particularly those playing a central role in information dissemination.

 
