The newly elected leftist government of Lula in Brazil is developing the country's first law to regulate artificial intelligence. In December 2022, a Senate panel presented a report containing studies on AI regulation, along with a draft AI bill.
The main aims of the legislation are to safeguard the rights of individuals affected by AI systems, categorise the level of risk associated with these systems, and establish governance measures for companies that provide or operate AI systems.
The draft shares similarities with the European Union's (EU) draft AI Act. The definition of AI systems in the Brazilian draft closely aligns with the European Commission's draft definition. Like the AI Act, the draft proposes risk categories and corresponding obligations. Prohibited AI systems include those that exploit the vulnerabilities of specific groups of individuals with the intention of harming their health or safety. Social scoring by public entities and the use of biometric identification systems in publicly accessible spaces are also prohibited, except when explicitly authorised by specific laws or court orders, such as for criminal investigations.
The Brazilian draft, like the AI Act, identifies high-risk systems, i.e. those whose use can affect fundamental rights. These include AI systems used in critical infrastructure, education and vocational training, recruitment, autonomous vehicles, and biometric identification. The list of high-risk systems can be adjusted by a designated authority, and such systems will be publicly listed in a database.
The draft grants data subjects rights against providers and users of AI systems, regardless of the risk level. These rights include: access to information about their interactions with AI systems; an explanation of decisions made by AI systems within 15 days of a request; the ability to challenge decisions that significantly affect their interests or have legal effects; human intervention in decisions made solely by AI systems; non-discrimination and correction of biased outcomes; and privacy and protection of personal data.
Governance measures are also addressed in the draft, akin to the AI Act. Providers and users of AI systems are required to establish internal structures and processes that ensure the safety of AI systems. High-risk AI systems necessitate more stringent measures, such as conducting publicly available AI impact assessments, which may need to be periodically repeated.
Additionally, the draft includes provisions on reporting serious security incidents to the competent authority, as well as rules on civil liability. As with the GDPR, the penalties for non-compliance vary depending on the violation, but can include fines of up to 50 million Brazilian reais (approximately 9 million euros) or up to 2% of a company's turnover.