In his latest book, NEXUS: A Brief History of Information Networks from the Stone Age to AI, celebrated Israeli writer YUVAL NOAH HARARI takes a critical and in-depth look at artificial intelligence (AI) and its implications for humanity.
Several main points of concern stand out, starting with the inequality AI may bring. Harari argues that if AI is developed and deployed in an uncontrolled and unequal way, it can worsen existing social and economic disparities, since the countries and corporations that master the technology will gain a clear edge over the rest of society, concentrating power and wealth.
Another concern Harari addresses is the manipulation of reality and disinformation: AI has the potential to distort reality in intricate ways, fabricating facts and spreading disinformation on a large scale, something that has already been occurring, especially around elections. The Author warns of the danger of a post-truth society, in which the distinction between fact and fiction becomes increasingly difficult to draw.
One of the most interesting and troubling issues raised in the book concerns the relationship between AI and the creation of social credit systems. The Author argues that AI, by processing vast amounts of data about individuals, can be used to build complex and comprehensive social classification systems with enormous potential to, at the very least, embarrass individuals, or, at worst, exclude them entirely.
That is precisely the point made by the Author of NEXUS, inasmuch as AI makes it possible to collect and analyze massive amounts of data about individuals, from consumption habits to biometric data and health information. From this data, AI can build thorough behavioral profiles, making it possible to sort individuals into different categories.
These profiles can be used to build a SOCIAL CREDIT system, where each individual receives a score that reflects their reliability, compliance, and social value.
So-called social credit poses serious risks: the loss of privacy, discrimination, social inequality, social control by the governments and corporations that operate the system, political manipulation – as has already happened, as the Cambridge Analytica scandal in the USA demonstrates –, the loss of individual autonomy, the creation of a caste society, and the strengthening of authoritarianism.
To prevent AI-driven social credit systems from enabling a dystopian future, in which privacy is a luxury, freedom is restricted, and inequality is institutionalized, the European Union's Artificial Intelligence Act, Regulation (EU) No. 2024/1689, known as the AI Act, prohibits systems used for social scoring, that is, for classifying people based on their social behavior, socioeconomic status, or personal characteristics.
It did so on the understanding that such social credit systems could "lead to discriminatory results and the exclusion of certain groups… in addition to violating the right to dignity and non-discrimination and the values of equality and justice," which echoes the concerns raised by the Israeli writer.
Within Brazil's domestic legal framework, there is still no specific law that explicitly prohibits the creation of social credit systems. This legislative gap stems mainly from the fact that the technology behind social credit systems, and its implications, is relatively recent and constantly evolving. Moreover, because technology develops swiftly, creating a specific law for each new technology can be a slow and bureaucratic process.
Nevertheless, several laws and constitutional principles can be invoked to limit or restrict the implementation of social credit systems in Brazil, beginning with the Federal Constitution and its fundamental rights and guarantees, including the rights to Privacy, Equality, Human Dignity, and Freedom.
The General Personal Data Protection Law (LGPD) sets out rules for the processing of personal data, limiting the collection, use, and sharing of information. Any social credit system that processes personal data would have to comply with the LGPD and, needless to say, the Constitution.
The Consumer Protection Code would certainly require the collection and use of personal data for credit purposes to abide by consumer rights, especially its guiding principles, such as the right to information and transparency.
The Civil Rights Framework for the Internet establishes principles for internet governance in Brazil, such as net neutrality and privacy protection.
It is therefore important to note that the absence of a specific law does not mean that social credit systems are permitted in Brazil.
Even so, de lege ferenda, it is necessary to conceive of laws that specifically protect individuals against possible social credit systems, especially given that such systems raise complex issues of ethics, privacy, security, and human rights.
Author: Maurício Aude • email: mauricio.aude@ernetoborges.com.br • Tel.: + 5565 99981 0853