After extensive negotiations, the European Union reached a historic agreement to regulate artificial intelligence (AI), a milestone on the global technological scene. European Commissioner Thierry Breton, who has overseen a series of laws on the continent, including those governing social media and search engines, announced the agreement and stressed its historic character. The pact puts the European Union ahead of the United States and Brazil in the race to regulate AI and protect the public from the risks associated with the technology.
Establishing rules to control the use of such systems is complex. AI has been incorporated into science, the financial system, security, health, education, advertising, and entertainment, often without users realizing it. Any country that proposes regulation must balance reducing the risks of misuse, avoiding discrimination against minority groups, and guaranteeing privacy and transparency for users.
Through its Secretary of State for AI, Carme Artigas, Spain played a tie-breaking role in the negotiations, securing support from France and Germany despite concerns from technology companies in those countries, which had pushed for lighter regulation to foster innovation.
An essential aspect of the agreement is the ban on real-time surveillance and biometric technologies, including emotion recognition, with specific exceptions. These technologies may only be used by the police in exceptional situations, such as terrorist threats, searches for victims, or investigations of serious crimes.
The agreement is based on a risk classification system, in which the strictest rules apply to systems that pose the greatest risk to health, safety, and human rights. This definition directly affects models such as OpenAI's GPT-4, which would fall into the highest-risk category.
The agreement also imposes significant obligations on AI services, including ground rules on disclosing the data used to train the systems. The European Parliament and the Commission have sought to ensure that AI development in Europe remains human-centered, respecting fundamental rights and human values.
Brazil’s regulatory framework
Brazil was one of the pioneers in proposing the regulation of artificial intelligence. The Chamber of Deputies began discussing a bill in February 2020, even before ChatGPT revealed the power of the technology and the European Union started its internal debate. However, the country has yet to pass the legislation.
The discussions evolved with the intervention of a commission of jurists, which reformulated the original 2020 bill. The Senate is now deliberating on a new proposal, with Senator Eduardo Gomes (PL-TO) as rapporteur.
However, the pace of innovation in artificial intelligence poses clear challenges to Brazilian legislators. Technological acceleration highlights the need for constant updates to the legislation and even calls for some imaginative thinking about what could happen in a few years. Lawmakers must take into account that AI has opened a field of exponential evolution unlike what was experienced under Moore's Law, the observation that the processing power of computers doubles roughly every 18 months.
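As a rough illustration of the doubling relation cited above (an illustration, not a figure from the article), Moore's Law with an 18-month doubling period can be written as P(t) = P_0 · 2^(t/1.5), where P_0 is today's processing power and t is measured in years; over a decade that implies roughly 2^(10/1.5) ≈ 100 times more computing power, which gives a sense of the growth curves lawmakers are being asked to anticipate.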
The proposal takes a normative approach, establishing guidelines for various AI applications, from credit scoring to facial recognition in public security, the latter of which would be banned.
From a global perspective, Taiwan, which began its discussions in 2019, has not yet consolidated a regulatory framework. The island, home to TSMC, a world leader in chip and semiconductor production and a supplier to Nvidia, has opted for laws that encourage technological development, exempting AI companies from specific regulations and taxes.
China is the only country whose regulatory framework for AI was implemented by its internet regulatory body rather than through legislation. Based on studies by the Cyberspace Administration of China, its rules focus on the morality, ethics, transparency, and accountability of AI platforms.
Countries like Chile, Colombia, Costa Rica, Israel, Mexico, Panama, the Philippines, and Thailand are also developing regulations.
Federal AI legislation is not yet a reality in the United States, where responsibility has so far been delegated to the states. President Joe Biden brought AI industry leaders together in July to discuss the security and reliability of the technology.
Globally, 21 countries have already implemented specific laws for AI, with Chile standing out for combating AI fraud, Sweden for autonomous cars, and Spain for fighting discriminatory bias. In addition, 13 countries have case law related to AI, covering everything from copyright to privacy. Despite being a pioneer in the discussion, Brazil is still not among these nations.
Source: Exame