Brussels bans mass biometric surveillance and social scoring systems

The European Commission today presented its first regulation governing the use of artificial intelligence, one of the most transformative technologies but also one of those that raises the most fears because of the unethical uses it can be put to. Community authorities want to prevent an Orwellian use of AI by European governments, as is happening in China, where AI is being used to track and identify, for example, the Uyghurs, the Muslim minority persecuted by the country’s authorities. Nor do they want the private sector to make use of it without clear rules.

The regulation, which still has to be approved by the governments of the EU and the European Parliament, a process that could take more than a year, proposes to prohibit the use in public spaces of artificial intelligence systems that allow biometric identification, on the grounds that such technology is “high risk” and violates the EU’s values and fundamental rights.

Even so, a series of exceptions are established: use in public spaces will be allowed when it can help prevent an “imminent” terrorist attack, find a missing minor, or locate, identify and prosecute the perpetrator of, or a suspect in, a serious crime. In these cases, use would always require judicial authorization and be subject to limits on duration and geographical scope.

European authorities also want to ban the use of artificial intelligence for social scoring systems (which determine a person’s reputation based on factors including activity on social media), such as the one China applies to monitor its citizens, as well as systems that use “subliminal techniques” to bypass users’ will “and materially distort a person’s behavior in a way that can cause physical or psychological harm.” The Commission cites as an example toys with voice assistants that could encourage dangerous behavior in minors.

The regulation also proposes special scrutiny of artificial intelligence applications used to screen résumés in hiring processes, to assess and monitor a person’s creditworthiness, or to examine asylum applications, among others. In the Commission’s view, AI systems used for these purposes can perpetuate historical patterns of discrimination, for example in consumer credit against people of certain ethnic or racial origins, or create new forms of discrimination.

“Artificial intelligence offers immense potential in areas as diverse as healthcare, transport, energy, agriculture, tourism or cybersecurity,” but it also presents “a series of risks,” and the proposal seeks to ensure that the EU’s values and rules are respected, said Internal Market Commissioner Thierry Breton.

The EU wants to take the lead in regulating this technology, which critics say may have harmful social effects but which advocates say brings efficiency and economic growth. The Commission wants to repeat what it achieved on data protection with the GDPR and try to set international standards for the artificial intelligence sector.

In this regard, the Executive Vice-President of the Commission, Margrethe Vestager, stressed that “when it comes to artificial intelligence, trust is a must, not a nice-to-have. With these historic rules, the EU is leading the development of new global norms to ensure that AI can be trusted (…) In addition, by setting the standards [for the sector] we can pave the way to ethical technology around the world and ensure the EU remains competitive.” The move comes at a time when China is gaining ground in the artificial intelligence race.

The regulation, which establishes fines for non-compliant companies of up to 6% of global annual revenue or a maximum of 30 million euros (whichever is higher), also proposes transparency obligations for certain artificial intelligence systems, such as chatbots. In that case, the rule states that users must be notified that they are interacting with a conversational bot.

The Commission expressly excludes artificial intelligence systems developed for military purposes from the scope of the regulation.

The authorities also plan to create a European Artificial Intelligence Board, made up of the supervisory authorities of the Member States and the Commission itself, which will be in charge of monitoring compliance with the regulation and will issue recommendations on the use of this technology.
