Virtual assistants, employee screening processes, insurance underwriting: artificial intelligence (AI) applications are part of citizens' everyday lives, and advances are usually accompanied by warnings about possible misuse of the technology. The European Union took a step forward a year ago when it presented a proposal for a regulation, the first of its kind in the world, that divides AI technologies into four categories according to the risk they may pose to citizens. But some experts point out that, as currently drafted, it could leave complex applications outside the regulation: health, autonomous cars and weapons, among others.

The EU is debating the final details of its AI regulation, which could be ready in 2023. The regulation is “unique in the world” because of its characteristics, although it leaves important aspects in the shadows, says Lucía Ortiz de Zárate, researcher in Ethics and Governance of Artificial Intelligence at the Autonomous University of Madrid. Together with the Fundación Alternativas, Ortiz de Zárate has submitted comments on the Commission’s proposal. Some of them have been incorporated into the latest version; others have not.

The researcher notes the absence of sensitive sectors from the most closely watched categories of artificial intelligence, as in the case of health. “There is a long list of applications and healthcare does not appear in any of them,” she says. The text only states that the regulation will ensure that these technologies “do not pose a risk to health.” Applications that use, for example, health, public health or medical data are not covered.

Another well-known use of artificial intelligence, autonomous cars, is also absent from the regulation, even though it would call for more transparency and greater control by the authorities. Ortiz de Zárate acknowledges that there is a “complex debate” over how much should be regulated and how much room should be left for innovation to allow progress. Nor does the regulation mention autonomous weapons, whose operation is also based on artificial intelligence.

One of the comments on the European regulation that has been taken into account concerns the definition of artificial intelligence, explains the researcher from the Autonomous University. At one point the concept of AI left out hardware and focused only on software. “It was very dangerous to leave it out, because many bias issues stem from the hardware.” Ortiz de Zárate cites as examples voice assistants such as Siri and Alexa, or robots, which have a physical form and are “sources of discrimination because they perpetuate stereotypes that are harmful to women,” such as the “caregiving role.” Following the comments, the text has re-included hardware within the definition. The concept of artificial intelligence “does not depend so much on the techniques used as on the functionalities,” she says.

In addition, the researcher believes that the rules for low-risk applications should go beyond a mere code of conduct. “These are the applications that citizens use the most,” so an effort should be made to eliminate biases, such as gender bias, “which seem minor but are frequent.” If left unchecked, they can become a channel through which stereotypes keep slipping through.

Risk-based approach, with four levels

  • Inadmissible. The artificial intelligence regulation will prohibit a limited set of “particularly harmful” uses, that is, those that contravene the values of the Union by violating fundamental rights. Examples include social scoring by governments (a practice carried out in China) or the exploitation of children’s vulnerabilities for commercial purposes.
  • High risk. These are problematic applications that could potentially violate human rights, but whose use is considered justified in certain situations. The EU permits them under very strict requirements. Among others, this group includes biometric identification, migration and border management, and applications related to water and gas supply.
  • Limited risk. This group includes AI systems that, without posing a high risk to citizens, are subject to specific transparency obligations when there is a clear risk of manipulation, particularly through the use of conversational bots.
  • Low risk. All other AI systems can be developed “under applicable law” without additional obligations. Lucía Ortiz de Zárate focuses on this group, which “should have a little more regulation.” For example, sanctions should be included for repeat offenders or in the case of “offensive” representations.
