Artificial intelligence is already present in many aspects of our daily lives, some of them very significant, and if it is not used correctly it can have undesirable consequences: for example, the perpetuation of stereotypes or harmful social practices, such as discrimination against minorities, races or genders; the opacity of some of its algorithms; or the excessive autonomy given to these systems. This is the view of Idoia Salazar, president of OdiseIA, the observatory of the social and ethical impact of artificial intelligence, which yesterday presented in Madrid, together with PwC, Google, Microsoft, IBM and Telefónica, the first guide to good practices for the use of artificial intelligence developed in Spain. It was produced with the collaboration of the Secretary of State for Digitization and Artificial Intelligence.
Said guide, as its authors pointed out, is the first result of an initiative that aims to generate an ecosystem where any organization can join to share and learn about the best practices in the use of AI in accordance with ethical principles and regulatory precepts. The 232-page document has been prepared by a multidisciplinary team of more than 30 professionals (made up of technologists, lawyers and experts in different fields from the aforementioned companies) and includes a detailed study and legal analysis of the ethical principles that apply to artificial intelligence, based on the analysis of 27 initiatives around the world. In addition, these concepts are grounded in the day-to-day activities of companies, including the technologies, tools and recommendations from Google, Microsoft and IBM, as well as Telefónica’s experience in this area.
The guide comes at a key moment, after the European Commission presented in April 2021 its proposal for an EU regulatory framework on the use of artificial intelligence (AI Act), a bill that aims to establish horizontal regulation of AI and with which, once it is approved, all companies will have to comply in less than two years.
It also comes after UNESCO published a report on the ethics of AI in November 2021, which called for the creation of policies and regulatory frameworks to ensure that these emerging technologies benefit humanity as a whole. And after the Government published the National Artificial Intelligence Strategy (ENIA) in December 2020, devoting one of its six axes entirely to the need for ethical and normative artificial intelligence.
According to the promoters of the guide, the need to be prepared for when the obligations established by law come into force is already apparent, and this document can help companies achieve that readiness.
“The ethical and responsible use of artificial intelligence is an obligation for any company, and in the coming years, with the application of specific regulations, there will be no possibility of not paying attention to this issue. We believe that the contribution of the guide is essential to generate community and to articulate a space in which we all contribute from different points of view to the development of a solid framework supported by the use of good practices”, indicated Armando Martínez Polo, partner responsible for technology at PwC Spain.
As pointed out by OdiseIA, a non-profit organization, and PwC, the guide will be a dynamic document, and the next step will be to apply it to the different sectors of economic activity. A second phase of the project has thus already begun, adapting it to ten business sectors with the help of more than 50 companies, starting with insurance, advertising and health.
For Juan Manuel Belloto, director of OdiseIA and responsible for this initiative, companies, just as they have codes of conduct for their employees, have a responsibility to ensure that their AI is developed under ethical principles. “As in society, ethics must be accompanied by legislation, and for the first time the legislation is not lagging behind, a sign of the importance attached to artificial intelligence.” Another sign of the importance of this challenge, he added, is seeing how companies that compete in many businesses have come together in this project to address it jointly.
Richard Benjamins, co-founder of OdiseIA, pointed out that there are four main challenges that companies face when implementing ethical artificial intelligence: “Many organizations do not have this issue in mind, because they only think about the business opportunities that this technology offers them; it is necessary to bridge silos within companies and work as a team; it is necessary to choose well which principles are appropriate to apply for your sector, because it is not the same if you are an organization in the health, financial or industrial field; and it is complex to put responsible use of AI into practice. You have to do a lot of training and have the right tools.”
The guide takes a very practical approach to the ethical principles applicable to AI, according to Idoia Salazar: privacy, security, transparency, explainability, responsibility, justice, human rights, and environmental sustainability. PwC highlighted that regulations are still lacking in relation to transparency and explainability (which are the basis of trust in AI). Likewise, progress must also be made in terms of responsibility and accountability, because artificial intelligence is at a very early stage “and we must know who is responsible for each of its actions: the developer, the person who devised the AI, or the one who put it into operation.”
Juan Murillo, area director of OdiseIA, warned, however, that the implications of using algorithms vary according to the field of application. “It is not the same to use one to optimize distribution routes or waste collection as to prioritize waiting lists in the health field. In the second case, it affects fundamental rights and there may be deaths if an operation is delayed too long. We must always find the point of balance between the good we pursue and the cost it entails. It’s pure risk management,” he said.
Murillo drew a comparison, pointing out that manufacturing vehicles with airbags is more expensive than doing so without them, “but we all appreciate our cars having them because it is safer.” It is also proven, he said, that driving a vehicle while wearing a helmet increases safety; however, requiring this measure would have social costs, because it would be uncomfortable and people would rebel. For this reason, he added, a balance must be found between the increase in safety brought by a new regulation and the additional costs it implies. “Developing algorithms that are well documented, unbiased and explainable has a cost, because it takes longer to develop them and put them into production, so demanding this of all the algorithms that are developed would be inefficient and would hamper innovation,” he said.
Patricia Manta, from PwC, argued, for her part, that artificial intelligence has made people and companies more aware of many biases that we would not have discovered without its use. She gave the example of discrimination in the granting of loans, discovered by evaluating the AI used for that purpose. “All humans have biases, so to the extent that we are able to create artificial intelligence solutions that are proven to be efficient in that sense, it would be reasonable to heed the recommendations those solutions make to us rather than those of humans. Well conceived, designed and used, artificial intelligence can be a very good tool to surface injustices and risks and not repeat them,” she stressed.