It’s time to think about the potential risks of artificial intelligence now, rather than after something happens. This free online course about AI ethics is a good starting point.
Artificial intelligence (AI) is here, and its benefits are seemingly limitless. There is a flip side, though; there always is. AI experts and others working in the field are concerned that if we do not proceed with caution, some of the strange things predicted in science-fiction movies such as 2001: A Space Odyssey may turn out to be more truth than fiction.
Elon Musk told The New York Times that his experience with AI at Tesla allows him to say with confidence, “We’re headed toward a situation where AI is vastly smarter than humans.” He adds, “That doesn’t mean everything goes to hell in five years. It just means that things get unstable or weird.”
Jonathan Shaw, in his Harvard Magazine article “Artificial Intelligence and Ethics,” suggests the consequences can be much more severe than unstable or weird. He writes about how a self-driving car killed a woman in Tempe, AZ. There was a person behind the wheel, but the car’s autonomous system, its artificial intelligence, was in full control.
“This incident, like others involving interactions between people and AI technologies, raises a host of ethical and proto-legal questions,” continues Shaw. He then asks who is responsible for the pedestrian’s death:
- The person in the driver’s seat?
- The company testing the car’s capabilities?
- The designers of the AI system, or even the manufacturers of its onboard sensory equipment?
In his Stanford Encyclopedia of Philosophy paper “Ethics of Artificial Intelligence and Robotics,” Vincent C. Müller begins by referencing technologies, such as nuclear power, that had (and still have) substantial ethical implications, and what has been done to control each technology’s trajectory. Müller adds, “The ethics of AI and robotics has seen significant press coverage in recent years, which supports related research, but also may end up undermining it: the press often talks as if the issues under discussion were just predictions of what future technology will bring, and as though we already know what would be most ethical and how to achieve that.”
Müller believes current deliberation is missing the point. “The result is a discussion of essentially technical problems that focus on how to achieve a desired outcome,” explains Müller. “Current discussions in policy and industry are also motivated by image and public relations, where the label ‘ethical’ is not much more than the new ‘green,’ perhaps used for ‘ethics washing.’”
Next, Müller states what sounds like a conundrum: “For a problem to qualify as an issue of AI ethics would mean we do not have a ready solution. In this sense, job loss, theft, or killing with AI is not a problem in ethics, but whether these are permissible under certain circumstances is a problem.”
In his report, Müller examines the following areas and how each relates to ethics:
- Privacy and surveillance
- Behavior manipulation
- Bias in AI decision systems
- Human-robot interactions
- Automation and employment
- Autonomous systems
- Artificial moral agents
Time to get educated about AI and ethics
Deciding what is ethical and what’s not is an immensely difficult endeavor to begin with, and introducing technical complexity makes it even more convoluted. With that in mind, we have a couple of choices:
- We can let the powers that be decide what our future will look like.
- We can learn about the ethics of AI and participate in the discussion.
If you are interested, an excellent place to start might be the free online course The Ethics of AI, offered by the University of Helsinki in partnership with “public sector authorities” in Finland, the Netherlands, and the UK.
Anna-Mari Rusanen, a university lecturer in cognitive science at the University of Helsinki and course coordinator, explains why the group developed the course: “In recent years, algorithms have profoundly impacted societies, businesses, and us as individuals. This raises ethical and legal concerns. Although there is a consensus on the importance of ethical evaluation, it is often the case that people do not know what the ethical aspects are, or what questions to ask.”
Rusanen continues, “These questions include how our data is used, who is responsible for decisions made by computers, and whether, say, facial recognition systems are used in a way that acknowledges human rights. In a broader sense, it’s also about how we wish to utilize advancing technical solutions.”
The course, according to Rusanen, provides basic concepts and cognitive tools for people interested in learning more about the societal and ethical aspects of AI. “Given the interdisciplinary background of the team, we were able to handle many of the topics in a multidisciplinary way,” explains Rusanen. “We combine computer science, cognitive science, social sciences, and psychology with philosophy to create a way to look at the complex issues created by AI.”
The course is open to anyone and does not require coding skills or particular technological expertise, but Rusanen recommends familiarity with basic AI concepts. A good starting point might be the University of Helsinki’s free online course, Elements of AI, which provides a diverse overview of the principles of artificial intelligence.
Final thoughts
Müller pulls no punches in the conclusion of his Stanford report. “AI and robotics have raised fundamental questions about what we should do with these systems, what the systems themselves should do, and what risks they have in the long term. They also challenge the human view of humanity as the intelligent and dominant species on Earth,” says Müller.