Study: Self-driving cars could soon make human-like moral decisions
It may soon be possible for self-driving cars to make life-and-death decisions just like humans. Until now, it was assumed that such decisions could not be captured in a mathematical model or algorithm, because they are heavily context-driven. However, scientists at the University of Osnabrück in Germany have developed a mathematical model of morality for future robots.
How did scientists manage this?
Researchers conducted a study in which participants drove a simulated car through a typical suburban neighbourhood on a foggy day. Participants then faced sudden, unavoidable dilemmas involving inanimate objects, animals and humans, and had to decide who or what should be spared. The results were analyzed and conceptualized into statistical models that could be used to program machines to make moral decisions.
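As a rough illustration of how such choice data might be turned into a statistical model, the sketch below fits a logistic regression that predicts which of two obstacles a driver chooses to hit, based on the obstacle categories involved. The trial data, feature encoding and use of scikit-learn are illustrative assumptions, not the researchers' actual analysis pipeline.

```python
# Illustrative sketch only: the trials below are made-up data, not
# results from the Osnabrück study.
import numpy as np
from sklearn.linear_model import LogisticRegression

CATEGORIES = ["human", "animal", "object"]

# Hypothetical dilemma trials: (left obstacle, right obstacle, 1 if the
# driver steered into the left obstacle, 0 if into the right one).
trials = [
    ("human", "animal", 0),
    ("animal", "object", 0),
    ("human", "object", 0),
    ("object", "animal", 1),
    ("animal", "human", 1),
    ("object", "human", 1),
]

def features(left, right):
    """Encode a trial as the difference of one-hot category vectors."""
    f = np.zeros(len(CATEGORIES))
    f[CATEGORIES.index(left)] += 1.0
    f[CATEGORIES.index(right)] -= 1.0
    return f

X = np.array([features(left, right) for left, right, _ in trials])
y = np.array([hit_left for _, _, hit_left in trials])

model = LogisticRegression().fit(X, y)
# A more negative coefficient means that category is spared more often,
# i.e. it carries a higher implicit "value of life".
print(dict(zip(CATEGORIES, model.coef_[0])))
```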
'Value-of-life-based model' dictates how to deal with unforeseen situations
"Human behaviour in dilemma situations can be modelled by a rather simple value-of-life-based model," said Leon Sutfeld from the University of Osnabruck. He said participants assigned a certain value to humans, animals, or inanimate objects which dictates how they will deal with an accident-related situations.
Need for defined rules to govern the actions of robots
Researchers said that since robots will soon be able to make human-like moral decisions, it is important for society to debate and define certain rules. For example, should robots imitate human moral decisions, or should they follow fixed ethical theories? Without such rules, machines would end up making decisions beyond human control. Erm... Terminator anyone?