Researchers publish results of The Moral Machine
If a self-driving vehicle is travelling along a road and something – a child, an adult or an animal – suddenly steps out in front of it, what should it do? Should it swerve to avoid the pedestrian or animal, thereby injuring or killing its passengers, or should it protect its passengers and harm or kill whoever is in its path? This ethical dilemma is a central challenge for manufacturers of self-driving vehicles and for policymakers.
This is why researchers at the Massachusetts Institute of Technology (MIT) conducted an experiment (the Moral Machine experiment) on the choices people make in such situations. The results, published in the journal Nature, reveal that regardless of respondents' country or demographic group, there is a preference for saving human life over animal life. Additionally, people would save a larger number of people rather than fewer, and younger rather than older lives. The four most spared characters in the game were a baby, a little girl, a little boy and a pregnant woman.
The results may seem intuitive and obvious, but implementing these preferences in autonomous-driving software is delicate. To compare the value of human lives based on certain attributes (gender, age, social status) in the shortest possible time, the software must rely on hard boundaries drawn around those attributes. For example, it must define whether a person counts as "old" or whether they appear to need help, which is no easy task on a global scale.
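The difficulty can be illustrated with a minimal sketch. Any software acting on such preferences must hard-code a cutoff somewhere; the threshold and function names below are purely illustrative assumptions, not values or code from the study.

```python
# Hypothetical sketch: a rule-based boundary around the "old" attribute.
# The cutoff value is an arbitrary illustration; in practice, where to
# draw it is a contested, culture-dependent design decision.

OLD_AGE_THRESHOLD = 65  # assumed cutoff, not taken from the study


def is_considered_old(estimated_age: int) -> bool:
    """Classify a pedestrian as 'old' based on a hard age boundary."""
    return estimated_age >= OLD_AGE_THRESHOLD


# A one-year difference in estimated age flips the classification:
print(is_considered_old(64))  # False: just under the boundary
print(is_considered_old(65))  # True: on the boundary
```

The sketch shows why the boundary itself, rather than the preference it encodes, is where much of the ethical weight ends up: a sensor's age estimate that is off by a single year can change the outcome entirely.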
The Moral Machine experiment has been running since 2016 and has engaged 2 million participants from 233 countries, dependencies or territories. In total, MIT received almost 40 million responses.