Researchers from the MIT Media Lab's Scalable Cooperation group have surveyed over 2 million people from over 200 countries about morally difficult decisions that autonomous vehicles might have to make. Participants played 'Moral Machine', an online game designed by the researchers. In a series of potential situations on the road (mostly sudden brake failures), they had to choose the lesser of two evils: for example, whether it is more morally acceptable to kill a man in the car or a woman with a child crossing the street.

Alongside roughly 40 million recorded choices, the researchers gathered extensive demographic data, such as the age, gender, and education level of the participants. However, the only patterns in moral decision making they found were geographical and cultural. For example, citizens of "eastern" countries (a term defined by the researchers, referring broadly to Asia) were less likely to save young people over the elderly than participants from elsewhere.

"The most emphatic global preferences in the survey are for sparing the lives of humans over the lives of other animals; sparing the lives of many people rather than a few; and preserving the lives of the young, rather than older people," summarised Edmond Awad, the lead author of the paper published in Nature.

The researchers hope the data gathered in the 'Moral Machine' project can help engineers program self-driving cars to act in line with particular societal norms in an emergency. Moreover, public engagement is key to building trust in the new technology, a necessary condition for adopting autonomous vehicles on a large scale. So far "public interest in the platform surpassed our wildest expectations," reported Iyad Rahwan, the leader of MIT Media Lab's Scalable Cooperation group. If you would also like to browse, or even create, possible road situations, visit http://moralmachine.mit.edu/.
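To make the idea of "operating according to societal norms" more concrete, here is a minimal, purely hypothetical sketch of how aggregated preferences like the three Awad describes could be encoded as weights in a toy decision rule. The category names, weight values, and the score and decide functions are all illustrative assumptions, not values or methods from the study itself.

```python
# Purely illustrative sketch: turning aggregated survey preferences into
# weights for a toy emergency-decision policy. All weights and functions
# below are hypothetical examples, NOT values from the Moral Machine study.

# Hypothetical weights, loosely ordered after the three global preferences
# reported above: humans over animals, and the young over the old.
PREFERENCE_WEIGHTS = {
    "human": 1.0,
    "animal": 0.2,
    "young": 1.0,
    "old": 0.6,
}

def score(group):
    """Sum the weighted 'cost' of harming everyone in a group.

    `group` is a list of (species, age) tuples, e.g. ("human", "young").
    Larger groups accumulate a higher cost, which mirrors the survey's
    'many over few' preference.
    """
    return sum(
        PREFERENCE_WEIGHTS[species] * PREFERENCE_WEIGHTS[age]
        for species, age in group
    )

def decide(option_a, option_b):
    """Pick the option whose victims carry the lower total harm score."""
    return "A" if score(option_a) < score(option_b) else "B"

# Example dilemma: swerving harms one elderly pedestrian (option A);
# staying on course harms two young pedestrians (option B).
# The toy policy chooses A, the lower-cost outcome under these weights.
print(decide([("human", "old")], [("human", "young"), ("human", "young")]))
```

A real system would of course face far messier inputs and far harder questions about whose preferences count; the sketch only shows the mechanical step of converting ranked preferences into a comparable score.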