- Automated Vehicles
- Machine Ethics
- Intelligent Transportation Systems
- Automated Driving
Why a Global Conversation on Automated Vehicle Ethics Is Necessary
The only thing more difficult to solve than the technical challenges of automated vehicles (AVs) may be the moral dilemmas the technology presents. A global survey by researchers at the MIT Media Lab revealed that ethical and cultural differences between countries could make the rollout of AVs more complicated than previously thought.
The researchers launched the Moral Machine platform in 2016 to capture opinions on what AVs should do in hypothetical moral dilemmas modeled on the classic trolley problem. The survey gathered nearly 40 million decisions in 10 languages from millions of people across 233 countries and territories. The figure below shows the results of the study, demonstrating that the strongest preferences are for sparing humans over animals, sparing more lives over fewer, and sparing younger lives over older ones.
Global Preferences for AV Ethics
(Source: The Moral Machine Experiment)
Significant Regional and Cultural Variation Exists
Opinions on these preferences varied significantly by region, particularly between individualistic and collectivistic cultures. The researchers identified three distinct moral clusters of countries:
- Western cluster (North America and most of Europe)
- Eastern cluster (East Asian countries such as Japan and Taiwan, along with Indonesia, Pakistan, and Saudi Arabia)
- Southern cluster (primarily Latin American countries of Central and South America)
Significant differences were found between the clusters, including:
- The preference to spare younger people over the elderly was much less pronounced in the Eastern cluster compared to the others.
- Countries in the Southern cluster showed a much weaker preference for sparing humans over pets, compared to the other two clusters.
- The Southern cluster had stronger preferences for sparing women over men and sparing fit individuals over the heavyset, compared to the other two clusters.
These findings raise a number of important questions: Can machine ethics truly be universal if values differ by culture? Should moral dilemma algorithms for AVs differ by region to accommodate cultural differences? Should AVs even be able to assign higher or lower values to different forms of human life (e.g., children over the elderly)?
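One way to make the second question concrete is to imagine regional preference strengths as tunable parameters in a decision policy. The sketch below is purely illustrative: the cluster names follow the study, but the numeric weights and the `preference_strength` helper are invented for this example and are not the study's actual coefficients.

```python
# Hypothetical sketch: regional cluster preferences expressed as weights.
# All numeric values are illustrative assumptions, NOT data from the study.
REGIONAL_WEIGHTS = {
    # weight closer to 1.0 = stronger preference for sparing the first-listed group
    "western":  {"humans_over_pets": 0.9, "young_over_old": 0.6},
    "eastern":  {"humans_over_pets": 0.9, "young_over_old": 0.2},
    "southern": {"humans_over_pets": 0.6, "young_over_old": 0.6},
}

def preference_strength(cluster: str, dimension: str) -> float:
    """Look up how strongly a regional cluster weights a given preference."""
    return REGIONAL_WEIGHTS[cluster][dimension]

# The Eastern cluster's weaker preference for sparing the young appears
# as a smaller weight on that dimension than in the Western cluster.
assert (preference_strength("eastern", "young_over_old")
        < preference_strength("western", "young_over_old"))
```

Even this toy framing surfaces the governance problem: someone must choose the weights, and a vendor shipping different values per market would be encoding regional moral judgments into software.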
Public Deliberation Is Necessary
The implications of the Moral Machine Project are far-reaching and should, at a minimum, help foster a much-needed global conversation on the ethics of AVs. If these vehicles are to be programmed with moral algorithms that assign different values to hitting an elderly person versus a child, a man versus a woman, or a cow versus a dog, then society at large must have input into those moral decisions. Public deliberation is needed to determine whether and how AVs will be programmed to assign value to different people and situations. Citizens and policymakers, not technology vendors, need to oversee this decision-making process.
In 2017, Germany became the first country to attempt to tackle these difficult questions and propose ethics rules for AVs. Included in the proposed rules is a clause that would prohibit AVs from making any distinctions based on personal features, such as age or gender. Considering that views on the importance of personal features vary by region, this seems like a smart move. The rules also state that human life should take priority over animal life, but perhaps most notably, no clear stance was taken on whether the few should be sacrificed to save the many. This will likely be the most difficult moral dilemma going forward: passengers in the vehicle will generally favor their own safety, even if it means risking the lives of more pedestrians.