What drives decisions by autonomous vehicles in dire situations?
Despite the promise of dramatic reductions in accident-related fatalities, injuries and property damage, along with significant gains in transportation efficiency and safety, consumers aren’t as excited about autonomous vehicles as the auto industry is. Research shows that people are uneasy about life-and-death driving decisions being made by algorithms rather than by humans. Who determines the ethics of those algorithms?
Bill Ford Jr., executive chairman of Ford Motor Co., said recently that these ethics must be derived from “deep and meaningful conversations” among the public, the auto industry, the government, universities and ethicists.
Azim Shariff, Assistant Professor of Psychology & Social Behavior at the University of California, Irvine, and his colleagues – Iyad Rahwan, Associate Professor of Media Arts & Sciences at the MIT Media Lab in Cambridge, Mass., and Jean-François Bonnefon, a Research Director at the Toulouse School of Economics in France – have created an online survey platform called the Moral Machine to help promote that discussion.
Launched in May, the Moral Machine has already drawn more than 2.5 million participants from over 160 countries.