
Should autonomous vehicles protect individuals or the greater good?

Automotive regulators will need to define how self-driving cars should handle ethical dilemmas. A new study reveals public opinion is altruistic but impractical.
Written by Kelly McSweeney, Contributor

Image via MIT, courtesy of the researchers

As self-driving cars hit the streets, classic ethical dilemmas arise. Autonomous vehicles (AVs) will face tough choices, such as whether to run over pedestrians or to swerve and sacrifice the car's passengers to save them. Policymakers will need to set rules for how a vehicle's algorithms make potentially fatal decisions.

A team of researchers polled the public to gauge opinion on self-interest versus the greater good. Most people expressed views that were paradoxically both altruistic and selfish.

In a paper published in Science titled "The social dilemma of autonomous vehicles," the researchers write, "We found that participants . . . approved of utilitarian AVs (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would themselves prefer to ride in AVs that protect their passengers at all costs."

Iyad Rahwan, an MIT professor who coauthored the study, tells ZDNet:

Asimov's First Law of Robotics says that a robot may not injure a human being or, through inaction, allow a human being to come to harm. But what happens when the robot has to choose between two groups of human beings? And what happens when one of those humans is the owner of the robot? This is the point where AIs face moral tradeoffs, something humans do, and struggle with, all the time.

Most of the survey participants said that, if necessary, a driverless car's artificial intelligence should sacrifice its passengers in order to minimize total harm. However, the same people also said they would be less likely to purchase a driverless car that followed those noble principles.
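To make the tradeoff concrete, here is a purely hypothetical sketch in Python, not code from the study or from any real AV system, contrasting the two decision rules the survey asked about: a utilitarian rule that minimizes total expected casualties, and a self-protective rule that favors the vehicle's own passengers first. All names and numbers below are invented for illustration.

```python
# Hypothetical illustration only -- not from the study or any real AV system.
from dataclasses import dataclass


@dataclass
class Outcome:
    description: str
    passenger_harm: int   # expected passenger casualties
    pedestrian_harm: int  # expected pedestrian casualties


def utilitarian_choice(outcomes):
    """Pick the outcome with the fewest total casualties."""
    return min(outcomes, key=lambda o: o.passenger_harm + o.pedestrian_harm)


def self_protective_choice(outcomes):
    """Pick the outcome with the fewest passenger casualties,
    breaking ties by total casualties."""
    return min(outcomes, key=lambda o: (o.passenger_harm,
                                        o.passenger_harm + o.pedestrian_harm))


# An invented dilemma of the kind the survey described.
dilemma = [
    Outcome("stay in lane, strike pedestrians", passenger_harm=0, pedestrian_harm=3),
    Outcome("swerve into barrier", passenger_harm=1, pedestrian_harm=0),
]

print(utilitarian_choice(dilemma).description)      # swerve into barrier
print(self_protective_choice(dilemma).description)  # stay in lane, strike pedestrians
```

The two rules agree in most driving situations; they diverge only in the rare dilemma cases the survey probed, which is exactly why respondents could endorse one rule in principle while preferring the other for their own car.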

According to Rahwan, "This is the classic signature of a social dilemma: we want to live in a world in which everyone chooses what is in the public good, but we are happy to make an individually selfish choice. But if everyone thinks this way, we lose the public good."

Now that we know public opinion on self-driving cars, how should that shape policy? Perhaps not at all. Even though most people favor AVs with utilitarian algorithms in concept, they also admit they would not want to ride in those cars. If robotic systems are safer than human drivers, the best policies are those that encourage more people to embrace autonomous vehicles. Rahwan says:

If regulators require car manufacturers to implement utilitarian algorithms, they may inadvertently cause more deaths on the road by delaying the adoption of the safer technology. So if regulators were truly utilitarian, they would not impose such regulation, since most accidents will not involve ethical tradeoffs, and so most accidents would be avoided by simply having more people opt into a driverless future.