A new study, published in Science yesterday, set out to reveal how we really feel about the way autonomous vehicles (AVs) should respond to a no-win traffic accident.
The researchers asked 2,000 US residents a series of hypothetical questions to gauge their moral attitudes towards AVs.
When asked simply whether AVs should be programmed to sacrifice one passenger rather than kill ten pedestrians, 76% of respondents agreed that this would be the better outcome. The same group overwhelmingly agreed that AVs should be programmed in this way, to minimise the number of casualties.
When only one pedestrian’s life was at stake, however, respondents thought it was best for the AV to protect the passenger’s life.
The percentage of respondents who preferred to protect the passenger’s life decreased as the number of pedestrians increased.
So far, so good – right? This all seems easy enough to follow…
The pattern found in the first two studies began to shift when participants were asked to imagine themselves and another passenger in the car. The largest swing towards self-interest came when the other passenger was a family member.
Despite this shift, the results remained largely the same.
The main discrepancy came when participants were asked whether they would actually buy an AV programmed to minimise the number of casualties rather than protect its passengers.
When it came to buying the car, as opposed to just imagining the situation, participants expressed a keener interest in protecting the passengers than in saving pedestrians, even when the number of pedestrians was as high as ten or 20.
The results showed that when it came to crunch time, participants would rather buy a car that protected them, despite agreeing that it would be more moral to protect the majority.
It would appear that people praise cars that are programmed to minimise casualties, but they wouldn't want to buy one themselves.
The study's authors argue:
‘This is the classic signature of a social dilemma, in which everyone has a temptation to free-ride instead of adopting the behaviour that would lead to the best global outcome.’
This is sure to cause a huge dilemma for car manufacturers. If consumers are tempted to act against the most ethical outcome and would rather buy a car that acts in their self-interest, what should a manufacturer do? Be moral, or sell more cars?
Surely the way around this is to keep the moral decisions away from the manufacturers and consumers, and let governments decide how the AVs should be programmed?
Perhaps, but the consumers don’t agree.
Participants in the study were a third less likely to say they would buy an AV if its programming were regulated by government than if it were not.
Azim Shariff, one of the academics behind the study, said:
‘Having government regulation might turn off a large chunk of the population from buying these AVs, which would maintain more human drivers, and thus more accidents caused by human error. By trying to make a safer traffic system through regulation, you end up creating a less safe one, because people entirely opt out of the AVs altogether.’
The authors of the study conclude that this is ‘one of the thorniest challenges in artificial intelligence today’, and they call for further academic study in the area to move the conversation forward.
Driverless cars are undoubtedly one of the most exciting technological innovations of our time.
Huge companies, from Google to Tesla, are making strides in developing their AVs, and even the Queen is getting behind them.
However, there are clearly still some very important questions to be addressed, and it looks like it may be a while yet before we’re ready to put our lives and those of others in the hands of a pre-programmed algorithm.