Rutronik News

The AI comes first, followed by the ethics


Autonomous driving is one of the hot topics in the automotive industry. For a number of years, both traditional carmakers and tech companies such as Google have been trying to bring self-driving cars to mass production. There were also rumors that even Apple was working on an autonomous vehicle, which – in keeping with the company's naming tradition – would probably have been called the iCar. No marque has so far actually produced a market-ready model. Meanwhile, there are reports of accidents involving self-driving cars – some innocuous, some fatal, some self-inflicted, and some caused by human road users. So the idea of streams of autonomous cars taking over the world's expressways any time soon remains a pipe dream for the time being.

Yet even though car accidents may one day be a thing of the past thanks to constant AI monitoring, there will always be the human factor: No AI in the world will stop children from chasing a ball into the road. And even with sophisticated predictive maintenance technology, brake failure – and a resulting potentially fatal situation – can never be fully ruled out. In such cases, the AI must decide what to do within a matter of milliseconds, long before the comparatively slow human being can intervene and make a decision. But how is the ghost in the machine supposed to decide? Should it accept the death of one person rather than several? Should it save young people who still have their whole lives ahead of them – or old people whose life experience commands respect? And should it spare socially prominent figures or people further down the ladder?

These questions cannot and should not be decided by AI alone; it needs a helping hand from humans. This, in turn, calls for a moral compass to provide a basis on which the artificial intelligence can act. Projects like the Moral Machine have been set up to develop such a compass: Here, people from all over the world can assess various accident scenarios. By the end of October, more than 40 million such decisions from 117 different countries had been collated. An international team of researchers has now evaluated them – with some astonishing results. Although the Moral Machine does not provide a representative sample – not least because a disproportionately high number of young men took part compared to other groups – the study nonetheless offers some interesting insights into cultural socialization in various countries around the world.

The researchers sorted the users' decisions into a total of nine categories. The questions include whether women should be protected more than men, whether human life should take precedence over animals, and whether fit people deserve more consideration than overweight or ill people. There is also the option of comparing two countries with one another and against the global average. Generally speaking, the global average hovers relatively undecided between the two poles: The number of people saved, for example, plays a role for just 51 percent of the participating users. The broadest global consensus concerns the precedence of human life over animals, with two-thirds of people inclined to save humans rather than animals.
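To make the aggregation concrete, here is a minimal sketch – with invented data and field names, not the Moral Machine's actual pipeline – of how such preference shares per category could be computed from pairwise scenario choices:

```python
from collections import defaultdict

# One record per answered dilemma: the ethical dimension the scenario
# contrasted, and whether the respondent chose the first-named option.
# The data and field names here are hypothetical, purely for illustration.
responses = [
    {"dimension": "humans_over_animals", "chose_first": True},
    {"dimension": "more_over_fewer", "chose_first": True},
    {"dimension": "more_over_fewer", "chose_first": False},
    {"dimension": "humans_over_animals", "chose_first": True},
]

def preference_shares(records):
    """Return, per dimension, the share of respondents who chose
    the first-named option."""
    totals = defaultdict(int)
    firsts = defaultdict(int)
    for record in records:
        totals[record["dimension"]] += 1
        firsts[record["dimension"]] += record["chose_first"]
    return {dim: firsts[dim] / totals[dim] for dim in totals}

print(preference_shares(responses))
# {'humans_over_animals': 1.0, 'more_over_fewer': 0.5}
```

Figures like the 51 percent or two-thirds mentioned above are simply such shares; each of the nine study categories would be one more value of the hypothetical dimension field.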

There are some interesting observations as far as Germany is concerned: For one thing, the Germans – who as a country still consistently resist speed limits on the autobahn and don't like being told what to do behind the wheel – tend toward practically no intervention in the decisions taken by the AI: Just one in five would intervene in a dangerous situation rather than trust the artificial driver. At the same time, social status is almost irrelevant to the Germans: Fewer than a third of participants stated that it made a difference to them whether they ran over a doctor, a firefighter, or a YouTuber. And life for women in Germany clearly tends to be a little more dangerous, because the participating users were less concerned about gender than the global average when making a decision. Incidentally, the Swiss are most similar to the Germans, whereas users in Venezuela decide entirely differently: It makes virtually no difference to them whether they protect people or animals, social status matters all the more – and women can expect to be shown more consideration on the roads.

Venezuela's decisions, in turn, are not unlike Colombia's, which suggests that countries on the same continent or within the same cultural milieu tend to decide similarly. This assumption is backed up by the fact that the USA, Canada, Australia, and Great Britain all made very similar decisions – which in turn differed hugely from those given in Brunei. Meanwhile, India and Pakistan each differ most from Mongolia; conversely, the two hostile neighbors are relatively similar when it comes to moral decisions on the road. Interestingly, however, the similarity between India and Sweden is just as high – and even higher than the commonalities India shares with the former colonial power, Great Britain.

Given the noticeably different decisions taken in each region of the world, one thing is clear in any case: No consensus has (yet) been reached on definite rules that the AI in an autonomous vehicle could follow. If in doubt, an autonomous car would constantly have to update its "moral compass" depending on where it was located; otherwise it could do precisely the opposite of what is considered socially acceptable – for instance in a country where a certain section of the population is held in especially high esteem.
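To illustrate how unwieldy that would be, here is a toy sketch of what such a location-dependent compass might look like as plain configuration – all country codes, parameter names, and weight values below are invented assumptions, not results from the study or rules any manufacturer prescribes:

```python
# Hypothetical per-country weight sets for a location-dependent
# "moral compass". All values are invented for illustration and are
# NOT results from the Moral Machine study.
REGIONAL_WEIGHTS = {
    "DE": {"spare_more_lives": 0.8, "spare_humans_over_animals": 0.9},
    "VE": {"spare_more_lives": 0.5, "spare_humans_over_animals": 0.5},
}

# Neutral fallback for regions with no known local consensus.
NEUTRAL = {"spare_more_lives": 0.5, "spare_humans_over_animals": 0.5}

def compass_for(country_code: str) -> dict:
    """Look up the weight set for the vehicle's current country,
    falling back to neutral weights where none is defined."""
    return REGIONAL_WEIGHTS.get(country_code, NEUTRAL)

print(compass_for("DE"))  # country-specific weight set
print(compass_for("XX"))  # unknown region: neutral fallback
```

Even this toy version exposes the problem: Every border crossing would silently change the vehicle's behavior, and the fallback for unmapped regions is itself a moral choice someone has to make.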

The Moral Machine shows one thing at any rate: A great deal still has to happen – not just from a technical perspective, but also in terms of values and standards – before we can really be driven to work or on vacation safely and without worry. Until then, we will have to continue to trust our own judgement. And that doesn't necessarily have to be any worse than the cold rationality of an AI.