Watch: How AI shapes human morality

As part of our Campaign for Social Science events programme, last week we held a webinar featuring Professor Iyad Rahwan discussing various behavioural experiments that explore how artificial intelligence (AI) can shape human morality.

Chaired by Professor Bobby Duffy FAcSS, this was the first webinar in a series of events taking place this year, in partnership with the UK Evaluation Society and the Social Research Association, on the theme of how we can evaluate, understand, and manage different aspects and uses of AI as it continues to rapidly change our economy and our society.

Iyad, who is Director of the Max Planck Institute for Human Development, began by explaining that humans and machines influence one another, and emphasised the importance of social scientists and computer scientists collaborating in this space to explore new questions.

He went on to use the example of the self-driving car to illustrate how humans influence machines and vice versa. Iyad suggested that self-driving cars give us new moral capabilities, or affordances, in that they enable us to make moral decisions that were not previously possible. He commented that a self-driving car can assess an emergency situation in milliseconds and respond in pre-programmed ways, whereas humans often can’t process everything quickly enough and simply swerve or brake and hope for the best.

Iyad said, “We could still decide to programme the car to behave randomly like a human would, but we now have this new superpower to do something different, to do something else, to do something perhaps better if we consider that so.”

Iyad explored the moral questions this poses. He referred to the ‘Trolley problem’ and explained that his team have conducted similar studies to understand people’s views on how self-driving cars should respond to emergency situations.

He said, “The summary of our findings is that people want driverless cars to sacrifice the passenger even to save more [pedestrian] lives, except my car. So, if they [individuals in the study] think of themselves as citizens, they have one opinion, but if they think of themselves as consumers, they have a different opinion.”

Iyad then went on to discuss further moral dilemmas posed in the Moral Machine experiment, which collated over 100 million decisions from people worldwide. The scenarios asked people to decide between sparing animals or humans and younger or older lives, and to make value judgements based on perceived status differences between individuals. The results varied hugely, with cultural differences also playing a part.

He said, “I think this [study results] shows that we need to figure out our own ethics and we need to figure out our own values and we can’t just ask people to vote over how we should programme these autonomous vehicles. It’s something that requires a legal approach and a constitutional approach that preserves human rights and fundamental dignities.”

Iyad explained that it wouldn’t be realistic for one country or manufacturer to decide on the moral programming for all machines, pointing to findings from behavioural studies highlighting that cultural differences appear to shape people’s moral viewpoints.

Iyad explored how human morality appears to change when people delegate tasks to machines. He referred to studies showing that people are more likely to cheat if they can delegate the cheating to a machine – highlighting how machines can influence our moral choices. However, he also noted that the way participants delegate a task to a machine affects how much cheating takes place, for instance whether they push a button or tell the machine what to do using natural language.

He said, “Now that machines are more human-like […] People seem to have an inhibition against asking those machines in natural language to cheat more than they would ask a human delegate.”

Iyad ended his presentation with a call for more planning, suggesting that we spend more time trying to predict the social outcomes of implementing new technologies, so that we can build in regulations, safeguards and appropriate moral programming in advance and limit unintended negative consequences.

“We invented social media and then we discovered all the problems that it can cause, all the spread of fake news, the polarisation, political polarisation that it facilitates […] And I always wonder, if we went back before the invention of Facebook, Twitter and these platforms, could we have done studies in the lab perhaps, in more controlled settings, where we imagined such a technology and where we presented some recommendations for how to better design them or how to regulate them, for instance, so that we have a bit more time to think about these questions rather than just merely reacting.”

Watch the full webinar below.