“The way that we should be designing technology is that it should maximise our freedom without curtailing our control.”
As part of our Campaign for Social Science events programme, last week we held a webinar featuring Professor Chris Summerfield, who discussed the different ways in which AI is being deployed and the risks it poses through its ability to influence people.
Chaired by Professor Slava Jankin of the University of Birmingham, this was the first webinar this year in our series, in partnership with the UK Evaluation Society and the Social Research Association, on the theme of how we can evaluate, understand, and manage different aspects and uses of AI as it continues to rapidly change our economy and our society.
Chris, who is Professor of Cognitive Neuroscience at the University of Oxford and Research Director at the UK AI Security Institute, began by stating that the views we hold about AI are incredibly polarised, and that this masks several fundamental questions about the technology revolution. He went on to explain how AI has implications for a range of sectors including democracy and freedom, the information ecosystem, economic prosperity and equality, labour market dynamics, crime and social stability, and more.
Chris described a core challenge that has emerged from the development of AI models to which human users can delegate tasks: the idea of ‘AI alignment’, which itself involves two associated problems. The first is developing training pipelines that result in desirable behaviour by the AI model; the second is identifying what those desirable behaviours are in the first place.
He said, “I think the immense challenge that AI poses for us today is it really brings this question to the fore: what is it that we want from technology and how can we achieve it?”
Chris went on to explore why this is a difficult question to answer: the preferences and values people hold not only differ between individuals but also change over time.
He explained concepts including agency, influence, intelligence, freedom and control, and how these are intertwined with how empowered or disempowered we feel. He then linked this to AI development and use, arguing that we need to think about how technology can empower us with agency rather than rob us of it.
He said, “Many of the significant societal shifts that we have experienced over the entire history of humanity, dating right back into our pre-history, can be thought of as exchanging freedom and control in some way. So many of the changes, what they did was increase our ability to do more, so we could experience more things, do more things, in part because technology allows us to go places and achieve things that we couldn’t do before. But very often that came at a cost of the range of degree of influence that we as individuals had over our personal world.”
From this, Chris highlighted how, in the current digital revolution, we are ceding control to technology as it provides new ways to interface with each other and with the systems and infrastructure that govern our lives, creating a more complex system in which it is increasingly difficult for individual citizens to exert influence. He pointed to social media and smartphones, alongside AI, as technologies that enable us to do more but that can also lead to gradual disempowerment as we delegate more to them, reducing our autonomy and, in some cases, affecting our wellbeing.
He said, “We’re living through a new era that requires a new response and a fresh look at the ways that technology potentially disempowers us.”
Chris then shared the findings of a recent study that used consumer usage data from AI chatbots to understand whether trends of disempowerment were already taking place. The study found that AI systems could lead to reality distortion, value distortion and action distortion, with mild to moderate cases appearing once in every few hundred conversations and severe cases once in every thousand.
He said, “The leading consumer chatbots have 900 million global users every week. So, 1 in 1,000 of them [conversations with chat bots] means that a very large number of them bear the hallmarks of these types of disempowerment.”
Chris went on to further unpick what it might mean for AI to disempower us. He explored the idea of manipulation and pointed to studies he has conducted to understand whether AI can be deployed to manipulate, and therefore disempower, us, examined through the lenses of persuasion, deception, advice-following and relationships with AI. The results showed that AI could be highly persuasive and could extract information from users, that people were prone to following AI advice, and that relationships with AI can lead to self-reinforcing attachment dynamics.
Chris ended his presentation by saying he believed we are between two waves of technological development: the first producing models that talk to you, and the second producing AI models or agents that can act for you. He concluded on a positive note, however, highlighting that mitigating disempowerment means training models to empower people with agency, maximise learning and promote cooperation.
He said, “I think that whatever is going to happen in the future we need to think carefully about how we can train AI systems in a way which maximises human agency and allows us to maintain autonomy over our values, over our actions, so that we are not further disempowered by AI technologies.”
A Q&A followed Chris’ presentation, and the full webinar is available to watch below.