Watch: Priorities for action on AI: what does the social science evidence say?

With the UK Government’s AI Safety Summit taking place in November and the accompanying Bletchley Declaration, the politics of artificial intelligence looks set to be a critical theme for the next parliamentary term. But what does this mean for the use, deployment and regulation of AI?

As part of our Campaign for Social Science’s ongoing project, Election 24: Ideas for change based on social science evidence, we recently held a webinar featuring Professor Kaska Porayska-Pomsta and Professor Fraser Sampson to discuss the implications of AI use and regulation for future policy.
In the webinar, run in partnership with Sense about Science and chaired by Tracey Brown OBE, both Kaska and Fraser offered their insights and perspectives, drawing on examples of AI use in education and in policing and security.

Kaska began by outlining what AI is, defining it as a special form of technology designed to act as an autonomous agent: one that can act in, adapt to and learn from its environment. Her examples ranged from technology used in space exploration to systems that act as educational tutors.

She went on to use AI in education research as an example of a field whose evidence can inform the effective design and deployment of AI in educational settings so that it is valuable to society. Kaska explained that the field, which has developed over the past 50 years, focuses on lifelong learning stretching across different ages and contexts. She said, “AI has gone through quite an extraordinary journey of first trying to faithfully replicate human teaching and learning practices in order to scale up best educational practices to a field which has explored new approaches to supporting learning through AI, which has led to some critical exemplars of new ways to nurture learners’ autonomy, self-regulation and self-actuation amongst other things.”

Kaska then explored some of the evidence this field of research has produced, highlighting how it does not just engineer products but also offers a way to study human learning and development. She said, “We have evidence, for example, of intelligent tutoring systems and adaptive learning environments, so environments which are driven by AI adaptive capabilities, that shows that these systems are at least as effective in many cases in improving learning outcomes in one-to-one tutorial settings, in particular in well-defined domains such as STEM subjects.”

To summarise, she argued that it is critical to draw on different research fields, and on the knowledge and evidence of AI’s different stakeholders, to inform responsible AI policy and decision-making.

Fraser began his remarks with the example of how unregulated AI has led to industrial action in Hollywood. He then explored how existing legal frameworks and policy are attempting to fit around AI, using several examples where AI-driven devices may help reduce crime and are being put into practice in the judicial system. Fraser said, “What is the basis for believing that our prehistoric concepts of copyright or guilt can be adapted to meet the peculiar challenges of an AI world?”

As he pointed out, AI-driven devices can capture and store huge amounts of data and are not prone to memory loss over time, yet currently, in Western jurisdictions, a witness in court must still be a human being. Fraser said, “AI capability is existentially new and doesn’t something existentially new call for an existentially new form of regulation and governance?”

He went on to discuss what AI regulation, and its associated risks, could look like, using examples from crime and policing to illustrate ideas for regulatory changes and whether society would be comfortable with using AI to support the criminal justice system.

Finally, Fraser ended on three things to remember when considering AI regulation: that AI’s learning strategy is confirmation bias; that regulation should not be mistaken for ethics; and that AI accountability also requires societal input, which is a matter of politics.

As he pointed out, “How are we to regulate a world where we can have, and be, everything, everywhere, all at once?”

Watch the recording below to hear more from our speakers.

[Watch on YouTube: https://www.youtube-nocookie.com/embed/Gs0E2I2-yWI]

‘Priorities for action on AI: what does the social science evidence say?’ is part of Election 24: Ideas for change based on social science evidence, a Campaign for Social Science project which draws on a range of social science research to suggest evidence-based social policy directions ahead of a UK general election in 2024.

Find out more