In this guest feature Rebecca Jenkins, Ruben Lamers James, and Anne Hausknecht from Swansea University discuss the increasing prevalence and danger of ‘deepfakes’ and emphasise the vital role of multidisciplinary social science in helping us to understand and mitigate the risks posed.

Assessing the Impact of Deepfakes on Trust in Evidence
Ever since they first appeared on Reddit in 2017, ‘deepfakes’ – synthetic media created or manipulated using artificial intelligence (AI) – have become much more realistic-looking, more prevalent, and harder to detect. Some authors (e.g. here and here) have warned of an impending “infocalypse”, bringing about an era in which seeing is no longer believing. At the same time, “user-generated evidence” (i.e., information captured by ordinary users through their personal digital devices and used in legal adjudication) is increasingly being relied upon to pursue accountability for atrocity crimes. As we have noted, one of the biggest dangers surrounding deepfakes is not that they could end up as evidence in court, but rather that real footage might be dismissed as possibly fake. But how aware are people of deepfakes, and how good are we at telling what is real from what is fake? How do judges, lawyers, and jurors deal with the increasing prevalence of deepfakes? Can we still trust digital evidence in an era of deepfakes?
Answering these questions will require a joint effort across different social science disciplines. One example of such an effort is the TRUE Project, which combines linguistics, psychology, and law to explore the impact of deepfakes on trust in user-generated evidence. In this post, we outline some of the key considerations with regard to user-generated evidence in an era of deepfakes.
How good are we at spotting deepfakes?
Psychological studies examining the human ability to detect deepfakes increasingly point to two main issues: poor detection ability and overconfidence. Not only are we bad at differentiating deepfakes from real content, we are also falsely confident that we can tell the difference. As deepfake generation capabilities improve, we will become increasingly reliant on other sources to differentiate real from fake content, such as AI-detection models, which may already surpass human detection abilities. Unfortunately, we are less likely to take advice when we are overconfident in our own abilities. Reducing this overconfidence may therefore be imperative in making us less susceptible to deepfake-based misinformation.
Our study – which investigated whether providing feedback on participants’ performance can reduce this overconfidence – found mixed effects, with no improvement detected in either detection accuracy or confidence calibration (the discrepancy between confidence and accuracy). However, a small decrease in confidence was observed in one of the feedback conditions compared with the control. The study also found that, following feedback, participants became more suspicious that subsequent videos were deepfakes than participants in the control condition. This finding highlights the importance of ensuring that interventions do not lead to unintended backfire effects that further induce distrust of authentic content.
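To make the calibration measure concrete, the short Python sketch below computes a simple overconfidence score as the gap between mean confidence and mean accuracy across detection trials. It is purely illustrative – it is not the TRUE project’s analysis code, and the trial data and variable names are invented for the example.

    # Minimal illustrative sketch of a confidence-calibration (overconfidence) score.
    # Each trial records: did the participant judge the clip a deepfake?,
    # was the clip actually a deepfake?, and the participant's confidence (0-1).
    trials = [
        (True,  True,  0.9),
        (True,  False, 0.8),   # false alarm: authentic clip judged fake
        (False, True,  0.7),   # miss: deepfake judged authentic
        (False, False, 0.6),
    ]

    accuracy = sum(judged == actual for judged, actual, _ in trials) / len(trials)
    mean_confidence = sum(conf for _, _, conf in trials) / len(trials)

    # Positive values indicate overconfidence; zero indicates perfect calibration.
    overconfidence = mean_confidence - accuracy
    print(f"accuracy={accuracy:.2f}, confidence={mean_confidence:.2f}, "
          f"overconfidence={overconfidence:+.2f}")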
The impact of deepfakes on criminal trials
Robert Chesney and Danielle Citron have warned that, as deepfakes become more prevalent, it becomes easier to dismiss every piece of (authentic) evidence as fake. The authors call this the “liar’s dividend”. So far, there have only been a few cases in which the liar’s dividend was invoked, albeit unsuccessfully. Examples include the criminal trial of Guy Reffitt, one of the insurrectionists who stormed the US Capitol on January 6th 2021; the case of ‘cheerleader mom’ Raffaella Spone, falsely accused of creating deepfakes of her daughter’s teammates; and a case against Tesla over the failure of its semi-autonomous driving system, in which it was claimed that an interview with Elon Musk might have been a deepfake.
The TRUE project is currently compiling a database of domestic and international cases featuring user-generated evidence, which will be available on the project website from June 2025. So far, we have not seen the liar’s dividend employed in accountability procedures for mass atrocities – yet. But as more and more deepfakes circulate in the context of the Russia/Ukraine and Israel/Gaza conflicts, the liar’s dividend may well become a popular strategy in upcoming trials.
Trust in user-generated evidence
To examine how laypeople assess and weigh user-generated evidence in a realistic scenario, the TRUE project held a mock jury trial exercise in September 2023. A fictional criminal trial involving a (real) video of an airstrike in Yemen was recorded and shown to mock jurors, whose deliberations were transcribed and linguistically analysed to uncover laypeople’s perceptions of user-generated evidence.
Linguistic analysis of the discourse surrounding ‘trust’ in these deliberations shows that juries generally perceive user-generated evidence to be authentic, owing largely to the perceived trustworthiness and credibility of expert witness testimony. The notion of ‘trust in experts and science’ has received increased attention within the professional contexts of jury trials and policymaking. The TRUE project’s research also shows that concerns around deepfakes and manipulation are not central to juries’ evaluation of the evidence. Instead, the main topic of discussion is whether the evidence is sufficient to prove a defendant’s guilt beyond reasonable doubt. Moreover, despite accepting expert analysis of video evidence, juries express doubts about relying on it because the creator of the video is unavailable to testify. The biggest concern emerging from this study is therefore not deepfakes, but the trustworthiness of the source of user-generated content.
Can we still trust digital evidence in an era of deepfakes?
Audio-visual manipulation is not a new phenomenon, and judges, lawyers, and juries have been challenging the adage that “seeing is believing” for close to two centuries. Abraham Lincoln’s head was famously “photoshopped” onto the slightly more imposing body of Southern politician John Calhoun in the 1860s, and the first reported trial concerning photographic manipulation took place in 1869. Every advance in photography, film, and audio recording was quickly followed by means and technologies to manipulate content. Yet deepfake technology has significantly increased the speed and volume at which fake content can be created, and human and machine capabilities for spotting deepfakes are lagging behind. A similarly troubling phenomenon is misattribution – such as the video purporting to show the Israeli Embassy in Bahrain on fire, or this picture claiming to show a demonstration in Canberra, Australia.
Addressing the challenges outlined in this piece will require cooperation among different social science disciplines. Legal practitioners should start preparing for the increasing prevalence of deepfakes and misattributed content and ensure that authentication measures are in place to prevent fake evidence from being tendered. Initiatives such as the Coalition for Content Provenance and Authenticity (C2PA) are invaluable here, as they provide a way to verify the provenance of audio-visual materials. Media literacy and awareness of deepfake technology and detection tools are key. In addition, civil society actors and investigators should employ robust and transparent methodologies when collecting user-generated evidence and thoroughly document the chain of custody (i.e., how a piece of evidence was obtained and handled); a simple sketch of one such step follows below. Experts on deepfake technology will be in high demand, and steps should be taken to clarify what makes an expert on user-generated evidence and deepfakes, and to train the next generation of experts. With those measures in place, user-generated evidence can continue to play an important role in criminal trials.
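As a minimal illustration of one element of chain-of-custody documentation – a sketch under our own assumptions rather than a prescribed workflow, with a hypothetical file name and collector – the Python snippet below records a cryptographic fingerprint of a video file at the point of collection, so that any later alteration can be detected.

    # Illustrative sketch: record a SHA-256 fingerprint of a file at collection time
    # as one element of chain-of-custody documentation. File name and collector
    # are hypothetical examples.
    import hashlib
    import json
    from datetime import datetime, timezone

    def custody_record(path: str, collected_by: str) -> dict:
        sha256 = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in chunks so large video files do not need to fit in memory.
            for chunk in iter(lambda: f.read(8192), b""):
                sha256.update(chunk)
        return {
            "file": path,
            "sha256": sha256.hexdigest(),
            "collected_by": collected_by,
            "collected_at": datetime.now(timezone.utc).isoformat(),
        }

    print(json.dumps(custody_record("airstrike_footage.mp4", "field investigator"), indent=2))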
About the authors
The authors are Rebecca Jenkins (PhD Candidate in Applied Linguistics, Swansea University), Ruben Lamers James (PhD Candidate in Psychology, Swansea University), and Anne Hausknecht (PhD Candidate in Law, Swansea University). All three are researchers on the TRUE project; you can find out more about them on the project website.
Image Credit: Clint Patterson on Unsplash