For as long as video has been around, it has played a key role in portraying the truth. Whether it’s witness mobile phone recordings, CCTV or body camera footage, video has been perceived to provide the closest representation of the truth. While there have long been ways to edit video to change its context or content, we now live in a time where entirely fictitious footage can be created from a single prompt in a matter of seconds. Courts now face fully AI-generated videos known as deepfakes, alongside manipulated or miscontextualised videos known as shallowfakes.
Recent news coverage has highlighted that courts, both state and federal, are underprepared to deal with AI-generated video evidence and require guidelines, training, and tools to defend justice and evidential integrity. In this article, we will examine the importance of video evidence, how deepfakes are already in the courtroom, and what reforms should be put in place to preserve legitimate video evidence.
Why this matters now: video is already central to modern cases
Video evidence is integral to a large majority of cases. According to a study by the University of Colorado Boulder, 80% of court cases rely on video evidence to some extent. Given this reliance, we can expect the risk of deepfake videos in courtrooms to increase. If most cases depend on video, then most cases inherit deepfake doubt.
Deepfakes have already entered the courtroom
In September 2025, a milestone case emerged out of Alameda County, CA, where the plaintiff was found to have submitted deepfake videos as witness testimony. The judge identified clear signs that the video evidence was AI-generated and issued terminating sanctions, noting that “the lack of facial expressions, the looping video feed, among other things, suggested that these exhibits were products of GenAI.”
While the material submitted by the plaintiff was clearly generated by AI, the case sets a dangerous precedent: litigants now feel confident enough to submit fabricated material in support of their claims. With the advent of new GenAI tools like Sora 2, this is no longer theoretical; it is happening right now. Moreover, GenAI is advancing at an unprecedented pace, making it ever harder to determine whether digital material is legitimate.
The “deepfake defence” and the collapse of confidence
Beyond the risk of deepfake material being submitted as fact in courtrooms, a second-order risk has emerged: genuine footage and evidence are now liable to attack, creating doubt in the justice system. This will likely lead to delays in proceedings and therefore higher costs to continue the trial. The phenomenon has been dubbed the “liar’s dividend”, whereby someone can claim genuine evidence isn’t real, blurring the line between reality and fiction.

This will impact the litigation process in numerous ways. We can expect more authentication fights, as the burden of proof shifts to demonstrating the provenance of submitted evidence. This will likely require experts, whether forensic examiners or analysts, to review the evidence, which in turn incurs cost. Jurors may struggle to keep pace with cases in which authentication is contested, and cases can now be swayed by litigators sowing seeds of doubt in jurors’ minds. This puts pressure on judges to become equipped to handle complex digital evidence and to sift truth from fiction.
What practices are needed to ensure evidential authenticity?
To remain effective in an era of deepfakes and synthetic media, the justice system must move beyond ad hoc detection and adopt reforms that embed authenticity, provenance, and resilience into everyday evidential practice.
First, judges, jurors, lawyers, and court staff should receive structured training not simply on how to spot deepfakes, but on understanding the limits of detection, the concept of evidential provenance, and the indicators of trustworthy digital capture. As deepfakes become increasingly indistinguishable from real footage, human judgement alone will be insufficient. Legal decision makers must be equipped to evaluate process and provenance, not just pixels.
Second, following the September 2025 milestone case in which deepfake material was formally submitted as evidence, clearer and more consistent standards should be established. These standards should define which forms of digital and AI-enhanced evidence are admissible, and how authenticity must be demonstrated through documented chains of custody, integrity checks, and transparent disclosure of any AI involvement in capture, enhancement, or analysis.
Third, the justice system requires modern infrastructure for the storage, management, and retrieval of evidentiary video and audio. In a future where digital evidence is routinely challenged, the evidential value of a recording will depend as much on assurance of origin and integrity as on its content. Secure, tamper-evident platforms, where footage is cryptographically sealed at source and accompanied by immutable audit trails, allow courts to verify provenance quickly and consistently. This reduces reliance on costly, case-by-case forensic analysis and enables the system to operate at scale.
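To make the idea of “cryptographically sealed at source” concrete, the sketch below shows the underlying mechanism in Python: hashing a file’s bytes at capture time and recording the digest with a timestamp. This is an illustrative example of the general technique, not a description of any particular product’s implementation; the function names and record fields are our own assumptions.

```python
import hashlib
from datetime import datetime, timezone

def seal_evidence(data: bytes, source_id: str) -> dict:
    """Produce a tamper-evidence record for captured media.

    The SHA-256 digest is fixed at capture time; any later change to the
    bytes, even a single pixel, yields a different digest, so the record
    lets a reviewer verify integrity without full forensic analysis.
    (Field names here are illustrative, not a product schema.)
    """
    return {
        "source_id": source_id,
        "sha256": hashlib.sha256(data).hexdigest(),
        "sealed_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_evidence(data: bytes, seal: dict) -> bool:
    """Re-hash the bytes and compare against the sealed digest."""
    return hashlib.sha256(data).hexdigest() == seal["sha256"]

# Example: seal a captured clip, then detect a one-byte alteration.
clip = b"...raw video bytes..."
seal = seal_evidence(clip, source_id="bodycam-07")
assert verify_evidence(clip, seal)
assert not verify_evidence(clip + b"\x00", seal)
```

In practice the digest would also be signed and anchored in an independent store (for example a blockchain or trusted timestamping service) so that the seal itself cannot be quietly replaced.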
Taken together, these reforms represent a shift from reacting to deepfakes after the fact to designing evidential processes that are resilient by default, ensuring that trustworthy digital evidence can be distinguished from fabrication efficiently, transparently, and credibly in court.
Practical Guidance: What legal teams and investigators can do now
Capture with integrity from the outset
Use evidential capture methods that seal video, audio, and images at source, applying time stamps and preserving original files to reduce later authenticity disputes.
Assume all digital evidence will be challenged
Treat provenance as a core evidential requirement. Be prepared to explain how material was captured, handled, stored, and accessed, not just what it shows.
Maintain a clear and continuous chain of custody
Document every transfer, access, export, and review of digital evidence. Avoid informal sharing, copying, or storage that could undermine confidence in integrity.
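One common way to make such a custody log tamper-evident is to hash-chain its entries, so that altering or deleting any earlier record invalidates every record after it. The following Python sketch illustrates the technique under our own simplifying assumptions; a production system would add digital signatures, access control, and secure storage.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(log: list, action: str, actor: str) -> None:
    """Append a custody event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["entry_hash"] if log else GENESIS
    entry = {"action": action, "actor": actor, "prev_hash": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def chain_intact(log: list) -> bool:
    """Recompute every link; an edited or removed entry breaks the chain."""
    prev_hash = GENESIS
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(
            {k: entry[k] for k in ("action", "actor", "prev_hash")},
            sort_keys=True,
        ).encode()
        if hashlib.sha256(payload).hexdigest() != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True

log = []
append_entry(log, "captured", "officer-12")
append_entry(log, "transferred", "evidence-clerk")
assert chain_intact(log)
log[0]["actor"] = "someone-else"   # a retroactive edit...
assert not chain_intact(log)       # ...is immediately detectable
```

The design choice here is that integrity is a property of the whole chain, not of individual records: no single entry can be rewritten without re-forging everything downstream, which is what makes informal copying and editing detectable.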
Standardise digital evidence handling practices
Apply consistent workflows for file naming, access control, storage, and audit logging to minimise procedural weaknesses that can be exploited in court.
Understand the role of deepfake claims
Develop baseline awareness of when deepfake allegations are credible and when they are being used strategically to create doubt or delay proceedings.
Prioritise provenance over detection
Do not rely solely on deepfake detection tools. Evidence that is verifiably authentic by design is easier to defend than evidence that must be analysed retrospectively.
Engage legal and technical expertise early
Involve prosecutors, investigators, and digital evidence specialists at an early stage to identify authenticity risks and prepare clear explanations for court.
So, what can the courtroom prove?
As highlighted at the beginning, video has long been seen as a definitive source of truth in courtrooms. That simply isn’t the case anymore. In the era of deepfakes, the question must shift from “does this look real?” to “can we prove the source of this evidence and what happened to it?”
Developed by Issured, MeaConnexus is a secure, tamper-evident interview platform which captures the entire audit trail for you. Offering verifiable provenance, MeaConnexus is built using blockchain technology so that you know if even a pixel has been tampered with. For images, audio, video and documents, blockchain-enabled MeaFuse offers the same tamper-evident protection, available as a mobile or desktop application. Easily track, store, and manage digital evidence from anywhere, on any device.
So the question remains: if evidential integrity becomes the differentiator, will the organisations that can prove the truth be the ones that ultimately win trust and credibility?
About Mea Digital Evidence Integrity
The Mea Digital Evidence Integrity suite of products has been developed by UK-based consultancy Issured Ltd. Benefitting from years of experience working in defence and security, Issured recognised the growing threat from digital disinformation and developed the suite to ensure digital media can be trusted.
MeaConnexus is a secure investigative interview platform designed to protect the evidential integrity of the interview content. With features designed to support and improve effective investigations, MeaConnexus can be used anytime, anywhere and on any device, with no need to download any software.
MeaFuse has been designed to protect the authenticity and integrity of any digital media from the point of capture or creation, anywhere in the world. Available on iOS, Android, Windows and macOS, MeaFuse digitally transforms the traditional chain of custody to ensure information is evidential.
Disclaimer and Copyright
The information in this article has been created using multiple sources of information. This includes our own knowledge and expertise, external reports, news articles and websites.
We have not independently verified the sources in this article, and Issured Limited assumes no responsibility for their accuracy.
This article is created for information and insight, and is not intended to be used or cited as advice.
All material produced in the article is copyrighted by Issured Limited.
Interested in Hearing More?
To receive regular updates and insights from us, follow our social media accounts on LinkedIn for Mea Digital Evidence Integrity and Issured Limited.
Additionally, sign-up to our Mea Newsletter to receive product updates, industry insights and event information directly to your mailbox. Sign up here.
View our other articles and insights here.