A criminal trial has been underway for two weeks, with high stakes for the defendant. The outlook appears bleak until new CCTV footage emerges, placing the defendant in a different location at the time of the alleged offence. The footage appears to show the defendant speaking with others before driving away. The defence argues the video clears the defendant. The prosecution claims it has been fabricated. Who do you believe?
The World Economic Forum’s Global Risks Report 2026 has just been published, and with it some stark predictions. The scene in this article’s introduction is precisely the future the report highlights: disinformation and the adverse outcomes of AI. Deepfakes may make it impossible to distinguish fact from fiction, and at the pace generative AI is advancing, that point could arrive sooner than expected. In this article we examine the short-term impact of disinformation, why the fastest-growing threat is adverse outcomes of AI, and what this means for evidential integrity.
Misinformation and Disinformation
Highlighted as one of the top short-term risks, misinformation and disinformation ranks second, behind only geoeconomic confrontation. The report describes how new technological developments are creating fresh opportunities and opening new markets, but at the cost of new risks, chief among them threats to information integrity and disruption to labour markets.
Figure: Global risks ranked by severity over the short term (2 years). Source: Global Risks Report 2026
At a time when political, societal and economic divides are becoming more volatile, disinformation and misinformation are amplifying them to the point where trust is no longer a given.
Trust in media, government and organisations is at risk from the spread of disinformation and misinformation. Public concern is rising about how to tell real news from fake online, whether on social media or in online publications. At the same time, deepfakes are becoming easier, cheaper and more convincing to produce, driven by generative tools available to the masses such as Sora 2 and Midjourney.
The Fastest-Rising Threat: Adverse Outcomes of AI
AI has exploded onto the world stage in the last five years. While it has delivered numerous benefits and positive uses, like all new technology it carries inherent risks. As the report states, AI has shifted from a frontier technology to a global force shaping organisations, economies, governments and society. The graph below shows the projected size of the AI market over the next seven years.
Figure: Global AI market size. Source: Global Risks Report 2026
The most astounding prediction in the Global Risks Report is that the risk of adverse outcomes of AI shows the sharpest rise of any risk between the short term and the long term, climbing from #30 in the two-year outlook to #5 in the ten-year outlook. When you consider the sophistication of today’s generative AI and what it can already produce, where will we be with this technology in ten years? It may well be impossible for society to distinguish fact from fiction.
What becomes of society when the truth can be manipulated and real evidence can be dismissed as fake? The consequences will be severe, with trials, scandals and disputes hinging on the authenticity of evidence and the legitimacy of its source. Deepfakes will be presented as genuine while genuine footage is dismissed as deepfake. This is the “liar’s dividend”: the truth can be blurred simply by suggesting that inconvenient evidence was generated using AI.
Deepfakes and the Collapse of Visual Proof
Generative AI has multiple entry points through which to make its mark on visual evidence: before, during, or after collection.
Before video evidence is even collected and reviewed, manipulation may already have occurred. AI-generated material is seeded online, and reposting and forwarding can easily strip the video of any context.
During collection, attackers can impersonate others on live video using digital injection attacks, a technique known as deepfake identity fraud. We’ve discussed digital injection attacks a great deal, and we’ve seen their impact: one company lost $25m to a deepfake that impersonated its CFO.
After collection, real media can be spliced or edited with generative AI to change context, remove key moments or add elements that are entirely fabricated.
From “Looks Real” to “Can Be Proven”
If there are vulnerable entry points for manipulation before, during and after video evidence is captured and reviewed, how do you know video material is real? The differentiator for media going forward is evidential integrity and provenance: a defensible origin for every video recording and piece of digital evidence, proof of handling, and a complete change history.
What does this mean in practice? For those who need to capture video or digital evidence and prove its legitimacy, here are a few key methods for ensuring provenance (a minimal sketch of an independent integrity check follows the list):
- Capture at the source where possible, instead of using second- or third-hand material
- Use solutions with tamper-evident packaging (including metadata) at the point of capture
- Maintain an immutable audit trail detailing who has accessed, exported, shared or edited the media
- Apply version control with a clear trail back to the original if changes are made
- Ensure the integrity of video evidence can be verified independently at any point
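To make the last point concrete, here is a minimal sketch of how an independent integrity check can work: hash the file at the point of capture, seal the hash and capture metadata in a small manifest, and let anyone re-hash the file later to confirm nothing has changed. The file name and manifest fields are illustrative assumptions, not a description of any particular product.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large videos never sit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def seal(video: Path) -> dict:
    """Record the hash and capture metadata at the point of capture."""
    return {
        "file": video.name,
        "sha256": sha256_of(video),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

def verify(video: Path, manifest: dict) -> bool:
    """Anyone holding the manifest can independently re-hash and compare."""
    return sha256_of(video) == manifest["sha256"]

if __name__ == "__main__":
    video = Path("interview.mp4")          # hypothetical capture output
    video.write_bytes(b"\x00demo-bytes")   # stand-in for real footage
    manifest = seal(video)
    print(json.dumps(manifest, indent=2))
    assert verify(video, manifest)         # untouched file: hashes match
    video.write_bytes(b"\x00tampered")     # any edit changes the hash...
    assert not verify(video, manifest)     # ...and verification fails
```

On its own, a manifest proves nothing unless it is protected too; in practice it would be signed or anchored externally (for example, on a blockchain) so it cannot be quietly replaced alongside the file.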
Secure Interviewing and Digital Evidence in an Age of Synthetic Media
How do you ensure the digital integrity of video meetings and interviews hosted online? Secure interviewing platforms such as MeaConnexus offer capabilities that reduce disputes over authenticity, using blockchain technology to retain a tamper-evident seal around the interview.
Secure interviewing offers controlled access and authenticated participants, so you know you’re speaking with a real person. These platforms provide a “video interview package” comprising the full session recording, its metadata and an audit trail detailing what has changed, who changed it and when. It’s essential to demonstrate a clear chain of custody for video evidence used in sensitive environments where provenance is required. The sketch below shows the general idea behind a tamper-evident audit trail.
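As a simplified, hypothetical illustration (not MeaConnexus’s actual implementation), the following hash chain shows why such an audit trail is tamper-evident: each entry’s hash incorporates the hash of the entry before it, so altering any historical record invalidates everything that follows.

```python
import hashlib
import json
from datetime import datetime, timezone

def entry_hash(prev_hash: str, entry: dict) -> str:
    """Each entry's hash covers the previous hash, chaining the log together."""
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class AuditTrail:
    """Append-only log: editing any past entry breaks every later hash."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, actor: str, action: str) -> None:
        entry = {
            "actor": actor,
            "action": action,
            "at": datetime.now(timezone.utc).isoformat(),
        }
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        self.entries.append({**entry, "hash": entry_hash(prev, entry)})

    def verify(self) -> bool:
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["hash"] != entry_hash(prev, body):
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.append("j.smith", "exported recording")    # names are illustrative
trail.append("a.jones", "shared with counsel")
assert trail.verify()                  # intact chain verifies
trail.entries[0]["action"] = "edited"  # rewrite history...
assert not trail.verify()              # ...and the chain no longer verifies
```

Anchoring the most recent hash somewhere external, such as a public blockchain, stops an attacker from simply recomputing the whole chain after tampering.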
Beyond secure interviewing, all digital evidence should be protected in this age of synthetic media, using tools such as MeaFuse. To make provenance provable, a centralised evidence platform that provides a clear audit trail with tamper-evident integrity checks is crucial.
The Question That Now Defines Trust
As the Global Risks Report 2026 demonstrates, in this era of deepfakes credibility comes less from what a video clip shows and more from whether its provenance can stand up to scrutiny. Misinformation and disinformation have been, and continue to be, a top global risk, but adverse outcomes of AI is the real rising threat. We can expect AI to exacerbate the effects of disinformation, making “proof of authenticity” an essential part of our digital experience.
If any piece of digital evidence can be plausibly denied as synthetic, or plausibly fabricated to look real, what will your organisation rely on to prove authenticity when it matters most?
About Mea Digital Evidence Integrity
The Mea Digital Evidence Integrity suite of products has been developed by UK-based consultancy Issured Ltd. Benefitting from years of experience in defence and security, Issured recognised the growing threat from digital disinformation and developed the suite to ensure digital media can be trusted.
MeaConnexus is a secure investigative interview platform designed to protect the evidential integrity of the interview content. With features designed to support and improve effective investigations, MeaConnexus can be used anytime, anywhere and on any device, with no need to download any software.
MeaFuse has been designed to protect the authenticity and integrity of any digital media from the point of capture or creation, anywhere in the world. Available on iOS, Android, Windows and macOS, MeaFuse digitally transforms the traditional chain of custody to ensure information remains evidential.
Disclaimer and Copyright
The information in this article has been compiled from multiple sources, including our own knowledge and expertise, external reports, news articles and websites.
We have not independently verified these sources, and Issured Limited assumes no responsibility for their accuracy.
This article is intended for information and insight; it should not be used or cited as advice.
All material produced in the article is copyrighted by Issured Limited.
Interested in Hearing More?
To receive regular updates and insights from us, follow our social media accounts on LinkedIn for Mea Digital Evidence Integrity and Issured Limited.
Additionally, sign up to our Mea Newsletter to receive product updates, industry insights and event information directly to your mailbox. Sign up here.
View our other articles and insights here.
