When you see a video clip shared with certainty, do you assume it is trustworthy because it looks real?
Deepfakes (AI-generated videos and voices designed to mimic real people) receive most of the public attention. Yet many of today’s most effective false narratives rely on simpler methods. Shallowfakes are manipulated media created with basic editing and selective framing: less sophisticated than deepfakes, far easier to produce, and often more dangerous because they hide inside ordinary content. This quieter form of manipulation can travel faster than anyone can correct it, and it often does so without leaving obvious traces.
In this article, we look at what shallowfakes are, their real-world impact, and how to reduce the risk they pose.
What are shallowfakes?
Shallowfakes do not usually rely on generative AI. Instead, they alter existing media using accessible tools such as photo, audio, or video editing software. These methods require little expertise, so shallowfakes can be created quickly and at scale. AI still plays a part: editing software increasingly includes AI features that make alterations more convincing and harder to detect.
Shallowfakes appear familiar: a crop, a caption, a speed change, a splice, a minor document edit. The consequences are serious when altered media is used to discredit an individual, distort events, or manufacture “evidence” in a dispute.
How shallowfakes work
Most shallowfakes fall into three forms.
Photoshopped content alters the material itself. A creator may remove elements, add new ones, change text, or crop strategically to hide context. The barrier to entry has fallen dramatically as editing tools gain built-in AI capabilities that can produce ever more convincing media.
Recontextualised content does not change the pixels, but changes the meaning. A genuine image or clip is paired with a misleading caption, date, location, or claim.
Audio and video edits change what was said or how it appears. Clips can be spliced, sections removed, audio layered, or speed adjusted. Even subtle edits can create the impression that someone is behaving a certain way or saying things they never did.
Why shallowfakes matter in a 2026 risk environment
Disinformation is no longer a fringe issue. The World Economic Forum’s Global Risks Report 2026 ranks “misinformation and disinformation” as the second most severe risk over the next two years.
Shallowfakes sit at the centre of this risk. They are a vessel for misinformation (inaccurate content shared without intent to deceive) and disinformation (content deliberately designed to mislead). Shallowfakes are cheaper than deepfakes, faster to produce, and often more believable because they start with real material. A shallowfake does not need to invent a person or a scene. It only needs to bend reality enough to create doubt, outrage, or certainty in the wrong direction.
They also exploit how we communicate on modern platforms such as social media: speed, information overload, and short attention spans. Corrections rarely spread as far or as fast as an initial, emotionally charged post. When shallowfakes become normal, a second problem emerges: genuine evidence can be dismissed as fake, and bad actors can claim “it was edited” to avoid accountability.
Why shallowfake detection is difficult
Shallowfake detection is often less about spotting technical artefacts and more about validating provenance and context. A recontextualised video may be technically authentic yet still deceptive. A cropped image can be genuine and misleading at the same time if a key part of the image is removed.
This makes life difficult for investigators, legal teams, insurers, the justice system, and anyone else who verifies media: they may be forced to assess content quickly, with limited information, under reputational pressure. In that environment, shallowfakes thrive.
Practical steps to reduce and prevent shallowfakes
A realistic response cannot rely on detection tools alone, because many shallowfakes are context-based. It must combine verification processes with prevention.
For individuals:
- Pause before sharing. Strong emotional reactions are a cue to verify.
- Look for the original source and full version, not only reposts.
- Check context by comparing multiple reputable reports and prior versions of the content.
- Watch for edit patterns such as abrupt cuts, inconsistent timing, or mismatched audio.
For organisations:
- Establish a verification workflow. Define who assesses media, what checks they perform (source tracing, cross-referencing, reverse search and forensic review where appropriate), and what evidence threshold is required before action is taken.
- Treat digital media as evidence, not as an attachment. Preserve originals, document provenance, and maintain version control. Screenshots and forwarded clips are weak forms of evidence because they strip context and metadata.
- Strengthen prevention for high-stakes content. Where operations depend on digital media (claims, investigations, interviews, compliance), capture and store content in ways that create an auditable record from the point of creation, using tamper-evident tools such as MeaConnexus or MeaFuse.
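To illustrate the idea behind an auditable record from the point of creation, the sketch below hashes a media file when it is captured and logs the digest alongside a timestamp; any later edit, even a single changed byte, produces a different hash and fails verification. This is a minimal, hypothetical example of tamper-evident record-keeping in general, not a description of how MeaConnexus or MeaFuse work; the function names and log format are invented for illustration.

```python
import hashlib
import json
import time
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def record_capture(media: Path, log: Path) -> dict:
    """Append a provenance entry (filename, hash, UTC timestamp) to a JSON audit log."""
    entry = {
        "file": media.name,
        "sha256": sha256_of(media),
        "captured_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    entries = json.loads(log.read_text()) if log.exists() else []
    entries.append(entry)
    log.write_text(json.dumps(entries, indent=2))
    return entry

def verify(media: Path, log: Path) -> bool:
    """Check the file's current hash against the hash recorded at capture."""
    entries = json.loads(log.read_text())
    recorded = next((e for e in entries if e["file"] == media.name), None)
    return recorded is not None and recorded["sha256"] == sha256_of(media)
```

A real deployment would also need to protect the log itself (for example by signing entries or anchoring them externally), since a plain file can be edited just as easily as the media it describes.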
When the smallest edit makes the biggest difference
Shallowfakes are not a future threat waiting to arrive. They are already embedded in how misinformation spreads, how reputations are attacked, and how fraudulent claims are supported. In a world where misinformation and disinformation are ranked among the most severe near-term risks, the quiet manipulations deserve the same attention as the dramatic ones.
If the smallest edit can change what people believe, what will you rely on to prove what is real when it matters most?
About Mea Digital Evidence Integrity
The Mea Digital Evidence Integrity suite of products has been developed by UK-based consultancy Issured Ltd. Drawing on years of experience in defence and security, Issured recognised the growing threat from digital disinformation and developed the suite to ensure digital media can be trusted.
MeaConnexus is a secure investigative interview platform designed to protect the evidential integrity of the interview content. With features designed to support and improve effective investigations, MeaConnexus can be used anytime, anywhere and on any device, with no need to download any software.
MeaFuse has been designed to protect the authenticity and integrity of any digital media from the point of capture or creation anywhere in the world. Available on iOS, Android, Windows and macOS, MeaFuse digitally transforms the traditional chain of custody to ensure information is evidential.
Disclaimer and Copyright
The information in this article has been created using multiple sources of information. This includes our own knowledge and expertise, external reports, news articles and websites.
We have not independently verified the sources in this article, and Issured Limited assume no responsibility for the accuracy of the sources.
This article is created for information and insight; it is not intended to be used or cited as advice.
All material produced in the article is copyrighted by Issured Limited.
Interested in Hearing More?
To receive regular updates and insights from us, follow our social media accounts on LinkedIn for Mea Digital Evidence Integrity and Issured Limited.
Additionally, sign-up to our Mea Newsletter to receive product updates, industry insights and event information directly to your mailbox. Sign up here.
View our other articles and insights here