AI has become an inescapable topic in recent years. While it offers many positive applications, it is also being put to harmful use. Deepfakes, synthetic media generated using AI, are increasingly being used by criminals. Until recently it was easy to tell deepfakes from real media, but within the past two years deepfakes have become all but indistinguishable from genuine footage.

The issue is compounded by the increasing accessibility and sophistication of deepfake creation tools. As highlighted in Police1, ‘Unlike conventional photo or video editing, which often leaves detectable traces, advanced deepfakes leverage sophisticated machine-learning algorithms to create convincingly authentic-appearing content that can be extremely difficult to distinguish from genuine recordings.’

This problem has real-world implications for law enforcement, insurance and any sector that handles digital evidence or assets requiring verifiable provenance. In this article we delve into the evolution of deepfakes, their impact on investigations and how to prepare for a future of countering them.

Evolution of Deepfakes

Deepfakes began in entertainment and parody but soon became a tool of malicious intent, used to spread disinformation, impersonate individuals and commit fraud.

As noted in Police1, ‘The technological trajectory suggests deepfake creation will soon become increasingly accessible, requiring minimal technical expertise while producing increasingly convincing results. As authors led by Indian cybersecurity expert Mohammad Wazid noted, “The gap between deepfake creation and detection capabilities continues to widen.”’

Deepfakes are not the only threat to legitimate digital evidence: shallowfakes, content manipulated with conventional editing tools rather than generative AI, have been wreaking havoc for even longer.

Examples include doctoring images, splicing video clips together or changing the dates on a document. A shallowfake alters existing documents or media, whereas a deepfake is newly generated content.

Deepfake Use Cases in Criminal Contexts

There are many ways deepfakes can be used in the context of criminal investigations. The two most common uses are:

  1. Using deepfakes as an alibi or planted evidence
  2. Using deepfakes to commit a crime such as fraud or blackmail

Police1 suggested one scenario that could unfold: ‘Imagine investigating a gang-related shooting where all evidence points to a specific suspect, only for the defense to produce convincing, yet fake, gas station surveillance footage showing the suspect filling up their car at a location 50 miles away during the exact time of the murder.’

Deepfakes could also be used to plant evidence on a suspect or victim; because the fabricated footage is so convincing, it could completely alter the outcome of a court trial.

As mentioned earlier, deepfakes are being used to commit fraud, such as deepfake identity fraud. There are also digital injection attacks, in which fraudsters feed manipulated video or audio into live calls to present themselves as someone else.

This level of crime, and the uncertainty over how sophisticated deepfakes will become, has led to government crackdowns on their use. New legislation and regulation will certainly help deter criminals, but an arms race between deepfake creators and deepfake detection software will continue.

Deepfakes’ Impact on Investigations and Digital Evidence Integrity

Deepfakes are affecting investigations in many ways, demanding more resources to verify evidence and new tools to counter manipulation.

Deepfakes undermine trust in genuine video, audio and photo evidence, giving rise to what is known as the “liar’s dividend”: suspects can claim that authentic evidence has been doctored and that they never said or did the things it shows.

Verifying digital evidence will likely carry a growing resource burden, in both time and cost. As stated in a UK Government report, ‘Another problem faced by investigators is the high and increasing volume of data stored on devices. Processing data can take a long time, and increases in data storage and in the number of devices associated with crimes have led to increased pressure.’

Verification software and blockchain-enabled tools come at a cost, but they raise the success rate of detecting synthetic media. Solutions such as MeaConnexus or MeaFuse can counter deepfakes from the outset: as soon as the interview or evidence is captured, it is committed to the blockchain and becomes tamper-evident, so any later alteration can be detected.
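To illustrate the general principle of tamper-evidence (a simplified sketch, not a description of how MeaConnexus or MeaFuse work internally), the Python below hashes a media file at the point of capture, appends the digest to an append-only ledger file standing in for a blockchain anchor, and later re-hashes the file to verify it. The ledger layout and function names are illustrative assumptions.

```python
import hashlib
import json
import time
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Stream the file through SHA-256 so large videos never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_capture(media: Path, ledger: Path) -> dict:
    """At the point of capture, fingerprint the media and append the record to an
    append-only ledger. A production system would anchor this on a blockchain."""
    entry = {
        "file": media.name,
        "sha256": sha256_of_file(media),
        "recorded_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    with ledger.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

def verify(media: Path, ledger: Path) -> bool:
    """Re-hash the file and compare it with the capture record: any edit,
    however small, changes the digest and the check fails."""
    current = sha256_of_file(media)
    with ledger.open() as f:
        for line in f:
            entry = json.loads(line)
            if entry["file"] == media.name:
                return entry["sha256"] == current
    return False  # no capture record, so provenance cannot be established
```

Because SHA-256 is collision-resistant, even a one-pixel edit to the footage produces a completely different digest, so a successful verification is strong evidence the file is the one originally captured.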

Deepfakes threaten the integrity of common forms of digital evidence capture such as body-worn cameras (BWC), CCTV and social media or webpage screenshots. All of these are at risk of being tampered with or exploited to create false narratives. Investigators must now be able to prove the chain of custody, or provenance, of digital evidence in ways that were not required in the past; the sketch below shows one way such a chain can be made verifiable.
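A common way to make a chain of custody verifiable is to link each custody event to the hash of the one before it, so that rewriting any past event breaks every later link. The following Python is purely illustrative; the event fields and function names are assumptions, not a description of any specific product.

```python
import hashlib
import json

def add_event(chain: list[dict], actor: str, action: str, media_sha256: str) -> list[dict]:
    """Append a custody event whose hash covers the previous event's hash,
    so rewriting history invalidates every later link."""
    prev_hash = chain[-1]["event_hash"] if chain else "0" * 64
    event = {
        "actor": actor,
        "action": action,
        "media_sha256": media_sha256,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(event, sort_keys=True).encode()
    event["event_hash"] = hashlib.sha256(payload).hexdigest()
    return chain + [event]

def chain_is_intact(chain: list[dict]) -> bool:
    """Recompute every link; any edited, removed or reordered event breaks the chain."""
    prev_hash = "0" * 64
    for event in chain:
        body = {k: v for k, v in event.items() if k != "event_hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != event["event_hash"]:
            return False
        prev_hash = event["event_hash"]
    return True
```

Calling add_event for each handover (capture, upload, review, disclosure) yields a history in which chain_is_intact fails the moment any record is altered after the fact.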

Preparing for the Future to Counter Deepfakes

Deepfakes are already a threat, and at the rate generative AI is evolving it will soon be near-impossible for a human to distinguish real media from synthetic.

Investigators need training to spot the telltale differences between real and synthetic media and, more importantly, training in the new detection tools and software.

Alongside training, the evidential process will need reform to ensure verification causes little delay and that digital evidence has a verifiable source. This may require a change of culture to accommodate the move towards digital processes and tools, but crimes involving digital evidence are growing, so it pays to be proactive. In the long run, agencies and organisations will save time and money by adopting new technology that counters threats like deepfakes.

Can We Still Trust What We See and Hear?

Deepfakes have become a central topic in discussions about AI and its potential harm to society. We can expect the problem to grow as deepfake technology becomes more accessible and its output more sophisticated. By investing in solutions built to detect or even prevent deepfakes, you will save your department time and money in the long run.

We would like to leave you with this question: In a world where seeing is no longer believing, how will your organisation or agency prove what’s real?

About Mea Digital Evidence Integrity 

The Mea Digital Evidence Integrity suite of products has been developed by UK-based consultancy Issured Ltd. Benefitting from years of experience in defence and security, Issured recognised the growing threat from digital disinformation and developed the suite to ensure digital media can be trusted.
MeaConnexus is a secure investigative interview platform designed to protect the evidential integrity of the interview content. With features designed to support and improve effective investigations, MeaConnexus can be used anytime, anywhere and on any device, with no need to download any software.
MeaFuse has been designed to protect the authenticity and integrity of any digital media from the point of capture or creation, anywhere in the world. Available on iOS, Android, Windows and macOS, MeaFuse digitally transforms the traditional chain of custody to ensure information is evidential.

Disclaimer and Copyright 

The information in this article has been created using multiple sources of information. This includes our own knowledge and expertise, external reports, news articles and websites.
We have not independently verified the sources cited in this article, and Issured Limited assumes no responsibility for their accuracy.
This article is intended for information and insight; it is not intended to be used or cited as advice.
All material produced in this article is copyrighted by Issured Limited.

Interested in Hearing More? 

To receive regular updates and insights from us, follow our social media accounts on LinkedIn for Mea Digital Evidence Integrity and Issured Limited.
Additionally, sign up for our Mea Newsletter to receive product updates, industry insights and event information directly to your mailbox. Sign up here.
View our other articles and insights here.