
With the ever-increasing use and capability of Artificial Intelligence, the phenomenon of ‘deepfakes’ has become an increasingly prominent and, at times, controversial topic. As we begin to look ahead to 2025, the world continues to grapple with the complex implications of deepfake technology, data manipulation and disinformation. This article aims to shed light on what deepfakes are, chart their growth, explore their impact on society, and consider ways to mitigate the threats they pose.

What are Deepfakes?

Deepfakes are synthetic media, often in the form of videos, audio or images, generated through artificial intelligence (AI) and deep learning algorithms. These algorithms use vast datasets to alter or replace existing content, seamlessly overlaying one person’s likeness onto another. A similar, but perhaps less widely recognised, form of deceptive media is the shallowfake, where media is altered with simple editing tools rather than AI. This process has reached a level of sophistication where it can be challenging to differentiate between genuine and fake content, fuelling disinformation.


The Growth of Deepfakes

Over the past few years, the growth of deepfake material has been exponential. Onfido’s 2024 Identity Fraud Report found a 3,000% increase in deepfakes online and a 500% increase in digitally forged identities. The report attributes this surge to the growing use of AI and the ready availability of deepfake applications.

Deepfake Perception

A recent study by University College London, involving 529 participants, found that 27% could not differentiate between real and deepfake audio recordings. The authors of the study stated, “As speech synthesis algorithms improve and become more realistic, we can expect the detection task to become harder.”


Impacts on Society

The rise of deepfakes poses significant challenges across society, from the spread of misinformation to the corporate world. Here are some key ways in which deepfakes are already impacting our lives:

  1. Misinformation and Fake News

The rapid spread of deepfakes online worsens the already prevalent issue of misinformation. A study conducted by the University of Baltimore and cybersecurity firm CHEQ found that fake news costs the global economy $78 billion annually.

  2. Impact on Justice

As we have previously written in our article on digital false evidence, deepfakes can have a profound impact on the justice system through their ability to manipulate evidence. In that article, we discuss one of the most notable uses of false evidence in court: deepfake audio used in a child custody battle to discredit the father.

  3. Privacy Concerns

Deepfakes can be used for harmful purposes, such as creating fake content featuring individuals without their consent. A notable example is the 2023 actors’ strike, during which actors protested against the use of AI and deepfakes to replicate their likenesses without permission. This raises serious privacy concerns and has led to calls for stronger legislation to protect individuals from deepfake media.

  4. Impact on Business

The corporate world is not exempt from the influence of deepfakes. Just this year we have seen one of the largest deepfake scams to date, in which a Hong Kong firm lost $25 million after a deepfake video conference call convinced an unsuspecting employee to transfer funds to fraudsters. Businesses continue to face new challenges in protecting their reputation and maintaining trust.

Mitigating the Impact of Deepfakes

Addressing the challenges posed by deepfakes requires a multi-faceted approach involving technological advancements, legislation, and public awareness. Some potential strategies include:

Prevention and Detection Software

Invest in developing and deploying more advanced deepfake detection and prevention technologies that can identify and filter out synthetic content on platforms such as social media and news sites.
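
As a simple illustration of where such detection could sit in an upload pipeline, the Python sketch below scores incoming media and routes it to publication, human review or rejection. The scoring function, thresholds and names are hypothetical placeholders rather than a real detection model.

```python
# A minimal, hypothetical sketch of an automated screening step a platform
# might run on uploaded media. The scoring model is a placeholder: a real
# system would plug in a trained deepfake detector at that point.
from dataclasses import dataclass


@dataclass
class ScreeningResult:
    path: str
    synthetic_score: float  # 0.0 = likely genuine, 1.0 = likely synthetic
    action: str             # "publish", "human_review" or "block"


def score_media(path: str) -> float:
    """Placeholder for a trained deepfake detector returning the estimated
    probability that the media is synthetic (hypothetical, not a real model)."""
    raise NotImplementedError("plug in a real detection model here")


def screen_upload(path: str,
                  block_threshold: float = 0.9,
                  review_threshold: float = 0.6) -> ScreeningResult:
    score = score_media(path)
    if score >= block_threshold:
        action = "block"          # confidently synthetic: reject the upload
    elif score >= review_threshold:
        action = "human_review"   # uncertain: route to a human moderator
    else:
        action = "publish"        # likely genuine: allow through
    return ScreeningResult(path, score, action)
```

In practice the thresholds would be tuned to balance missed deepfakes against false alarms, with borderline cases escalated to human moderators rather than decided automatically.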

Education and Awareness

Educate the public about the existence of deepfakes and the potential risks associated with them. This could involve teaching people about the tools available to detect deepfakes, or encouraging them to rely on official media channels. Within businesses, employees can take awareness and training courses on the impacts of deepfakes to help protect personal and professional reputations.

Increased awareness can contribute to a more perceptive audience less prone to manipulation.

Regulation and Legislation

Governments and regulatory bodies should actively work on creating, updating and implementing legislation that addresses the spread of deepfakes and establishes clear consequences for those who misuse the technology. One example is the UK Online Safety Act, which was updated to better protect victims of deepfake abuse. The United States Department of the Treasury has also released a report on cybersecurity in the financial sector, concluding that defence and detection tools need to be deployed to counter disinformation.

Because deepfake threats are universal in nature, collaboration is also needed at the international level. Sharing information, best practices, and technological advancements among countries can create stronger protection against the misuse of deepfakes.


Transparency in Content Creation

Social media platforms should focus on transparency in content creation, making it easier for users to verify the authenticity of the media they encounter. This may involve watermarking, other forms of certification, or the use of third-party validation tools for media integrity.
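
As a simple illustration of the integrity-checking idea behind such certification, the Python sketch below computes and verifies a SHA-256 fingerprint for a media file. It assumes the publisher records the fingerprint at the point of capture, and it omits the signing and secure storage a real system would need.

```python
# A minimal sketch of hash-based media integrity checking, assuming the
# publisher records a SHA-256 fingerprint at the point of capture. A full
# system would also sign the fingerprint and store it in a tamper-evident
# log; this example covers only the hashing and comparison steps.
import hashlib


def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a media file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(path: str, published_fingerprint: str) -> bool:
    """True only if the file still matches the fingerprint recorded at
    capture time; any edit to the media changes the digest."""
    return fingerprint(path) == published_fingerprint
```

Because any alteration to the file changes its digest, a viewer who trusts the published fingerprint can quickly tell whether the copy they are seeing has been modified since capture.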

The Future of Deepfakes

In the foreseeable future, deepfakes will continue to evolve and become more sophisticated, raising concerns over their societal influence. As AI algorithms advance, the line between authentic and manipulated content blurs, with implications for people, privacy, and trust.

As deepfakes become more convincing, there is a growing risk of a loss of trust in digital content. People may become sceptical of the authenticity of any video or image, leading to a general atmosphere of doubt. This can have major implications in high-stakes fields such as law enforcement and justice, where evidential integrity is paramount.

Gartner predicts that by 2026, 30% of organisations will consider biometric authentication unreliable in isolation due to the advancement of deepfakes. To counter this, the World Economic Forum published an article on ways to future-proof against deepfakes. One recommendation was a ‘zero-trust’ mindset: nothing is trusted by default, and constant verification and scepticism are required from users.

The growth of deepfake technology, evident in the surge of shallowfake and deepfake content, highlights the urgent need for sophisticated detection software and legislation. Striking a balance between innovation and safeguarding against harmful misuse will be essential, requiring a collaborative effort from technology developers, lawmakers, and the public to scrutinise synthetic media. By doing so, we can collectively work towards mitigating the negative impacts of deepfakes and preserving the integrity of our digital world.

About Mea Digital Evidence Integrity 

The Mea Digital Evidence Integrity suite of products has been developed by UK-based consultancy Issured Ltd. Benefitting from years of experience working in defence and security, Issured recognised the growing threat from digital disinformation and developed the Mea Digital Evidence Integrity suite of products to ensure digital media can be trusted.
MeaConnexus is a secure investigative interview platform designed to protect the evidential integrity of the interview content. With features designed to support and improve effective investigations, MeaConnexus can be used anytime, anywhere and on any device, with no need to download any software.
MeaFuse has been designed to protect the authenticity and integrity of any digital media from the point of capture or creation anywhere in the world. Available on iOS, Android, Windows and macOS, MeaFuse digitally transforms the traditional chain of custody to ensure information is evidential.

Disclaimer and Copyright 

The information in this article has been drawn from multiple sources, including our own knowledge and expertise, external reports, news articles and websites.
We have not independently verified the sources in this article, and Issured Limited assumes no responsibility for their accuracy.
This article has been created for information and insight; it is not intended to be used or cited as advice.
All material produced in this article is copyrighted by Issured Limited.

Interested in Hearing More? 

To receive regular updates and insights from us, follow our social media accounts on LinkedIn for Mea Digital Evidence Integrity and Issured Limited.
Additionally, sign up to our Mea Newsletter to receive product updates, industry insights and event information directly to your mailbox. Sign up here.
View our other articles and insights here.