In an age of groundbreaking technology like generative AI, it’s easy to overlook the technology we take for granted. One such technology is the biometric identity verification system.

Biometric identity verification systems use unique physical traits, such as fingerprints or facial recognition, to confirm a person’s identity. These systems compare the captured biometric data against stored records to verify authenticity.
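
At its core, that comparison step is a similarity test between feature vectors. The Python sketch below is purely illustrative: real systems derive the vectors with trained face or fingerprint encoders, and the match threshold is calibrated against false accept and reject rates rather than the arbitrary value used here.

    import numpy as np

    MATCH_THRESHOLD = 0.85  # illustrative only; production thresholds are calibrated

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # How closely two biometric feature vectors point in the same direction
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify(captured: np.ndarray, enrolled: np.ndarray) -> bool:
        # Accept the claimed identity if the fresh capture is close enough
        # to the template stored at enrolment
        return cosine_similarity(captured, enrolled) >= MATCH_THRESHOLD

    # Example with hypothetical 128-dimensional embeddings
    enrolled = np.random.rand(128)
    captured = enrolled + np.random.normal(0, 0.01, 128)  # a genuine, near-identical sample
    print(verify(captured, enrolled))  # True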

Using biometrics to unlock devices, applications, or authenticators has become a standard way to keep accounts secure and users firmly in control of their data.

We’re now seeing what began as an internet joke evolve into a powerful tool for criminal exploitation: deepfakes. Synthetic media can now be created with ease and used to deceive biometric verification systems. This has already played out in reality: an Indonesian financial institution was the victim of 1,100 deepfake attacks aimed at bypassing its loan application service.

In this article, we look at how deepfakes exploit identity verification systems, which systems are at risk, and the implications for trust and security.

Indonesia’s Financial Fraud Case

It’s worth examining the fraud case mentioned above: what was targeted, how it unfolded, and the consequences so far.

More than 1,000 fraudulent accounts were detected, along with 45 unique mobile devices used to carry out the attack. According to investigators, “The attackers obtained the victim’s ID through various illicit channels”. The criminals then used “malware, social media, and the dark web, [to] manipulate the image on the ID—altering features like clothing and hairstyle—and used the falsified photo to bypass the institution’s biometric verification systems.”

According to the fraud investigators, this had a devastating societal impact in Indonesia, and a financial impact estimated at $138.5 million.

This type of fraud is now becoming a serious threat globally. Biometric authentication systems that were once seen as secure are vulnerable to deepfake replicas of victims and to counterfeit accounts.

How Deepfakes Exploit Biometric Identity Systems

There are several biometric verification processes a user may complete before accessing a device or application. The most common are face ID, voice biometrics and video calls.

While these can form part of multi-factor authentication, today’s deepfakes can bypass them, as the Indonesian financial fraud case demonstrated.

An emerging threat is the digital injection attack, in which a criminal injects synthetic or deepfake media into a system to deceive the identity verification process and gain access. There are multiple points between the user’s device and the server where a criminal could inject the media into the data stream.
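
One countermeasure defenders use against injection is to cryptographically bind each frame to the point of capture, so media swapped in mid-stream fails validation. The sketch below is a simplified illustration, not any specific vendor’s protocol: it assumes a per-device secret provisioned at enrolment (in practice held in secure hardware) and a single-use nonce issued by the server.

    import hmac
    import hashlib

    DEVICE_KEY = b"per-device secret provisioned at enrolment"  # hypothetical

    def sign_capture(frame: bytes, nonce: bytes) -> bytes:
        # Runs inside the trusted capture component: tags the frame together
        # with the server-issued nonce, so the payload cannot be replaced
        # or replayed later in the data stream
        return hmac.new(DEVICE_KEY, nonce + frame, hashlib.sha256).digest()

    def server_accepts(frame: bytes, nonce: bytes, tag: bytes) -> bool:
        # Server side: recompute the tag and compare in constant time;
        # media injected after capture will not carry a valid tag
        expected = hmac.new(DEVICE_KEY, nonce + frame, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    nonce = b"single-use nonce from the server"
    tag = sign_capture(b"raw camera frame", nonce)
    print(server_accepts(b"raw camera frame", nonce, tag))   # True: genuine capture
    print(server_accepts(b"injected deepfake", nonce, tag))  # False: rejected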

Attackers can harvest biometric information from social media, cyber-attacks or the dark web, then create synthetic audio and video of the user to gain access to a system. With AI, attackers can fabricate artificial “liveness” to deceive verification systems and potentially other humans.
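
Liveness checks try to raise the bar with unpredictable challenge-response prompts that a pre-rendered deepfake clip cannot anticipate. The Python sketch below shows the idea only; the step that analyses the returned video and decides which action was actually performed is assumed, not shown.

    import secrets
    import time

    CHALLENGES = ["turn your head left", "blink twice", "read the digits 4 7 1 9 aloud"]

    def issue_challenge() -> tuple[str, float]:
        # Pick an unpredictable prompt and note when it was issued
        return secrets.choice(CHALLENGES), time.monotonic()

    def response_is_live(performed: str, challenge: str, issued_at: float,
                         max_seconds: float = 5.0) -> bool:
        # A genuine user complies almost immediately; rendering a matching
        # synthetic video on the fly within the window is far harder.
        # 'performed' would come from a video-analysis step not shown here.
        return performed == challenge and (time.monotonic() - issued_at) <= max_seconds

Both properties matter here: the prompt must be unpredictable and the time window tight. Challenge-response does not, by itself, stop the injection attacks described above, which is why it is typically paired with measures that bind media to the point of capture.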

Systems at Risk from Deepfake Identity Fraud

Certain systems are more vulnerable to deepfake identity fraud than others. Here are a few examples, and why they might be susceptible:

  • Interviews – remote recruitment has exploded since 2020, and with it a potential vulnerability to deepfake interview fraud.
  • Passport and visa applications – moving these applications online has left some systems susceptible to fraud, allowing attackers to steal identities.
  • Financial account creation – fraudsters can open accounts using deepfake identities built from material gathered on social media, through cyber-attacks or from the dark web.

Although not confined to a single system, deepfake identity fraud also has cross-border implications. Whether it’s fraudsters applying for benefits in another country or Organised Crime Groups (OCGs) using deepfakes to access foreign systems, attackers have many ways to enter vulnerable systems with deepfake identities.

Implications for Trust and Security

What does this mean for user security, or the trust they have in systems containing sensitive information?

The implications are potentially severe: mass identity theft and synthetic identity creation, with attackers hijacking visa and passport applications.

From an organisational perspective, compromised or misused user data will likely damage brand reputation, and could lead to financial penalties where there has been gross misconduct in safeguarding data and verification processes.

For law enforcement, these crimes are harder to trace: attackers can operate from anywhere with an internet connection, and can use VPNs or other masking methods to obscure their location further.

What to Watch in 2025 and Beyond

Deepfake identity fraud is only likely to get worse, given the rate at which generative AI is growing in both popularity and sophistication. Detection will become even harder for verification systems, so more must be done to prevent and detect deepfake identities.

This is fuelling an “arms race” between criminals and cybersecurity experts and vendors. As fraudsters develop new techniques to deceive biometric authentication, it falls to security vendors to counter them.

On the back of this, we’re likely to see new legislation or regulation around synthetic media and its creation, and possibly around ID verification. Will lawmakers be able to curb deepfake identity fraud with legislation, or will the way we verify ourselves ultimately have to change?

Finally, an emerging threat on the dark web is Deepfake-as-a-Service, where bad actors can purchase ready-made deepfakes to use for identity fraud or potentially worse.

Conclusion

We must first recognise the threat of deepfake identity fraud, then find ways to counter its impact and future-proof systems so they do not become vulnerable.

Every organisation should review its verification processes and take the necessary precautions to ensure the safety of all data it holds.

Is your organisation safe from deepfake identity fraud?

About Mea Digital Evidence Integrity 

The Mea Digital Evidence Integrity suite of products has been developed by UK-based consultancy Issured Ltd. Benefitting from years of experience working in defence and security, Issured recognised the growing threat from digital disinformation and developed the Mea Digital Evidence Integrity suite of products to ensure digital media can be trusted.
MeaConnexus is a secure investigative interview platform designed to protect the evidential integrity of the interview content. With features designed to support and improve effective investigations, MeaConnexus can be used anytime, anywhere and on any device, with no need to download any software.
MeaFuse has been designed to protect the authenticity and integrity of any digital media from the point of capture or creation anywhere in the world. Available on iOS, Android, Windows and macOS, MeaFuse digitally transforms the traditional chain of custody to ensure information is evidential.

Disclaimer and Copyright 

The information in this article has been created using multiple sources of information. This includes our own knowledge and expertise, external reports, news articles and websites.
We have not independently verified the sources in this article, and Issured Limited assumes no responsibility for the accuracy of the sources.
This article is created for information and insight, not intended to be used or cited for advice.
All material produced in the article is copyrighted by Issured Limited.

Interested in Hearing More? 

To receive regular updates and insights from us, follow our social media accounts on LinkedIn for Mea Digital Evidence Integrity and Issured Limited.
Additionally, sign up for our Mea Newsletter to receive product updates, industry insights and event information directly to your mailbox. Sign up here.
View our other articles and insights here.