With the release of Sora 2 in late 2025, AI-generated material can now convincingly imitate faces and voices through a free app available to the masses. While many of the videos circulating are evidently created for humour, the material Sora produces is highly realistic. Sora combines audio and visuals, and in some cases has been used to imitate body-worn camera footage or interviews, undermining visual and auditory evidence.
If this much progress has occurred in the space of a year (Sora was initially released in late 2024), the problem needs to be addressed today. We can expect this to fuel fraud, enabling new and more convincing scams built on AI-generated material. It will also affect evidentiary integrity, especially across regulated industries such as law enforcement and justice, insurance and finance. Anyone with a mobile device can create deepfake body-worn camera or CCTV footage to make it appear they were somewhere they never were.
In this article we map the threats deepfakes pose, show where identity verification currently fails, and describe the emerging techniques for detecting and preventing identity fraud.
How Deepfakes Have Changed
Deepfakes are synthetic video, audio and imagery produced by generative AI models. These models can take a person’s likeness and place it into new material, or generate entirely fictitious material.
Deepfakes are evolving rapidly. From the early Tom Cruise deepfakes to today, we are now seeing a level of realism comparable with genuine digital material. It is not only the realism but the scale at which deepfakes can be created: instead of specialised software or complex hardware, people are producing deepfakes on their mobile devices in a matter of seconds. Finally, the tools used to create deepfakes are now easier than ever to obtain and use. With these tools freely available to the public, we can expect more deepfake material to circulate, whether on social media or in regulated settings.
When it comes to deepfake identity fraud, the attack chain follows a general pattern. First, the deepfake is generated using easily accessible tools; users can import material for the generative AI model to base the deepfake on. Next comes delivery, typically by injecting the deepfake into the video stream or presenting it to the camera during identity verification. Finally, fraudsters can scale this up with automation to mount brute-force attacks against verification systems.
Where Identity Verification Fails Today
The way we verify identity needs to evolve continually as new technology and cyber threats emerge. Legacy verification signals are now failing to keep accounts and users secure. These include passwords, security questions and, in the case of deepfake identity fraud, static selfies or agent checks, all of which can now be fooled by advancing deepfake technology.
To bypass identity verification, fraudsters can exploit multiple points in the journey. One is remote onboarding or Know Your Customer (KYC) checks, where deepfake material can be injected. Another is social engineering: phishing attacks that trick users into revealing personal information, carried out with free, easily available deepfake technology that makes the criminal appear to be someone they are not.
How to Prevent Deepfake Fraud in Identity Verification
Not long ago, it was assumed that participants in remote meetings and interviews were who they said they were. As deepfakes continue to evolve, using a secure platform has become a necessity, especially for sensitive interviews. Proving content provenance and authenticity is now essential. MeaConnexus is a secure solution built on blockchain technology and backed by advanced cryptography to provide digital authenticity for all interviews and meetings hosted on the platform.
To prevent video injection attacks on a remote call, specialised tools offer anti-injection capabilities alongside liveness checks. The identity of other participants can be proved using biometric identity verification and liveness detection.
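The value of a liveness check lies in unpredictability: a pre-recorded or pre-generated deepfake cannot anticipate a random, short-lived challenge. The sketch below illustrates that principle only; the function names, challenge list and time-to-live are illustrative assumptions, not the API of any real liveness product, and the actual pose/face analysis is left out.

```python
import secrets
import time

# Illustrative challenge set; real systems use vision models to confirm
# the requested action was actually performed on camera.
CHALLENGES = ["turn head left", "turn head right", "blink twice", "smile"]

def issue_challenge(ttl_seconds=30):
    """Issue a random, short-lived liveness challenge for one session."""
    return {
        "nonce": secrets.token_hex(16),        # binds the response to this session
        "prompt": secrets.choice(CHALLENGES),  # unpredictable instruction
        "expires": time.time() + ttl_seconds,  # pre-recorded footage goes stale fast
    }

def validate_response(challenge, response_nonce, observed_action):
    """Accept only a fresh, correctly bound response to the issued prompt."""
    if time.time() > challenge["expires"]:
        return False  # too slow: possible injection of prepared footage
    if response_nonce != challenge["nonce"]:
        return False  # response not bound to this challenge (replay attempt)
    return observed_action == challenge["prompt"]
```

Because the prompt is drawn at random and expires quickly, an attacker injecting a synthetic feed must generate the correct response in real time, which raises the bar considerably compared with replaying a static deepfake clip.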
Designing Identity Verification for Evidentiary Integrity
For regulated industries that require evidentiary integrity and proof of provenance across all materials, including meetings and interviews, secure identity verification is essential.
From the capture of digital material through storage to analysis, the process should remain secure end to end. This can include cryptographic timestamps written into an audit log, and a tamper-evident trail of every change made to the material: who made it, when, and what changed.
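One common way to make such a trail tamper-evident is a hash chain: each audit entry includes the hash of the previous entry, so altering any earlier record invalidates everything after it. This is a minimal sketch of that idea, not the mechanism any particular product uses; the field names are illustrative, and a production system would use a trusted time source and signed entries.

```python
import hashlib
import json
import time

def append_entry(log, actor, action, detail):
    """Append a tamper-evident audit entry recording who, when and what,
    chained to the previous entry by its hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),  # in practice, a trusted/attested time source
        "actor": actor,
        "action": action,
        "detail": detail,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every hash; any edit to an earlier entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        payload = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        if digest != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True
```

A reviewer can run the verification at any time: if someone quietly edits who made a change or when, the recomputed hashes no longer match and the tampering is exposed.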
Beyond the chain of custody, the material should also be reviewable. Bundling all material from an interview or meeting supports internal audits, inquiries or court proceedings. These bundles can include transcripts, files, documents and the interview recording.
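A simple way to keep such a bundle reviewable is a manifest that records a cryptographic digest of every file it contains, so a later audit can confirm nothing was altered, added or removed. The sketch below assumes a plain directory of files; the function names are hypothetical and real bundles would typically also sign the manifest itself.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(bundle_dir):
    """List every file in an interview bundle (recording, transcript,
    attached documents) with its SHA-256 digest."""
    manifest = {}
    for path in sorted(Path(bundle_dir).rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(bundle_dir))
            manifest[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

def check_manifest(bundle_dir, manifest):
    """Re-hash the bundle and return the set of files that changed,
    appeared or vanished since the manifest was written."""
    current = build_manifest(bundle_dir)
    changed = {f for f in manifest if current.get(f) != manifest[f]}
    added = set(current) - set(manifest)
    return changed | added
```

An empty result from the check means the bundle handed to an inquiry or court is byte-for-byte the one that was sealed; any non-empty result names exactly which files need explaining.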
Sector-Specific Considerations for Identity Verification
While the risk of identity fraud can occur across any business or organisation, there are some key considerations certain industries should acknowledge.
Law Enforcement – when conducting interviews to gather statements or depositions, it is important that the law enforcement officers and investigators can verify the identity of the witness or victim.
Legal – Law firms need online identity verification to confirm clients, witnesses, and signatories are who they claim to be. It prevents fraud, safeguards confidential information, ensures compliance with KYC and Anti-Money Laundering regulations, and protects the integrity of digital transactions and legal documentation.
Insurance Claims – fraud is a major issue in insurance claims. At the rate deepfakes are progressing, identity fraud will become harder to detect through agent checks alone. Insurers should therefore verify claimants to ensure they are speaking with the policyholder.
Financial Services – a major regulation in finance is Know Your Customer (KYC). It requires financial institutions to verify they are dealing with the account holder, especially for significant events such as a large payment.
HR and Notarial Services – whether conducting first-round interviews for a new position or witnessing documents, it is vital to verify the identity of the participants in a video meeting. Recording these meetings in an audit-ready package also provides digital integrity.
When Seeing is No Longer Believing
Deepfakes have fundamentally changed how we interact online and how we verify ourselves. Identity can no longer rely solely on static proofs; it must be verified continuously, using cryptography, provenance and liveness.
When seeing is no longer believing, the organisations that thrive will be those that can prove, with evidence, who is on the other side.
About Mea Digital Evidence Integrity
The Mea Digital Evidence Integrity suite of products has been developed by UK-based consultancy Issured Ltd. Benefitting from years of experience in defence and security, Issured recognised the growing threat of digital disinformation and developed the suite to ensure digital media can be trusted.
MeaConnexus is a secure investigative interview platform designed to protect the evidential integrity of the interview content. With features designed to support and improve effective investigations, MeaConnexus can be used anytime, anywhere and on any device, with no need to download any software.
MeaFuse has been designed to protect the authenticity and integrity of any digital media from the point of capture or creation, anywhere in the world. Available on iOS, Android, Windows and macOS, MeaFuse digitally transforms the traditional chain of custody to ensure information is evidential.
Disclaimer and Copyright
The information in this article has been compiled from multiple sources, including our own knowledge and expertise, external reports, news articles and websites.
We have not independently verified these sources, and Issured Limited assumes no responsibility for their accuracy.
This article is provided for information and insight; it is not intended to be used or cited as advice.
All material produced in the article is copyrighted by Issured Limited.
Interested in Hearing More?
To receive regular updates and insights from us, follow our social media accounts on LinkedIn for Mea Digital Evidence Integrity and Issured Limited.
Additionally, sign-up to our Mea Newsletter to receive product updates, industry insights and event information directly to your mailbox. Sign up here.
View our other articles and insights here.
