
If Truth be Told: AI and its Distortion of Reality

As reported in a recent Washington Post article (AI is destabilizing ‘the concept of truth itself’ in 2024 election), we have already seen instances this year of people dismissing genuine media by claiming it is fabricated. This response would not have been credible a few years ago, but the power of deepfake and generative AI technology now gives a person plausible deniability against almost any allegation. This has placed us in a grey area, where claims can be refuted with no definitive truth. This article discusses how doctored media can distort reality, how AI enables plausible deniability, the impact on investigations and the chain of custody, and finally the methods to combat misinformation.

Distorting Reality and Eroding Trust

One of the most significant impacts of fabricated media is their ability to distort reality and manipulate public perception. Individuals with malicious intent can use this AI technology to create media that appear genuine, featuring fabricated scenarios, statements, or actions. As a result, public figures, politicians, and even everyday individuals may find themselves at the mercy of false narratives.

One of many synthetic media examples involves Keir Starmer, leader of the Labour Party. On the first day of the Labour Party conference in October 2023, fabricated audio was released that purported to capture Starmer verbally abusing members of his staff.

Most recently, President Biden was the target of AI-generated robocalls telling voters not to vote in a primary election, an early case of AI-enabled voter suppression in the lead-up to the 2024 presidential election. This raises serious concerns and has led to calls for more laws and legislation to protect individuals from deepfake media.

The consequences of such manipulation are extreme, as trust in information sources is eroded. With the spread of fake media, discerning truth from fiction becomes an increasingly challenging task. This erosion of trust not only affects individuals but also has broader implications, including a potential decline in public confidence in institutions and authorities.

Plausible Deniability: A Dangerous Consequence

Fabricated media has introduced a new dimension to plausible deniability, allowing people to distance themselves from their actions by claiming the content is fabricated. This trend poses a severe threat to accountability and the concept of truth, as individuals can exploit the uncertainty surrounding the authenticity of content to avoid consequences.

One example already this year features Donald Trump dismissing an advert as AI-generated. Aired on Fox News, the advert showed Trump struggling to pronounce words like ‘anonymous’, footage he later denounced as fabricated media.

As discussed in the Washington Post article, AI creates what is known as a “liar’s dividend”: because convincing fakes exist, liars can dismiss genuine evidence as fabricated, and the truth becomes unclear.

In legal contexts, the use of deepfakes complicates matters for law enforcement and legal professionals. The plausible deniability of manipulated media challenges the traditional reliance on visual evidence, introducing a level of scepticism that can obstruct the pursuit of justice and undermine the chain of custody.

Impact on Law Enforcement Investigations

Law enforcement agencies are grappling with the challenges posed by the rise of false media in criminal investigations. The FBI Cyber Division has warned that criminals were expected to use synthetic media in targeted ‘spear phishing’ attacks within 12 to 18 months of its advisory.

Furthermore, the use of altered media to create alibis or false narratives can obstruct the investigative process. Criminals and their legal representation may exploit the plausible deniability introduced by generative AI and deepfake technology to cast doubt on their involvement in a crime by challenging the authenticity of the prosecution’s digital evidence. As digital evidence becomes increasingly important in the justice process, plausible deniability introduces an element of doubt unless the chain of custody can be proven.
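The chain-of-custody idea can be illustrated with cryptographic hashing: if a file’s hash is recorded the moment evidence is captured, any later alteration becomes detectable, and hashing each log entry together with the previous one makes the log itself tamper-evident. The sketch below is a minimal illustration of that principle in Python, not any agency’s actual procedure; the handler names and log fields are invented for the example.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def record_custody(evidence: bytes, handler: str, log: list) -> None:
    """Append a custody entry hashing the evidence and linking to the previous entry."""
    prev = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "handler": handler,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "evidence_hash": sha256_of(evidence),
        "prev_entry_hash": prev,
    }
    # Hash the entry itself so tampering with the log is also detectable.
    entry["entry_hash"] = sha256_of(json.dumps(entry, sort_keys=True).encode())
    log.append(entry)

def verify(evidence: bytes, log: list) -> bool:
    """Check the evidence matches every recorded hash and the log entries link up."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if sha256_of(json.dumps(body, sort_keys=True).encode()) != entry["entry_hash"]:
            return False  # the log entry itself was altered
        if entry["prev_entry_hash"] != prev or entry["evidence_hash"] != sha256_of(evidence):
            return False  # broken chain link, or the evidence was modified
        prev = entry["entry_hash"]
    return True

log = []
original = b"raw interview recording"
record_custody(original, "Officer A", log)
record_custody(original, "Forensic Lab", log)
print(verify(original, log))           # True: evidence unchanged since capture
print(verify(b"doctored audio", log))  # False: any edit breaks the recorded hashes
```

Because each entry folds in the hash of the one before it, an attacker cannot quietly swap the evidence or rewrite history without invalidating every subsequent entry.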


Insurance Investigations and Misinformation

The insurance industry is not immune to the far-reaching implications of deepfakes. As insurers rely heavily on evidence, including visual documentation, to assess claims, the potential for fraudulent manipulation through synthetic media poses a significant threat. Individuals seeking to exploit insurance policies may resort to using manipulated media to support false claims, making it challenging for insurance investigators to discern fact from fiction.

As previously reported, voicemail impersonation is occurring, with bad actors posing as trusted contacts to request invoices or payments. This has dangerous implications for cyber insurance, especially now that AI destabilises the truth and undermines digital media integrity.


This new frontier of deception complicates the claims process and may result in increased costs for insurance companies. To mitigate these risks, the industry must adapt by incorporating advanced authentication measures and employing AI-driven tools to detect potential deepfake content within claims submissions.

Combating Disinformation and Finding Truth

As the prevalence of deepfakes continues to rise, there is an urgent need for a multi-faceted approach to tackle this issue. Technological advancements in AI-based detection tools are essential to identify manipulated content effectively. Additionally, education at all levels and raising awareness about the existence and potential impact of doctored media can help individuals critically evaluate the information they encounter.

Aviv Ovadya, a Harvard University affiliate and AI expert, said on the subject of tech companies preventing fake media: “They could watermark audio to create a digital fingerprint or join a coalition meant to prevent the spreading of misleading information online by developing technical standards that establish the origins of media content.”
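The digital-fingerprint idea can be sketched with a keyed signature: a capture device or publisher holding a secret key signs the media’s hash together with an origin label, and the signature can later confirm both where the content came from and that it has not been altered. Below is a minimal illustration using Python’s standard hmac module; the key handling and manifest fields are hypothetical, and real provenance standards (such as those the coalition in the quote develops) are considerably more involved.

```python
import hashlib
import hmac
import json

# Illustrative shared secret; real systems would use asymmetric signing keys.
SECRET_KEY = b"device-secret-key"

def fingerprint_media(media: bytes, origin: str) -> dict:
    """Build a signed provenance manifest binding the media's hash to its origin."""
    manifest = {
        "origin": origin,
        "sha256": hashlib.sha256(media).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_media(media: bytes, manifest: dict) -> bool:
    """Recompute the signature; any change to the media or manifest fails the check."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(media).hexdigest() != body.get("sha256"):
        return False  # media no longer matches the fingerprinted original
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

clip = b"raw audio bytes from the recorder"
manifest = fingerprint_media(clip, origin="newsroom-recorder-01")
print(verify_media(clip, manifest))               # True: origin and integrity confirmed
print(verify_media(b"deepfaked clip", manifest))  # False: the hash no longer matches
```

The key point is that a fake cannot carry a valid fingerprint: without the signing key, an attacker can neither forge a manifest for doctored content nor edit the original without breaking the signature.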

The impact of people using deepfakes and AI as a scapegoat to distort reality and achieve plausible deniability echoes across society, affecting trust, accountability, and various sectors, including law enforcement, justice, and professional investigations. As technology continues to advance, an effort must be made to ensure that information captured at source is tamper-evident and that its integrity can be assured, to withstand challenges to its authenticity.

Get in touch today if you would like to protect the authenticity and integrity of your digital assets, and demonstrate the digital chain of custody.