
The 2024 Global Risks Report by the World Economic Forum (in partnership with Zurich Insurance and Marsh McLennan) identified misinformation and disinformation as the most severe global risk in the short term. Its impact on society will shape the way we consume news, social media and other content, as what we perceive to be true may in fact be doctored. Aside from the societal impact, disinformation has the power to change verdicts, damage investigative proceedings and create plausible deniability. In this article we will dive into the 2024 Global Risks Report, why disinformation is the most severe global risk and what can be done to mitigate its impact.

What is Misinformation and Disinformation?

First off, let’s define what these words mean. The Global Risks Report describes them as, ‘Persistent false information (deliberate or otherwise) widely spread through media networks, shifting public opinion in a significant way towards distrust in facts and authority. Includes, but is not limited to: false, imposter, manipulated and fabricated content.’ Although disinformation is often aimed at the general public, doctored information can also be deployed at an individual or private level, as we have seen in deepfake scams and phishing attacks. Earlier this year, fraudsters using deepfake technology to impersonate a company’s CFO tricked an employee into transferring millions of dollars.

Generative AI and Disinformation

The core reason why disinformation is the most severe global risk is due to the rapid advancement of generative AI, a technology capable of producing content that is indistinguishable from authentic sources. This groundbreaking technology allows people to create highly convincing media, making it increasingly challenging to separate fact from fiction. As we navigate this digital age, the lines between reality and deception blur, and the consequences are far-reaching.

In our recent article, AI and its Distortion of Reality, we discuss how fake media has become so convincing that people can now deny the authenticity of genuine media with plausible deniability, creating what is known as the ‘liar’s dividend’.

Disinformation and its Impact on Investigations

In the digital age, criminal investigations increasingly intersect with the virtual world, where disinformation thrives in the form of manipulated evidence. Cybercrime, in particular, presents unique challenges for law enforcement agencies, as criminals exploit digital platforms to plan sophisticated fraud schemes and distribute misleading information. From phishing scams to ransomware attacks, the spread of disinformation increases the complexity of cyber investigations, requiring specialised expertise and advanced technological tools to navigate the digital landscape.

In the realm of insurance fraud investigations, disinformation also plays a pivotal role in fraudulent claims. Dishonest policyholders exploit loopholes in the system, fabricating or exaggerating losses through altered evidence and misleading statements. Insurance investigators tasked with uncovering fraudulent claims face a difficult challenge in distinguishing genuine losses from fake claims. Disinformation tactics such as staged accidents, inflated damage claims, and fake medical records further complicate the investigative process, requiring thorough analysis to uncover the truth.

The Influence of Social Media

As we begin to tackle the prevalent threat of digital disinformation, it becomes essential to understand the role of social media platforms in the spread of misleading content. These platforms, initially designed to connect people and advance communication, have become powerful vectors for misinformation. The algorithms that govern our online experiences often prioritise sensational content over accuracy, creating echo chambers that reinforce pre-existing beliefs and contribute to polarisation.

Disinformation as a Security Risk

As discussed in the Global Risks Report, with technological advancements in generative AI and machine learning, we will begin to witness cybercrimes on a scale not yet seen. ‘Over the longer-term, technological advances, including in generative AI, will enable a range of non-state and state actors to access a superhuman breadth of knowledge to conceptualize and develop new tools of disruption and conflict, from malware to biological weapons.’

This has major ramifications for law enforcement and investigators: as more crime is committed online using increasingly powerful technology, a more unified effort is required to reduce and prevent it.

What is the Solution?

To prevent the spread and use of disinformation, the Global Risks Report offers suggestions that can be applied at various levels: individual, collective, local and international.

  • ‘Localized strategies leveraging investment and regulation can reduce the impact of those inevitable risks that we can prepare for, and both the public and private sector can play a key role to extend these benefits to all.’
  • ‘The collective actions of individual citizens, companies and countries may seem insignificant on their own, but at critical mass they can move the needle on global risk reduction.’

Education and Preventative Measures

To ensure individuals are equipped with the right tools and knowledge to navigate information online, media literacy programmes should be integrated into education systems and promoted by governments to build critical thinking skills. By teaching people to distinguish credible sources from suspicious ones, we can build a society that is less susceptible to those seeking to exploit the vulnerable.

Within technology, the development of advanced tools to prevent disinformation is vital. Whilst disinformation detection tools are a noble pursuit, generative AI continues to advance at breakneck speed and will always be one step ahead of them. Therefore, it is imperative to use tools which prevent disinformation from the outset. One example is securing data on a blockchain, providing a tamper-evident record (see MeaConnexus and MeaFuse).
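To illustrate the general principle, the minimal sketch below shows a hypothetical hash chain in Python: each record stores the hash of the record before it, so any later alteration becomes detectable when the chain is re-verified. This is a simplified illustration of tamper-evident record keeping in general, not the actual implementation behind MeaConnexus or MeaFuse.

    # Hypothetical sketch of a tamper-evident hash chain (illustrative only;
    # not the implementation used by MeaConnexus or MeaFuse).
    import hashlib
    import json
    import time

    def record_hash(record: dict) -> str:
        """Hash a record after serialising its fields in a stable order."""
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    def append_record(chain: list, payload: str) -> list:
        """Append a record whose hash also covers the previous record's hash."""
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        record = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
        record["hash"] = record_hash(record)
        chain.append(record)
        return chain

    def verify_chain(chain: list) -> bool:
        """Recompute every hash; any altered or reordered record breaks the chain."""
        for i, record in enumerate(chain):
            expected_prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
            body = {k: v for k, v in record.items() if k != "hash"}
            if record["prev_hash"] != expected_prev or record["hash"] != record_hash(body):
                return False
        return True

    chain = []
    append_record(chain, "interview_recording.mp4 sha256=abc123")
    append_record(chain, "transcript_v1.pdf sha256=def456")
    print(verify_chain(chain))        # True
    chain[0]["payload"] = "tampered"  # any edit invalidates the chain
    print(verify_chain(chain))        # False

Because every record is bound to its predecessor, an attacker cannot quietly alter one piece of evidence without invalidating everything recorded after it, which is what makes this kind of structure tamper evident rather than merely tamper resistant.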

Beyond the technological and educational fronts, promoting a culture of responsibility is essential. Individuals must recognise their role in shaping the internet and actively participate in the fight against misinformation. This involves questioning information, fact-checking before sharing, and advocating for transparency in online spaces.

Governments, tech companies, and individuals must unite in a mutual effort to safeguard the integrity of information and protect the foundations on which the digital world is built. Only through preventative software, proactive measures, stringent regulations, and a commitment to digital literacy can we navigate the disinformation era.

About Mea Digital Evidence Integrity 

The Mea Digital Evidence Integrity suite of products has been developed by UK-based consultancy Issured Ltd. Benefitting from years of experience working in defence and security, Issured recognised the growing threat from digital disinformation and developed the Mea Digital Evidence Integrity suite of products to ensure digital media can be trusted.
MeaConnexus is a secure investigative interview platform designed to protect the evidential integrity of the interview content. With features designed to support and improve effective investigations, MeaConnexus can be used anytime, anywhere and on any device, with no need to download any software.
MeaFuse has been designed to protect the authenticity and integrity of any digital media from the point of capture or creation anywhere in the world. Available on iOS, Android, Windows and macOS, MeaFuse digitally transforms the traditional chain of custody to ensure information is evidential.

Disclaimer and Copyright 

The information in this article has been created using multiple sources of information. This includes our own knowledge and expertise, external reports, news articles and websites.
We have not independently verified the sources in this article, and Issured Limited assumes no responsibility for their accuracy.
This article is intended for information and insight; it should not be used or cited as advice.
All material produced in the article is copyrighted by Issured Limited.

Interested in Hearing More? 

To receive regular updates and insights from us, follow our social media accounts on LinkedIn for Mea Digital Evidence Integrity and Issured Limited.
Additionally, sign-up to our Mea Newsletter to receive product updates, industry insights and event information directly to your mailbox. Sign up here.
View our other articles and insights here.