
Shallowfakes: What Are They and How Do They Work?

In an era dominated by digital innovation, the rise of deepfakes has gained significant attention for their ability to manipulate media. However, lurking in the shadows is a less sophisticated yet equally deceptive sibling: the shallowfake. Widespread and hidden in plain sight, shallowfakes are much easier to create, posing a significant threat to digital media and information integrity. This article explores what shallowfakes are, the forms they take, real-life instances, and strategies to identify and combat them.

What are Shallowfakes?

Shallowfakes, unlike their more advanced counterparts (deepfakes), don’t use deep learning algorithms. Instead, they are manipulated media created with easily accessible tools, such as basic video, photo, or audio editing software. While deepfakes use artificial intelligence to blend and overlay faces, voices, or actions onto existing footage, shallowfakes rely on basic editing that requires little expertise. Deepfakes are a modern problem, made possible only by recent advances in generative AI. In contrast, shallowfakes have existed for as long as basic editing tools have been available on computers.

Forms of Shallowfakes

Photoshopped: This type of shallowfake uses photo editing tools to alter the media, for example by superimposing another image on top, cropping the image, or removing elements from it.

Recontextualised: This shallowfake doesn’t change the media itself, but presents it with a caption or description that alters its context.

Audio/video edited: Similar to photo editing, but using audio or video editing software. This can entail splicing clips together, chopping up audio, or adding fabricated audio over video footage. For example, in a recorded interview, a person’s positive response to a question can easily be replaced with a negative one. This small adjustment can drastically alter the context of the video or audio, with potentially harmful implications.

[Image: Shallowfakes - editing images]

Shallowfakes in Action

As more and more media is shared digitally, from invoices to social media photos, the opportunities grow for shallowfakes to make their way into the public eye.

The applications of shallowfakes are diverse, ranging from innocent social media usage to more harmful purposes like misinformation campaigns. One common method involves recontextualising videos, leading to fake narratives. Shallowfakes may also involve simple audio manipulations, like altering the tone or content of a speech to convey a message contrary to the original intent.

A well-documented shallowfake case involved US politician Nancy Pelosi appearing intoxicated during an interview. This was debunked by comparing the original footage with the shallowfake, which showed that the video and audio had been slowed down to make Pelosi seem to slur her words.

Another notorious shallowfake case involved a CNN reporter in a supposedly heated exchange with President Trump. The reporter’s press credentials were revoked after a video emerged appearing to show him aggressively taking back the microphone while asking the president a question. However, the video had been doctored and sped up to look more violent, and as a result the reporter’s White House pass was reinstated.

Insurance fraud continues to pose a serious threat to the industry, increasing by 73% in 2021. Fraudsters can commit insurance fraud by amending documents to change payment amounts or alter dates; with simple photo editing, they can crop out information or replace text or imagery. In insurance fraud, shallowfakes typically fall into two categories:

  • Supporting evidence (invoices, photos, relevant documentation etc.)
  • Proof of identity (passport, driver’s license, bank statements etc.)

Detecting and Preventing Shallowfakes

As shallowfakes become more prevalent, it is vital that individuals understand what shallowfakes are and use the tools available to identify fake media and mitigate its impact.

Pay close attention to the details in the media. Shallowfakes often show inconsistencies in lighting, shadows, or facial expressions that may not line up with the surrounding environment. As well as visual details, shallowfake audio may sound unnatural. Listen out for unusual pauses, abrupt shifts in tone, or mispronunciations that may indicate tampering.
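One widely used way to surface such photographic inconsistencies is error level analysis (ELA): a JPEG is re-saved at a known quality, and regions that recompress unevenly (often locally edited areas) stand out in the difference image. Below is a minimal sketch in Python using the Pillow library; the file names and the quality and scaling parameters are illustrative assumptions, and ELA output always needs human interpretation.

```python
# Minimal error level analysis (ELA) sketch using Pillow.
# Bright regions in the output recompress differently from their
# surroundings, which can hint at local edits. File names and the
# quality parameter are hypothetical.
import io

from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return an ELA image for the JPEG at `path`."""
    original = Image.open(path).convert("RGB")

    # Re-save at a known JPEG quality, then reload the recompressed copy.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer).convert("RGB")

    # Pixel-wise difference between the original and the recompressed copy.
    diff = ImageChops.difference(original, recompressed)

    # Amplify the (usually faint) differences so they are visible to the eye.
    extrema = diff.getextrema()  # per-channel (min, max) values
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

if __name__ == "__main__":
    # "suspect_photo.jpg" is a hypothetical input file.
    error_level_analysis("suspect_photo.jpg").save("suspect_photo_ela.png")
```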

Ensure that you cross-reference information with reliable sources to confirm the authenticity of media content. Shallowfakes often rely on the rapid spread of false information, and fact-checking can help contain it. Detection tools, reverse image search, and software developed to identify manipulated content can all assist. While these tools are not completely accurate, they serve as an additional layer of defence against shallowfakes.
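Reverse image search services generally rely on perceptual hashing: visually similar images produce fingerprints that differ in only a few bits, even after resizing or recompression. The sketch below implements a basic “average hash” in Python with Pillow, using two hypothetical local files; real search services use far more robust indexes, so treat this purely as an illustration of the idea.

```python
# Minimal "average hash" (aHash) sketch with Pillow. Near-identical
# images yield hashes with a small Hamming distance, which is the core
# idea behind reverse image search. File names are hypothetical.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Compute a 64-bit average hash of the image at `path`."""
    # Shrink to hash_size x hash_size greyscale; this discards fine
    # detail while keeping the overall structure of the image.
    img = Image.open(path).convert("L").resize(
        (hash_size, hash_size), Image.Resampling.LANCZOS
    )
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)

    # Each bit records whether a pixel is brighter than the mean.
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two hashes."""
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    # Hypothetical files: the original photo and a suspected re-post.
    d = hamming_distance(average_hash("original.jpg"), average_hash("repost.jpg"))
    # A small distance (say, under 10 of 64 bits) suggests the same image.
    print(f"Hamming distance: {d}")
```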

Although detecting shallowfakes is a valuable endeavour, what if you could prevent them from the outset? Just as technology can be used to create ever more convincing shallowfakes and misinformation, the right technology, put to the right uses, can create immutable audits of a digital media file’s lifecycle, from capture and creation through to its deletion or destruction. Preventative measures that demonstrate a media file’s authenticity at source do not have to keep pace with detecting ever more convincing shallowfakes produced by increasingly capable generative AI.
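As an illustration of the underlying idea only (not a description of any specific product), a tamper-evident audit trail can be built by hash-chaining lifecycle events, so that altering or deleting any recorded event invalidates every later entry. The Python sketch below uses hypothetical event names and standard-library hashing.

```python
# Minimal sketch of a tamper-evident (hash-chained) audit trail for a
# media file's lifecycle. Event names and payloads are hypothetical;
# this illustrates the general technique, not any particular product.
import hashlib
import json
import time

def _entry_hash(prev_hash: str, event: dict) -> str:
    # Each entry's hash covers the previous hash, chaining the log together.
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class AuditTrail:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, action: str, media_sha256: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "GENESIS"
        event = {"action": action, "media": media_sha256, "time": time.time()}
        self.entries.append({"event": event, "hash": _entry_hash(prev, event)})

    def verify(self) -> bool:
        # Recompute every hash; any edited or deleted entry breaks the chain.
        prev = "GENESIS"
        for entry in self.entries:
            if entry["hash"] != _entry_hash(prev, entry["event"]):
                return False
            prev = entry["hash"]
        return True

if __name__ == "__main__":
    media_hash = hashlib.sha256(b"...captured video bytes...").hexdigest()
    trail = AuditTrail()
    trail.record("capture", media_hash)
    trail.record("upload", media_hash)
    print(trail.verify())  # True; tampering with any entry makes this False
```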

[Image: Shallowfakes - editing audio]

What Next for Shallowfakes?

The threat of shallowfakes poses a significant challenge to the authenticity and integrity of information. By understanding what shallowfakes are and how they are used, and by adopting strategies to detect them and prevent their spread, society can work together to protect the integrity of online content. We all bear responsibility when sharing and consuming digital media, so we must apply critical thinking to reduce the likelihood of spreading fake content. As technology continues to advance, staying aware and informed is vital to navigating a media landscape distorted by shallowfakes.

Providing Evidential Integrity

The World Economic Forum’s Global Risks Report 2024 identifies misinformation and disinformation as the most severe global risk in the short term (the next two years), describing it as ‘persistent false information (deliberate or otherwise) widely spread through media networks, shifting public opinion in a significant way towards distrust in facts and authority.’ This has lasting implications for society, fostering a culture of mistrust in anything presented as digital media.

Issured are aware of the threats posed by shallowfakes, deepfakes, and digital disinformation. These threats to society will only grow, and we are addressing them specifically for law enforcement and insurance with our Mea Digital Evidence Integrity products.
