Deepfakes: Strengthening business resilience against the unknown - Part 1

How to spot a deepfake scam

By Chris Burton, 9 June 2025

Artificial intelligence (AI) has developed at pace, driving innovation across many areas of everyday life. However, it has also brought new cyber threats, one of the most dangerous of which is deepfake technology.

Deepfakes – AI-manipulated video, audio and images – are increasingly being used by cybercriminals, leading to large-scale financial fraud, corporate espionage and brand damage. In fact, a deepfake attempt occurred every five minutes in 2024, according to Entrust’s 2025 Identity Fraud Report, making deepfakes one of the fastest-growing threats to firms.

Recent high-profile cases show this risk is only going to get worse. In 2024, cybercriminals used deepfake video and audio to impersonate a senior executive at engineering firm Arup, tricking an employee into transferring millions from the company. [1]

Other cases have been reported at FTSE-listed companies, where deepfaked CEOs appeared to authorise fraudulent financial transactions [2]. As deepfake technology continues to develop, companies must actively strengthen their cyber resilience against these risks.

How deepfake scams are built

Cybercriminals can use AI-based tools to create highly realistic deepfake content that is extremely difficult to distinguish from the genuine article. Deepfake scams rely on video, image and audio manipulation, mostly drawing on content that is already available online, particularly on social media. Without realising it, we have all helped to fuel the problem by posting material to the web that criminals can exploit for scams.

Below are some examples of such AI-based scams:

  • Deepfake audio and video fraud – Cybercriminals forge videos or voice recordings of executives instructing employees to transfer money or give out sensitive information.
  • Phishing and social engineering – AI-generated content can make phishing emails look more authentic and less likely to be detected.
  • Fake customer service and Interactive Voice Response (IVR) scams – AI-powered chatbots and IVR systems can fool people into believing they are speaking to genuine company representatives and trick them into revealing sensitive details.
  • Misinformation and corporate sabotage – Deepfake content can move share prices, damage the reputation of companies or their management, and sow discord within organisations.

As AI technology improves, these scams will become even more difficult to detect. However, there are some signs to look out for.

Deepfake warning signs

Organisations looking to strengthen their resilience against this developing type of cyber threat should prioritise informing and educating employees about deepfakes. This should include training on how to recognise deepfakes in order to prevent fraud, misinformation, and social engineering attacks. 

Below are some key signs to look out for that may indicate a deepfake:

  • Unnatural eye movement and blinking – AI still struggles to replicate human eye movement, so a subject may stare oddly, blink at an irregular rate, or shift their gaze in ways that don’t look natural. This is because deepfake tools generally prioritise the face over the eyes, producing a realistic figure with a subtly strange gaze.
  • Facial and lip-syncing irregularities – Lip syncing is one of the biggest challenges for deepfake technology. Despite considerable progress in mimicking facial expressions, there is often a slight delay or lack of synchronisation between the lips and the spoken words, along with delayed reactions or subtle distortions around the mouth. Be especially cautious of video instructions or messages from the CEO that seem slightly off in their delivery.
  • Unnatural body proportions and movements – The vast majority of deepfake tools focus on the face rather than the entire body, which can result in odd proportions or clumsy movements, such as stiff posture, misaligned shoulders, or gestures that don’t look natural. Look out for these variations in body language, as they may be the result of digital alteration.
  • Odd or jarring audio – Deepfake audio can contain subtle irregularities in speech, such as an unusual pace of delivery, a metallic tone, shifts in background noise, or sudden pauses that disrupt the conversation. Be wary of audio messages with a limited emotional range or background noise that doesn’t fit the setting.
  • Inconsistent documents – Scammers use AI software to create convincing fake documents, such as IDs and invoices, to defraud people and organisations. To spot these, check for inconsistencies in fonts, formatting, logos or signatures, and look out for unusual language, errors or discrepancies in official details. The same applies to emails.

To counter deepfake threats, employees should be encouraged to trust their instincts and scrutinise digital communications carefully. If anything seems amiss, they should immediately flag the suspicious content for further verification, cross-check it against known authentic sources, or escalate their concerns to the cybersecurity team. Employees should also be encouraged to simply say no and not cave under pressure.

Another sensible step when dealing with a potential deepfake of a colleague or senior executive is to send a confirmation or verification message to the person’s known mobile number (as held on the work database) to confirm it is actually them. Where a target is being told to send money, companies should have additional controls in place, such as two-factor authentication, to stop the fraud.

By fostering a culture of vigilance and awareness, businesses can strengthen their resilience against deepfake-based fraud and deception.

Look out for Part 2 of this Insight on deepfakes, exploring further processes that businesses can implement to counter this growing threat.

About the author

Chris Burton is Head of Professional Services at Pentest People.

Learn more about tackling new financial crime techniques, including deepfake fraud, with our ICA Specialist Certificate in Evolving Risks in Financial Crime Technology.