Deepfakes: Strengthening business resilience against the unknown - Part 2 - Improving cyber policies and procedures


Improving cyber policies and procedures

By Chris Burton, 16 June 2025 

Given current trends in the development of AI technology, deepfakes are likely to remain a major threat to corporate security and financial stability. According to recent research by finance software provider Medius, more than 50% of businesses in the US and the UK have been targeted by deepfake fraud attempts, and 43% have fallen victim to such an attack. [1]

Several factors have driven the rise in deepfake fraud, including the growing volume of voice and video content shared on social media and even podcasts, which gives fraudsters raw material for cloning a target's face or voice. This has been coupled with low barriers to entry: AI-powered deepfake tools are readily available online, enabling even amateur fraudsters to create highly realistic fake videos and audio recordings.

With the risks increasing, it is crucial that businesses focus on preventative measures to avoid a deepfake attack and its damaging financial and reputational consequences.

Although continuous education of employees is a good place to start in building a company’s resilience to deepfakes (as discussed in Part 1), firms should also have a multi-layered approach to defend against deepfake-related cyber threats. 

Below are several measures firms can take to strengthen their cyber policies and procedures against the growing threat of deepfakes.

Robust verification protocols

To prevent fraud, organisations should establish strict verification processes. All sensitive transactions should require Multi-Factor Authentication (MFA), and a dual-authorisation policy should be in place under which two employees must independently verify financial transactions.

In addition, organisations should mandate that high-value transactions are only conducted in person or via secure video verification, with video calls made over trusted channels to confirm identity.
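As an illustration only, below is a minimal sketch of how a dual-authorisation rule might be enforced in a payments workflow. The class, field and employee names are hypothetical, and a real system would add audit logging, role checks and MFA verification; the core rule shown is simply that two distinct employees must approve before a payment is released.

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    """A pending high-value payment awaiting dual authorisation."""
    payment_id: str
    amount: float
    beneficiary: str
    approvals: set[str] = field(default_factory=set)

REQUIRED_APPROVALS = 2  # policy: two distinct employees must approve

def approve(request: PaymentRequest, employee_id: str) -> bool:
    """Record an approval; return True once the payment may be released.

    Approvals are stored in a set, so the same employee approving
    twice still counts as a single approval.
    """
    request.approvals.add(employee_id)
    return len(request.approvals) >= REQUIRED_APPROVALS

# One approver is not enough...
req = PaymentRequest("PAY-001", 250_000.00, "Acme Supplies Ltd")
assert approve(req, "alice") is False
# ...even if the same person approves again.
assert approve(req, "alice") is False
# A second, distinct approver releases the payment.
assert approve(req, "bob") is True
```

The design choice worth noting is that the control lives in the workflow itself rather than in employee judgement: a convincing deepfake of one executive can deceive one person, but it cannot satisfy a rule that requires two independent identities.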

Using AI-based deepfake detection tools

To counter the increasing volume of manipulated content, businesses should invest in AI-based detection tools. These tools analyse patterns and inconsistencies in video and audio recordings to determine their authenticity.
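As a rough sketch of how such a tool plugs into a workflow, the example below samples frames from a video and reports what fraction a classifier flags as suspicious. The `score_frame` function here is a dummy placeholder, not a real detector: commercial products supply their own trained models and APIs, and the threshold value is purely illustrative.

```python
import cv2  # pip install opencv-python

FRAME_STRIDE = 30      # sample roughly one frame per second at 30 fps
ALERT_THRESHOLD = 0.7  # hypothetical score above which a frame looks manipulated

def score_frame(frame) -> float:
    """Dummy stand-in for a real deepfake classifier.

    A production tool would run a trained model here, looking for
    artefacts such as inconsistent lighting, unnatural blinking or
    face-boundary blending errors, and return a manipulation score.
    """
    return 0.0  # replace with model inference

def scan_video(path: str) -> float:
    """Return the fraction of sampled frames flagged as suspicious."""
    capture = cv2.VideoCapture(path)
    sampled = flagged = 0
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % FRAME_STRIDE == 0:
            sampled += 1
            if score_frame(frame) > ALERT_THRESHOLD:
                flagged += 1
        frame_index += 1
    capture.release()
    return flagged / sampled if sampled else 0.0
```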

Enhanced cybersecurity measures

Organisations must enhance their cybersecurity measures to protect their digital assets. Critical systems should be covered by strong endpoint security controls to prevent unauthorised access to sensitive data, and only authorised personnel should be able to interact with them.

AI-driven threat detection can also be used to analyse network traffic in real time and identify unusual or malicious activity. It is equally important to conduct penetration testing at regular intervals to uncover vulnerabilities before cybercriminals can exploit them.
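As a simplified illustration of the idea behind AI-driven traffic analysis, the sketch below trains an unsupervised anomaly detector on baseline connection statistics and flags connections that deviate sharply from them. The features and numbers are invented for the example; production systems use far richer telemetry and purpose-built platforms.

```python
import numpy as np
from sklearn.ensemble import IsolationForest  # pip install scikit-learn

# Each row is one connection: [bytes_sent, bytes_received, duration_seconds]
# (illustrative values, not real traffic data).
baseline_traffic = np.array([
    [500, 1200, 2.0],
    [450, 1100, 1.8],
    [520, 1300, 2.2],
    [480, 1150, 1.9],
    [510, 1250, 2.1],
    [470, 1180, 2.0],
])

# Train an unsupervised model on traffic assumed to be normal.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline_traffic)

# A connection sending vastly more data than the baseline.
new_connection = np.array([[500_000, 1200, 120.0]])

# predict() returns -1 for anomalies and 1 for inliers.
if model.predict(new_connection)[0] == -1:
    print("ALERT: anomalous connection, escalate to the security team")
```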

Sensitive communications should also be encrypted, providing an additional layer of protection: even if messages are intercepted, they remain unreadable to unauthorised individuals.
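To make the principle concrete, here is a minimal sketch of symmetric encryption using the widely used Python cryptography library. Key management (secrets managers, rotation, access control) is deliberately omitted; the message text is hypothetical.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key is generated once and held in a secrets manager,
# never hard-coded or committed to source control.
key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"Approve payment PAY-001 for 250,000 GBP")

# An interceptor without the key sees only opaque ciphertext.
print(token)

# Only a holder of the key can recover the original message.
print(cipher.decrypt(token).decode())
```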

Legal and compliance safeguards

As legal frameworks concerning deepfake risks continue to evolve, organisations need to adapt their policies accordingly. Companies should also include deepfake-related fraud clauses in their contracts.

Legal advisors and compliance teams both have a role to play in helping firms stay up to date with AI regulations and in enforcing clear ethical guidelines on AI usage within the company.

Robust incident response plan

Deepfake attacks can be very costly, but a well-thought-out response strategy can help reduce the financial impact.

In anticipation of any such attack, organisations should proactively identify a particular response team to manage the crisis effectively, develop pre-drafted statements that can be used to counter misinformation quickly, and establish clear communication channels to ensure that responses are both swift and coordinated.

Review cyber insurance

Cyber insurance policies should be reviewed to ensure they cover deepfake fraud. Coverage should extend to social engineering attacks, reputational damage, legal costs, and financial compensation for business disruption caused by deepfake incidents.

The future

As AI develops, deepfake threats are expected to rise, so businesses must remain vigilant and update their cybersecurity posture accordingly. Indeed, deepfake scams are anticipated to become so sophisticated that they will be almost impossible for the untrained eye to detect.

Future developments may bring more accurate detection tools, including blockchain-based provenance techniques for verifying the authenticity of media. Governments and regulatory bodies may also introduce more stringent AI rules to prevent the misuse of deepfake technology. In the meantime, closer coordination between tech companies and policymakers will help prevent AI-based fraud at scale.
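The building block behind such provenance schemes is the cryptographic fingerprint: a hash of the original media, recorded at publication time, against which any later copy can be checked. The sketch below illustrates the idea with hypothetical file names; a blockchain's role in these schemes would be to store the published digest in a tamper-evident ledger.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return a SHA-256 digest of a media file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At publication time the digest is written to a tamper-evident ledger.
published = fingerprint("ceo_statement.mp4")

# Later, anyone can check whether a circulating copy is unaltered:
# any edit, however small, produces a completely different digest.
suspect = fingerprint("ceo_statement_copy.mp4")
print("authentic" if suspect == published else "possibly manipulated")
```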

Deepfake technology will only continue to advance, and it therefore poses a growing cybersecurity risk to businesses around the world, from financial scams to reputational damage. These risks can, however, be minimised by focusing on three core areas: thorough employee training, sophisticated detection technologies, and strengthened policies and procedures.

About the author

Chris Burton is Head of Professional Services at Pentest People.

Learn more about tackling new financial crime techniques, including deepfake fraud, with our ICA Specialist Certificate in Evolving Risks in Financial Crime Technology.