This article is a free excerpt from inCOMPLIANCE, ICA's bi-monthly, member exclusive magazine. To gain access to more articles like this, sign in to the Learning Hub or become a member of ICA.
Adam Khan delves into Financial Action Task Force findings on growing AI and deepfake threats, and outlines critical countermeasures to put in place.
For decades, compliance professionals have relied on a simple assumption that identity can be verified, documents can be authenticated, and human interaction can be trusted as evidence of legitimacy. With the rise of artificial intelligence, those assumptions are being tested, and with that the compliance and anti money laundering (AML)/counter financing of terrorism (CFT) community will need to adapt.
Horizon scanning
The Financial Action Task Force (FATF) has answered the call with their 2025 Horizon Scan on Artificial Intelligence and Deepfakes, which considers the AI impacts on money laundering, terrorist and proliferation financing. It makes clear how this new technology is being adopted to enhance criminal activities that impact negatively upon the financial industry.
The horizon scan frames AI as a dual-use technology: on the one hand, it enables new forms of fraud and customer due diligence (CDD) circumvention, and on the other it can also strengthen AML/CFT and counter proliferation financing controls. The paper is a product of FATF’s Working Group and Plenary roundtable meetings to explore these issues, highlighting many risks with regard to AI and Gen AI technology.
Deepfakes are identified as a major concern. With modern society’s reliance on biometric verification, there are many risks, especially given the technological lag in AML systems, in addition to cross-border complexities.
The interconnected nature of the global financial system allows criminals the opportunity to exploit weaknesses in AML regimes. This is because we live in a world that has accepted digital and remote identities as a norm. The question however is not to deny technological realities, but to embrace them as much as the criminals have and in a way that enables and empowers compliance, AML and law enforcement.
Compliance professionals have relied on a simple assumption that identity can be verified, documents can be authenticated, and human interaction can be trusted.
Compliance professionals have relied on a simple assumption that identity can be verified, documents can be authenticated, and human interaction can be trusted.
Arms race
An AI arms race between criminals and professional AML systems has emerged. Quoting Europol, the FATF paper classifies bad actors into two types. Firstly, low skilled offenders who benefit from “off the shelf AI tools”, enabling them to carry out sophisticated attacks, for example with deepfake tech. Secondly, highly skilled cyber criminals who use AI to boost and automate their attacks.
This means that the AI infrastructure and nature of AI technology has lowered the entry barriers for malicious actors and criminals.
This new AI enhanced cybercrime has many dimensions. AML systems are particularly vulnerable to adversarial attacks. Language-based attacks and adversarial prompts can only be launched against large language models (LLMs) and Gen AI. Many adversarial attacks require little or no technical background at all. With LLMs for example, the skill on the part of the attacker is in the level of language manipulation and prompt injection. This risk will become even more prevalent with the increasing adoption and deployment of AI agents.
Deepfake fraud is a major risk in the way that it can subvert reality, by taking social engineering and misinformation to new levels. FATF highlights senior level payment fraud as a key risk area, along with romance scams, market manipulation, and the creation of synthetic identities to subvert CDD.
Furthermore, this kind of fraud is hard to track due to its crossborder nature, using foreign tools, infrastructure and virtual assets. Gen AI can be weaponised to produce realistic synthetic documents to facilitate fraud and money laundering. This can particularly help at the layering stage of money laundering, with the generation of fake invoices and other documents to disguise illicit funds.
Bypassing systems
AI agents can add an arsenal of options for high tech criminals to bypass detection systems and increase the scale of their cyberattacks. For example, agents can automate online purchases, deposits, the recruitment of mules, as well as automate micro transactions into mule accounts. Gen AI and LLMs with large datasets can also be used to map optimal laundering routes, analyse the regulatory environment, enforcement behaviour, and identify weak jurisdictions.
As a response to this situation, FATF highlights the importance of maintaining the integrity of CDD and onboarding processes with more advanced ID verification tools. These include enhanced biometric authentication and liveness checks, which can ascertain whether the user is real and physically present, and not a product of AI-powered manipulation. Other technology with stronger multi-factor authentication, AI-driven detection systems and enhanced transaction monitoring are also recommended.
The paper also covers the growing role of advanced analytical tools in financial crime investigations. It discusses the potential of combining traditional investigative techniques, such as forensic accounting and intelligence gathering, with new technological capabilities, including AI-powered forensic tools, deepfake detection software, and blockchain analytics to trace virtual asset flows.
FATF recommends that jurisdictions develop specialist institutional capabilities with dedicated cybercrime units focusing on technology enabled financial crime. Such units and teams will need to possess knowledge on AI and have a mindset geared towards the reality of evolving technologies.
The AI infrastructure and nature of AI technology has lowered the entry barriers for malicious actors and criminals.
The AI infrastructure and nature of AI technology has lowered the entry barriers for malicious actors and criminals.
Keeping pace
The report’s authors emphasise that the response to AI-enabled financial crime must remain adaptive. Detection techniques that work today may quickly become outdated as Gen AI tools evolve. The report stresses the importance of continuous monitoring, technological innovation, and regular reassessment of risk, as well as awareness campaigns, ongoing training, and investment in investigative expertise. This is to ensure that institutions and authorities can keep pace with the changing threat landscape.
However, the adoption of new technology for defence alone is not sufficient. Professionals themselves will need to be able to identify inconsistencies which automated systems overlook. Human expertise in the compliance and investigative field will always be valuable, and additional training is needed so that relevant staff can recognise the indicators of synthetic media.
The other important human element is the power of collaboration. FATF emphasises the critical role of public-private partnerships, as financial institutions, regulators, law enforcement agencies and technology providers all hold pieces of the intelligence needed to understand evolving fraud techniques. But collaboration should also extend to academia and other industry partners.
Countermeasures
Many of the suggestions by FATF depend upon the responses and efforts by regulators and enforcement, as well as organisations to operationalise measures, policies and plans to meet the new risks. This would obviously require buy-in from the C-suite.
In the meantime, criminals do not have such limitations slowing implementation of new technologies into their scams and attacks. So, what can compliance professionals do right away to counter this?
- Review and strengthen digital onboarding control
Professionals will need to reassess the digital identity verification process to assure that defence strategies are robust when dealing with synthetic identities and deepfake manipulation. There may need to be stronger multi-factor authentication, as well as layered verification. They will also need to look at methods and tools which can detect live usage of AI-generated avatars, puppets, and behavioural biometrics. - Enhance behavioural and transaction monitoring
Monitoring frameworks in detecting fraud schemes will also need to be looked at, and how they combine with synthetic identity usage. Indicators to consider include rapid transactions through new accounts, immediate withdrawals after deposits, multiple accounts linked to the same device, and activity which is inconsistent with a customer’s profile. The compliance teams will need to work closely with fraud and cybersecurity teams. - Train staff to recognise AI-enabled deception
Provide targeted training for compliance analysts, fraud teams and customer-facing staff on deepfake and AI-enabled fraud indicators. Compliance professionals should never assume that people know how deceptive AI technologies are being used for crime. - Improve cross-functional collaboration
Ensure stronger information sharing between compliance, fraud, cybersecurity and investigation teams. This is needed now more than ever before. Cross-functional meetings should be encouraged and organised, as well as groups and communications channels established to make the transmission of information efficient. Public-private partnerships will need to be set up to help share intelligence which extends outside a financial institution. - Strengthen oversight of technology vendors
It is important to review the capabilities of vendors providing identity verification, fraud detection, or document authentication tools. Experts should confirm whether the products have deepfake detection capabilities and controls for synthetic media manipulation. - Updated related risk assessments
Compliance professionals will need to ensure internal risk assessments consider AI-related risks in all their various forms. This includes looking at the specific risks of machine learning systems, LLMs, adversarial attacks, prompt injection, data poisoning, AI misalignment issues, as well as deepfake manipulation. Risk assessments should also address how deepfake and synthetic media attacks can be used for crimes and campaigns which affect the financial industry other than through fraud, such as via misinformation and market manipulation. - Build an inventory of tools
Conducting research on types of AI-powered defence and security will be very important. It is crucial that compliance professionals actually know what the use cases are. Below are some examples:- ID verification and liveness detection tools: can involve device fingerprinting, document authenticity checks, and presentation attack detection to expose attempts to trick biometric systems with synthetic content.
- Deepfake and synthetic media detection tools: can specifically analyse video, audio and images, including reading metadata and watermarks, to flag AI-generated media.
- Behavioural biometric and device intelligence tools: deal with synthetic identities that manage to get through onboarding by monitoring the customer’s activity.
- Fraud analytics and anomaly detection tools: offer real-time monitoring and graph analytics for mule networks.
- Blockchain analytics and virtual asset tracing tools: enable investigation into crypto transfers, virtual asset service providers (VASPs), and digital wallets.
- AI-powered forensic investigation tools: analyse large volumes of cases, identifying patterns and making connections among fraud and money laundering networks to aid law enforcement.
- Internal AI controls for LLMs and agents: with specific prompt injection filtering tools, guardrails and the use of AI ‘red teaming’ platforms.
- Content provenance and authenticity tools: identify the origins of synthetic content.
- Defence tools for adversarial attacks: protect AI and ML systems against model drift and apply adversarial testing for related risks.
While it is not necessary for compliance professionals and risk managers to become experts in AI, it is clearly worthwhile to develop greater awareness and knowledge of evolving technology. In particular, understanding how AI is being deployed, and what specific risks can exist within security threats and financial crime. In addition, professionals must keep themselves abreast of regulatory responses, and any new technological capabilities for malicious actors.
About the author
Adam Khan is an AI ethics, governance and risk expert, Lead Writer and Product Designer at Xperientia, and subject matter expert for the ICA Specialist Certificate in AI for Compliance Professionals.