Neil Jennings offers guidance on how to evaluate whether AI is the appropriate tool for the job.
There is a problem-solving principle known as Occam’s Razor, which (roughly) states that when several explanations could fit all the available facts, the one that requires the fewest assumptions is usually to be preferred.
This article aims to provide a solid entry point for structuring AI risk decisions, and examines the following key themes.
- What does ‘useful’ mean in financial crime compliance?
- How can we assess our need for AI and other tools?
- How mature is our vendor risk management process?
- What key principles should we use to govern AI use?
In my previous article, I shared a ‘baseline AI governance starter kit’, a practical checklist to assess gaps across AI risk domains. In this piece, I take a step back: rather than offering a full how-to guide or a detailed examination of AI use cases, I focus on the decision point before implementation, helping compliance leaders evaluate whether an AI tool is truly needed based on corporate, strategic, and regulatory needs, and, if so, how to ground its adoption in clear principles.
The ‘why’ behind the AI
Wherever humans have assigned value or traded goods, there have been fraudsters, cheats, and thieves. And with the Internet came hackers and data breaches.
As such, we don’t tend to view the digital world as a ‘high trust’ society. Fraud, money laundering, and tax evasion are more sophisticated and specialised than they used to be, and so are the tools we use to detect and prevent such activities.
Simplicity is an often-overlooked principle when it comes to the usefulness of compliance controls. Occam’s Razor reminds us to cut through the noise. Instead of chasing every new tool, we should ask if we really need it, if it will be useful, and if we actually need AI to catch the criminals. Perhaps more importantly, the concept of simplicity also helps us focus on delivery more than promises, helping us avoid marketing hype and inflated claims.
AI is not a silver bullet
We are all aware that AI is not perfect. There is a long list of examples, some of which are trivial (like the poorly AI-drawn map of Europe you might have seen on social media) and some of which are more serious (like the lawyers who failed to spot fake, AI-generated citations).
But without doubt, AI has already shown it is highly effective when it comes to combating financial crime, and its technical capabilities are only growing. Let’s take Suspicious Activity Reports (SARs) as an example. In 2019, the UK government’s Economic Crime Plan identified a need to move away from high-volume, manual SAR processing due to strain on the system – at that time, around 460,000 SARs were being submitted each year. By 2024/25, the number had risen to over 860,000, according to the National Crime Agency’s Annual Report.
Clearly, AI’s ability to identify patterns and reduce noise is exactly the kind of capability that could help relieve this burden. For compliance teams, that might mean more consistent and effective SARs, or faster identification of suspicious activity in the first place. For financial intelligence units (FIUs), it could reduce the lag between receiving and actioning SARs, and even enable more timely feedback to the sender. In short, AI can strengthen both the quality of SARs and the outcomes, supporting regulators in their mission while helping compliance teams detect and prevent financial crime, and defend their organisations against AML liability.
One point to remember is that AI and automation are not the same thing. They serve different purposes. AI is trained on vast amounts of data, and essentially calculates likelihood based on patterns, such as unusual spending behaviours. Automation, by contrast, relies on pre-defined if/then rules and specific thresholds. Think of the difference between an AI tool that flags suspicious activity or suggests a SAR be filed, versus an automated alert triggered when a transaction comes from a particular jurisdiction or exceeds a certain value. Both approaches are useful, but not every financial crime tool is ‘AI’. That’s not a weakness, simply a distinction.
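To make the distinction concrete, here is a minimal, hypothetical sketch in Python: a pre-defined if/then rule with fixed thresholds sits alongside a simple anomaly score that stands in for an AI model’s pattern-based likelihood estimate. The jurisdictions, threshold, and scoring method are illustrative assumptions only, not a reference implementation of any particular tool.

```python
# A minimal sketch (not a production system) contrasting the two approaches.
# The jurisdictions, threshold, and scoring method are illustrative assumptions.

HIGH_RISK_JURISDICTIONS = {"XX", "YY"}  # hypothetical country codes
VALUE_THRESHOLD = 10_000                # hypothetical reporting threshold

def automated_rule_alert(transaction: dict) -> bool:
    """Automation: a pre-defined if/then rule with fixed conditions."""
    return (
        transaction["country"] in HIGH_RISK_JURISDICTIONS
        or transaction["amount"] > VALUE_THRESHOLD
    )

def anomaly_score(amount: float, history: list[float]) -> float:
    """'AI-style' scoring: how unusual is this amount relative to the
    customer's own spending history? (A z-score stands in here for a
    trained model's likelihood estimate.)"""
    if len(history) < 2:
        return 0.0
    mean = sum(history) / len(history)
    variance = sum((x - mean) ** 2 for x in history) / (len(history) - 1)
    std = variance ** 0.5
    return 0.0 if std == 0 else abs(amount - mean) / std

# Usage: the rule fires on fixed conditions; the score reflects observed patterns.
txn = {"country": "XX", "amount": 2_500}
history = [120.0, 95.0, 140.0, 110.0, 130.0]
print(automated_rule_alert(txn))                 # True: jurisdiction rule triggered
print(round(anomaly_score(2_500, history), 1))   # large score: unusual for this customer
```

The rule fires whenever its fixed conditions are met; the score only becomes meaningful in the context of what is ‘normal’ for that customer, which is precisely why the two approaches complement, rather than replace, each other.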
Part 1: Know your needs
Not all compliance teams have identical requirements. Recognising this is an important part of planning an effective assessment method that drives fully informed AI decisions.
There is an AI tool for most needs, whether it’s AML, fraud detection, or anything else.
But the fact that the tools exist is irrelevant. The true starting point is to undertake a know your needs (KYN) assessment. This allows you to gain clarity regarding your specific industry, regulatory landscape, and current and emerging risks. KYN doesn’t seek to answer whether you can use an AI tool; rather, it answers whether you should, by looking at:
- your specific business
- your goals
- your regulatory requirements
- the desired end-results for the AI tool
- tools that are fit-for-purpose
- vendors that you can work with.
The KYN checklist
The counter-intuitive starting point for using any AI tool is to forget about the tool. Instead, undertake a KYN assessment. There is no one-size-fits-all approach to this, as compliance teams and regulatory environments are rarely identical. When you map your needs, think about four areas: your landscape, your processes, your tools, and your gaps. This will prompt discussion and shed light on the following key issues.
Landscape
- Industry
- Relevant jurisdictions
- Relevant regulatory requirements
- Relevant regulator(s)
- Specific financial crime functions (e.g. fraud detection, know your customer, SARs, etc.)
Processes
- Current processes
- Current team(s) involved
- Metrics used to measure effectiveness
Tools
- Are current tools fit for purpose?
- Are current tools simple to use and efficient?
- Do relevant teams have capacity to learn and integrate new tools?
Gaps
- Current pressure points (processes, effectiveness, regulatory intervention, etc.)
- Key areas of improvement prioritised
- Cost-benefit analysis
Of course, the above can be adapted to your requirements; for teams that prefer to capture the assessment in a structured, reviewable form, a simple sketch follows.
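As a purely illustrative example, the sketch below shows one way to hold a KYN assessment as a structured record (for instance, under version control so changes are visible over time). The field names and example values are assumptions made for illustration and should be replaced with your own landscape, processes, tools, and gaps.

```python
# A minimal, illustrative KYN template -- field names and values are assumptions,
# intended to be adapted, not a prescribed schema.
kyn_assessment = {
    "landscape": {
        "industry": "retail banking",                     # example value
        "jurisdictions": ["UK", "EU"],
        "regulatory_requirements": ["POCA", "GDPR", "EU AI Act"],
        "regulators": ["FCA"],
        "financial_crime_functions": ["fraud detection", "KYC", "SARs"],
    },
    "processes": {
        "current_processes": ["manual transaction review", "SAR drafting"],
        "teams_involved": ["financial crime ops", "MLRO"],
        "effectiveness_metrics": ["alerts per analyst", "SAR turnaround time"],
    },
    "tools": {
        "fit_for_purpose": True,
        "simple_and_efficient": False,
        "capacity_to_integrate_new_tools": True,
    },
    "gaps": {
        "pressure_points": ["alert backlog", "inconsistent SAR quality"],
        "prioritised_improvements": ["reduce false positives"],
        "cost_benefit_summary": "pending",
    },
}

# Usage: surface the areas and items that should drive the AI decision.
for area, fields in kyn_assessment.items():
    print(area, "->", list(fields))
```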
Part 2: The principles-based risk assessment
Undertaking a KYN assessment comes first, but no AI tool should be implemented without a clear grasp of the legal foundations. Some laws are the bedrock for compliance teams, such as the Proceeds of Crime Act, which underpins AML processes and the submission of SARs. Others are directly relevant to AI or adjacent areas, such as the General Data Protection Regulation (GDPR), the EU AI Act, or the Digital Operational Resilience Act (DORA). And still others may only become relevant in the future.
Within the financial crime context, both GDPR and the EU AI Act contain carve-outs that give compliance teams some flexibility. For instance, AI tools used to detect financial fraud currently fall outside the EU AI Act’s definition of a ‘High-Risk AI system’ and are therefore not subject to its full set of high-risk requirements. By contrast, organisations in scope of DORA may face incident response obligations, even if their fraud-detection tools are exempt from the AI Act.
I previously wrote about the chaotic nature of the regulatory landscape. The same applies to AI governance and risk management frameworks: it is easy to get lost in the noise, and just as easy to fall into the ‘diminishing returns’ trap when deciding how far to go in governing AI internally.
With that in mind, and to avoid diminishing returns, the below list can be used as a robust starting point in the pursuit of responsible, transparent, and appropriate AI use. These are essentially the common threads among frameworks from the OECD, ISO/IEC, NIST, and others.
Principle #1: Purpose
- Is each AI tool fit for purpose?
- Does the tool align with your prioritised needs in relation to landscape, processes, current tools, and gaps?
Principle #2: Transparency
- Is your use of AI clear to all relevant stakeholders?
- Could any AI output potentially be confused for human output?
- Would customers and regulators understand where and how AI is used?
Principle #3: Explainability
- What types of decisions are being made?
- Can you demonstrate why a specific output was provided?
- Can you explain the logic, especially compared to other outputs?
Principle #4: Risk of harm
- What is the tool meant to do, and how?
- Who might be impacted negatively, and how?
- What is the likelihood of that impact?
Principle #5: Data integrity
- Do you know the data source?
- Is the data quality sufficient?
- Do you monitor for data security issues like drift and poisoning, and for privacy and confidentiality issues?
Principle #6: Human involvement
- Are there relevant humans across the process?
- Do they understand their remit?
- Do you have effective escalation processes in place?
Principle #7: Vendor risk management
- What is the general vendor vetting process like?
- Do legal and compliance currently get involved?
- Where is the supply chain exposure?
- Are you appropriately protected contractually? (Key issues like red alerts, breach protocols, auditability, etc.)
Principle #8: Monitoring & adaptability
- Is the tool working, and how do you know?
- Is it still working after 6–12 months?
- What metrics are you using, and are they sufficient? (A simple sketch follows this list.)
- Do you have a plan in place to terminate or use a different provider?
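On the metrics question in Principle #8, the sketch below illustrates the kind of simple, reviewable measures a team might track over a 6–12 month period. The metric names and figures are made-up assumptions for illustration, not recommended benchmarks.

```python
# Illustrative monitoring metrics for an AI screening tool -- the counts below
# are made-up example figures, not benchmarks.
alerts_raised = 1_200          # alerts generated by the tool in the review period
alerts_escalated = 180         # alerts an analyst judged worth escalating
sars_filed = 60                # escalations that resulted in a SAR
avg_days_to_decision = 4.5     # average time from alert to analyst decision

escalation_rate = alerts_escalated / alerts_raised    # proxy for signal vs. noise
sar_conversion_rate = sars_filed / alerts_escalated   # proxy for escalation quality

print(f"Escalation rate:      {escalation_rate:.1%}")
print(f"SAR conversion rate:  {sar_conversion_rate:.1%}")
print(f"Avg days to decision: {avg_days_to_decision}")

# If these drift materially from an agreed baseline, that is the trigger to
# revisit the tool -- or to invoke the exit plan in the last bullet above.
```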
AI shouldn’t add complexity to compliance
Compliance leaders should be given the appropriate tools and information to consider their needs and make risk-aware decisions about using AI and other tools. There is a huge amount of good that can and will come from AI tools – the scale, volume, and technical capabilities are already impressive. But Occam’s Razor tells us that the mere existence of AI tools doesn’t mean they should be implemented recklessly in the hope of seeing improvements.
As with all compliance initiatives, it is important to understand the complications, acknowledge organisational risk tolerance, and be prepared to adapt or update when needed. That way, we do our best to remove the guesswork and save time and resources in the long run.
The principle of simplicity prompts us to remember that the simplest path isn’t necessarily ‘AI everywhere’, but fit-for-purpose tools guided by clear principles. Compliance leaders can and should be at the forefront of discussions about AI tools.
We play a critical role in both the KYN and the principles-based risk assessment. Neither can be done in isolation; both require input from other teams – leadership, technical, and legal. Done right, using AI to combat financial crime will make things simpler, cleaner, and safer, as long as we start with needs and govern with principles.
This article is intended as informational and not as legal advice.