Are we guilty of double standards on AI?


In association with Genpact

By Omer Nisanci

Is it possible that we are holding AI to an impossibly high standard?

There’s a lot of noise in the industry on the use of AI in financial crime compliance, most notably concerns about explainability, bias and governance. These are legitimate considerations. But we need to be honest with ourselves and ask: are we holding AI to a far higher standard than we hold human analysts? And if we are, how do we reconfigure our expectations? 

Across the industry, people are trained in a matter of weeks, or even days, to make complex judgements on customer risk, transaction patterns or potential suspicious activity. These decisions are often based on incomplete data, subjective interpretation and varying levels of financial crime expertise. Yet we’ve learned to live with the inconsistency, unconscious bias and lack of transparency simply because we’re used to it.

When AI enters the picture, however, we demand perfect logic, fully traceable outcomes and guaranteed fairness. But if we accept that a human team working at speed and scale can only do so much, and may even make mistakes, why isn’t a machine – which is operating with far greater consistency – given the same benefit of the doubt?

Of course, we shouldn’t adopt AI blindly. The key is balance. AI should be viewed as a copilot, not a replacement. It isn’t here to remove judgement, but to enhance it. Like humans, AI systems are shaped by the data and processes that support them; in other words, they reflect the quality and clarity of their inputs. That’s why we must interrogate our data, understand its strengths and limitations, and ensure we’re not just automating poor practices. At the same time, we should recognise that AI can bring structure, repeatability and a level of transparency which is often absent from human-led decision-making.

A case in point

Consider the gains achieved by a digital clearing and custody firm that partnered with Genpact. Its financial crime system was drowning in inefficiencies, with 80% of alerts proving false positives, eating up countless analyst hours. We implemented our cloud-based financial crime suite, cutting false positives by 45% and speeding up investigations by 60%. This wasn’t just about improving key performance indicators. Employee morale climbed as analysts shifted focus to tasks requiring critical thinking and business impact.

Toward AI-led, human-powered operations

If we truly want to scale up our financial crime defences, we need to scrutinise our current operations as rigorously as we critique new technologies. We must define ownership, educate our teams on AI loopholes and evaluate trade-offs. 

AI isn’t flawless – but it is highly effective, and it is primed to drive meaningful transformation on a scale seldom seen before. We are moving towards better value than we have previously settled for, and to realise that value we need to think more clearly about how humans and machines can work together.

Omer Nisanci is Partner at Genpact’s FCRM Practice. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through its deep business knowledge, operational excellence, and cutting-edge solutions, Genpact helps companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, its teams implement data, technology, and AI to create tomorrow, today. Get to know Genpact at genpact.com and on LinkedIn, X, YouTube, and Facebook.