AI and crypto compliance – keeping humans in the loop


This article is a free excerpt from inCOMPLIANCE, ICA's bi-monthly member-exclusive publication.

Ruan Botha and Jaco Janse van Rensburg discuss the importance of bridging technology and trust in the digital age

Blockchain technology continues to evolve rapidly and, with more than 17,000 cryptocurrencies in existence, compliance has emerged as a pivotal concern for stakeholders. The decentralised nature of crypto poses unique challenges, particularly in the realms of know your customer (KYC), anti-money laundering (AML) and countering the financing of terrorism (CFT) regulations. As the industry matures, regulatory scrutiny intensifies in parallel, and organisations are beginning to understand the importance of safeguarding their ecosystems, customers and reputation.

A game changer

Electronic know your customer (eKYC) is a trust-building mechanism that ensures users are legitimate, i.e. real human beings whose identities can be matched against government records. This is usually accomplished by remote verification, where electronic documents are uploaded and ‘liveness’ is confirmed in seconds. As bad actors have historically attempted to circumvent eKYC platforms by uploading falsified documents and using deepfakes, AI is being harnessed to verify documents and detect tampering, perform biometric checks, and confirm a person’s identity through liveness detection. Furthermore, AI is being taught to clear false-positive hits on adverse media and sanctions screening for prospective users.
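
To illustrate, below is a minimal sketch of how an eKYC decision layer might combine such model outputs. Every name, field and threshold here is a hypothetical illustration rather than any vendor’s actual API: the point is simply that clear-cut cases are automated while ambiguous ones are escalated to a human analyst.

```python
# A hypothetical eKYC decision layer combining AI model outputs.
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    tamper_score: float      # 0.0 (clean) to 1.0 (likely tampered), from document forensics
    liveness_score: float    # 0.0 (likely spoof/deepfake) to 1.0 (live person)
    face_match_score: float  # similarity between the selfie and the document photo

def decide(signals: VerificationSignals) -> str:
    """Return 'approve', 'reject' or 'human_review'."""
    if signals.tamper_score > 0.9 or signals.liveness_score < 0.2:
        return "reject"        # strong evidence of forgery or spoofing
    if (signals.tamper_score < 0.1
            and signals.liveness_score > 0.95
            and signals.face_match_score > 0.9):
        return "approve"       # all signals clearly pass
    return "human_review"      # ambiguous: route to a compliance analyst

print(decide(VerificationSignals(0.05, 0.97, 0.93)))  # approve
print(decide(VerificationSignals(0.40, 0.80, 0.70)))  # human_review
```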

AI has been a game changer in the realm of crypto compliance: it has revolutionised how eKYC is performed by enabling faster identity verification, fraud detection and real-time risk analysis. However, we sometimes miss the importance of safeguarding the human element essential for trust and accountability. It is important to note that, even as AI automates many aspects of compliance, the complexity of human behaviour, the nuance of regulations, and the stakes involved mean that keeping humans in the loop remains critical.

In this article we explore the growing role of AI in eKYC, emphasise the importance of human oversight in AI-assisted compliance, and consider how a symbiotic relationship between machines and humans is shaping the future of regulatory technology (RegTech) in the crypto industry.

The rise of AI in eKYC

eKYC, as part of a broader AML/CFT framework, involves verifying the identity of customers, understanding their financial behaviour, and assessing risks. In traditional finance, KYC is labour-intensive and prone to bottlenecks; in the borderless, fast-paced crypto world, these challenges are amplified. AI’s integration into KYC processes has transformed the way financial institutions and crypto platforms verify and monitor their customers, emerging as a solution to these pain points by:

  • automating ID verification
  • enhancing verification processes
  • flagging suspicious behaviour
  • enabling real-time monitoring, and
  • streamlining and speeding up onboarding.

These capabilities are no longer theoretical. Leading platforms such as Ledn, Kraken, and EtherFi already use AI-powered KYC services, often provided by third-party RegTech firms like Jumio, Onfido, and SumSub. However, reliance on RegTech and AI alone raises critical questions about accuracy, accountability, and ethical responsibility.
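
As one illustration of the real-time monitoring capability above, the sketch below scores each incoming transaction as it arrives and queues suspicious ones for human review. The rules, field names and thresholds are hypothetical heuristics, not any platform’s actual logic.

```python
# A minimal sketch of rule-based real-time transaction monitoring.
from collections import deque

REVIEW_QUEUE: deque = deque()  # flagged transactions awaiting a human analyst

def score_transaction(tx: dict, history: list[dict]) -> float:
    """Return a simple risk score in [0, 1] from illustrative heuristics."""
    score = 0.0
    if tx["amount_usd"] > 10_000:
        score += 0.4                      # large single transfer
    recent = [h for h in history if tx["ts"] - h["ts"] < 3600]
    if len(recent) > 10:
        score += 0.3                      # rapid-fire activity within one hour
    if tx.get("counterparty_risk") == "high":
        score += 0.3                      # e.g. a mixer or sanctioned-adjacent address
    return min(score, 1.0)

def monitor(tx: dict, history: list[dict], threshold: float = 0.5) -> None:
    """Escalate suspicious transactions rather than auto-blocking them."""
    if score_transaction(tx, history) >= threshold:
        REVIEW_QUEUE.append(tx)

monitor({"amount_usd": 25_000, "ts": 1_700_000_000,
         "counterparty_risk": "high"}, history=[])
print(len(REVIEW_QUEUE))  # 1 flagged transaction awaiting human review
```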

Maintaining integrity and trust

Despite the capabilities of AI, the human element is crucial in maintaining the integrity and trustworthiness of compliance processes. While AI significantly enhances the efficiency and scalability of KYC processes, it is not infallible. AI systems can provide vast amounts of data and insights, but human judgment is essential in interpreting and acting upon this information. Algorithmic bias, lack of context, and false positives are real risks, especially when dealing with diverse global user bases, nuanced regulatory requirements and evolving threat landscapes. In our view, human involvement is not merely recommended but essential, as the following examples illustrate.

1. Interpreting complex cases

  • AI excels at pattern recognition and statistical analysis, but it can struggle with ambiguous or ‘edge’ cases. For example, documents from certain regions may not conform to typical formats, confusing optical character recognition (OCR) tools (see the sketch after this list).
  • Human analysts bring cultural, legal, and contextual understanding that AI cannot replicate, allowing for better judgment in complex scenarios.
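
A minimal sketch of such edge-case routing follows, assuming a hypothetical OCR output of extracted fields and per-field confidences; the field names and the 0.8 threshold are illustrative only.

```python
# A minimal sketch of edge-case routing for document OCR. When extracted
# fields are missing or low-confidence (as can happen with regional
# document formats the model was not trained on), the case is routed to
# a human analyst instead of being auto-decided.
REQUIRED_FIELDS = {"name", "date_of_birth", "document_number"}

def route_ocr_result(fields: dict[str, str], confidences: dict[str, float]) -> str:
    """Return 'auto_process' or a 'human_review' flag with the reasons."""
    missing = REQUIRED_FIELDS - fields.keys()
    low_conf = [f for f, c in confidences.items() if c < 0.8]
    if missing or low_conf:
        # Ambiguous extraction: a human analyst brings the contextual
        # knowledge (regional formats, transliteration, naming conventions)
        # that the OCR model lacks.
        return f"human_review (missing={sorted(missing)}, low_confidence={low_conf})"
    return "auto_process"

# A document whose date of birth failed to extract and whose number was
# read with low confidence is escalated rather than auto-processed.
print(route_ocr_result(
    {"name": "A. Example", "document_number": "X123"},
    {"name": 0.95, "document_number": 0.6},
))
```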

2. Ensuring fairness and reducing bias

  • AI models can inadvertently replicate biases present in their training data. For instance, facial recognition technologies have historically underperformed on darker skin tones, raising fairness concerns. It is therefore important to audit AI outputs regularly for bias, as sketched below.
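
Below is a minimal sketch of one possible audit, comparing automated rejection rates across demographic groups and flagging large disparities for human investigation; the sample data and the 1.25 disparity ratio are hypothetical.

```python
# A minimal sketch of a bias audit on automated eKYC decisions.
from collections import defaultdict

def rejection_rates(decisions: list[tuple[str, str]]) -> dict[str, float]:
    """decisions: (group_label, outcome) pairs, outcome in {'approve', 'reject'}."""
    totals, rejects = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        rejects[group] += outcome == "reject"
    return {g: rejects[g] / totals[g] for g in totals}

def audit(decisions: list[tuple[str, str]], max_ratio: float = 1.25) -> list[str]:
    """Flag groups rejected disproportionately often versus the best-treated group."""
    rates = rejection_rates(decisions)
    baseline = min(rates.values())
    return [g for g, r in rates.items() if baseline and r / baseline > max_ratio]

sample = [("group_a", "reject")] * 30 + [("group_a", "approve")] * 70 \
       + [("group_b", "reject")] * 10 + [("group_b", "approve")] * 90
print(audit(sample))  # ['group_a']: investigate before trusting the model
```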

3. Regulatory compliance and accountability

  • Financial regulators, including the Securities and Exchange Commission (SEC), Financial Crimes Enforcement Network (FinCEN), and the EU’s AML authorities, emphasise the importance of explainability in AI systems. When compliance decisions affect users’ ability to access financial services, they must be explainable and auditable.
  • Human reviewers can provide a transparent layer of review and documentation, thereby reducing legal risks related to algorithmic discrimination or errors.
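
One possible shape for such a transparent, auditable record is sketched below. The schema and field names are hypothetical, but the idea is that every automated decision is logged with its inputs and model version, and any human override is recorded alongside it.

```python
# A minimal sketch of an auditable decision record (hypothetical schema).
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str
    model_version: str
    model_decision: str            # e.g. 'approve' / 'reject' / 'escalate'
    model_inputs: dict             # the features the model actually saw
    human_reviewer: str | None = None
    human_decision: str | None = None
    rationale: str | None = None   # free-text justification for overrides
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    case_id="KYC-0042",
    model_version="doc-verify-1.3",
    model_decision="escalate",
    model_inputs={"tamper_score": 0.4, "liveness_score": 0.8},
    human_reviewer="analyst_17",
    human_decision="approve",
    rationale="Regional ID format; manually verified against issuer registry.",
)
print(json.dumps(asdict(record), indent=2))  # append to a tamper-evident log
```

Appending such records to a write-once log gives regulators and internal auditors a reviewable trail for every automated and human decision.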

4. Handling adverse media and sanctions screening

  • AI tools use natural language processing (NLP) to scan news, watchlists, and social media for negative mentions. But context matters: in some instances human intervention is necessary to determine whether, for example, a news article containing adverse media is speculative or factual.
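
The sketch below illustrates that escalation logic with a simple keyword heuristic. A production system would use a full NLP model; the list of hedging terms is only a hypothetical starting point.

```python
# A minimal sketch of triaging adverse-media hits for human review.
HEDGING_TERMS = {"allegedly", "reportedly", "rumoured", "suspected",
                 "unconfirmed", "according to sources"}

def triage_adverse_media(article_text: str) -> str:
    """Route speculative stories to a human; let factual ones feed scoring."""
    text = article_text.lower()
    hits = [t for t in HEDGING_TERMS if t in text]
    if hits:
        # Speculative wording: a human must judge whether the story is
        # factual before it affects the customer's risk rating.
        return f"human_review (speculative markers: {sorted(hits)})"
    return "auto_score"

print(triage_adverse_media(
    "The executive was allegedly linked to the scheme, sources said."))
```
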
Regulatory outlook and the future of human-AI collaboration

Global regulators are becoming increasingly aware of AI’s role in financial compliance and are developing guidelines accordingly.

1. The EU AI Act classifies certain AI systems used in financial services as ‘high-risk’, requiring transparency, human oversight, and bias mitigation. This classification mandates strict requirements, including:

  • technical documentation, record-keeping, and transparency
  • human oversight: mandatory human oversight is required to monitor AI systems and intervene when necessary
  • bias mitigation: there is an emphasis on data quality and measures to mitigate bias in AI systems.

2. The Financial Action Task Force’s (FATF) updated guidelines for Virtual Asset Service Providers (VASPs) emphasise the need for identity verification, record-keeping, and suspicious activity reporting. FATF continues to update its guidelines to address emerging risks in virtual assets such as:

  • customer due diligence (CDD): VASPs must implement the same preventive measures as financial institutions, including CDD
  • record-keeping: there is an obligation to maintain records of transactions and customer information
  • suspicious activity reporting: there is a requirement to report suspicious transactions to relevant authorities.

3. The US SEC and FinCEN have highlighted challenges related to the transparency, explainability, and accountability of AI-driven decision-making processes within financial services.

These regulatory trends underscore the importance of ‘human in the loop’ models. As rules evolve, platforms that maintain human oversight will be better positioned to adapt quickly and avoid compliance missteps.

Best practices for crypto platforms

To leverage AI in eKYC while keeping humans in the loop effectively, crypto platforms should:

  • implement layered review protocols: allow AI to handle low-risk cases, while escalating higher-risk ones to trained analysts (see the sketch after this list)
  • continuously audit models: monitor for accuracy, fairness, and compliance regularly
  • document decisions: maintain clear records of both AI and human actions for auditability
  • invest in training: ensure compliance teams understand AI outputs and limitations
  • design ethical AI systems: involve compliance, legal, and data privacy experts in system design.
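
As a closing illustration of the first practice, here is a minimal sketch of a layered review router. The tier names and thresholds are hypothetical and would in practice be calibrated to the platform’s risk appetite.

```python
# A minimal sketch of a layered review protocol: AI disposes of low-risk
# cases, analysts take medium risk, and senior review handles high risk.
def assign_review_tier(risk_score: float) -> str:
    """Map a case risk score in [0, 1] to a review tier."""
    if risk_score < 0.3:
        return "auto"            # AI decision stands; spot-checked in audits
    if risk_score < 0.7:
        return "analyst"         # trained analyst reviews the AI output
    return "senior_analyst"      # high stakes: an experienced human decides

for score in (0.1, 0.5, 0.9):
    print(score, "->", assign_review_tier(score))
```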

Fostering a synergy

The integration of AI into eKYC processes for crypto compliance represents a leap forward in efficiency and accuracy, and AI is undeniably transforming eKYC in the cryptocurrency industry at unprecedented scale. However, the human element remains indispensable in ensuring ethical standards, interpreting complex cases and building trust. By fostering a synergy between AI and human oversight, crypto platforms can navigate the intricate compliance landscape effectively, safeguarding both their operations and their customers.

AI offers powerful tools for KYC and compliance in the crypto sector, but it is the combined efforts of technology and human expertise that will pave the way for a secure and trustworthy financial ecosystem. The future of crypto compliance lies in this harmonious blend, ensuring that innovation and integrity go hand in hand. As crypto platforms strive for global legitimacy and regulatory compliance, the fusion of AI capabilities with human judgment will define the next generation of trustworthy, scalable and resilient financial systems.

About the authors

Ruan Botha & Jaco Janse van Rensburg

Ruan Botha is Co-Founder & Co-Chief Operations Officer at Provenance. Jaco Janse van Rensburg is IT Manager at Provenance. https://provenancecompliance.com/