Written by James Thomas on Monday February 22, 2021
What challenges and opportunities does artificial intelligence present, and how can risk and compliance practitioners best respond? James Thomas reports on recent ICA Roundtable events.
Artificial intelligence (AI) promises to deliver extraordinary opportunities, but also introduces significant risks that compliance practitioners must understand, anticipate and respond to. These risks are real and rapidly evolving, yet the rules, laws and codes governing AI are not always able to keep up with the pace of technological change, creating a considerable challenge for compliance.
ICA recently held a series of online Roundtable events, sponsored by Wolters Kluwer, to explore the practical challenges of AI from the perspective of Risk and Compliance professionals, and to give ICA members and partners an opportunity to share their current concerns and experiences (both positive and negative) of AI.
The Roundtables were attended by senior individuals working within the financial services sector across the globe, including Heads of Compliance, MLROs, Heads of Monitoring, CROs, and Conduct Risk Managers. The discussion was led by Janet Adams, a disruptive tech advocate and long-term AI enthusiast with over two decades of experience in banking risk and technology, who shared with participants some of her academic research from a recently completed Master’s degree in AI.
Following a brief overview of AI technologies – discussing the distinctions between machine learning, deep learning and natural language processing, and highlighting both their transformative potential and the key challenges that they pose – Roundtable groups were asked to identify their main concerns with regard to these technologies. Although the participants were at a variety of stages in their AI ‘journey’, some common themes emerged, with “Decision Making and Governance” the most commonly cited concern, followed by “Monitoring RegTech and AI”, “Accountability and Explainability”, and “Training”.
These topics, and their overlaps, formed the focus of the remaining discussion.
Given the complexity of AI technologies and the speed at which they have developed, participants reported that levels of understanding regarding how AI technologies work, and of the outputs of these technologies, varied considerably both across their industries and organisations, as well as within compliance teams. This resulted in practical hurdles when selecting and implementing suitable technological solutions as well as when establishing AI-related training needs. As one participant put it: “Part of the problem is knowing what to train in. AI is being used in diverse ways in different parts of the industry, and there is considerable potential for the business to run away and Compliance to struggle to keep up. It’s hard to know where to focus your energies.”
Further, diverse workforces (in terms of job function, specialism, age and experience) have a corresponding diversity of training needs with regard to AI. For example, according to one participant: “There’s something of a divide between those that ‘get’ AI and those that don’t. And often that divide is between the younger and the older employees.” Given that AI has the features of a general-purpose technology – i.e. it is expected to impact all aspects of the world around us – it was agreed that adopting a ‘head in the sand’ approach to AI training is simply not an option. All business functions will need at least some understanding of AI and of what it means to the organisation and their role within it.
As well as shortfalls in knowledge and understanding of AI within the business, many also highlighted the need for much greater understanding of risk and compliance issues amongst technology providers. As Janet Adams suggested: “There is a huge communication piece here. The data scientists and tech specialists generally don’t have strong risk awareness. Training needs to work both ways. We as Risk and Compliance need to understand the tech, but the tech specialists need to understand the risk and compliance considerations too.” In addition, it was suggested that risk and compliance teams may, increasingly, need to include a risk and compliance data scientist as a matter of course.
Concerns regarding shortcomings in knowledge and understanding of AI and, more broadly, of the challenges of vendor selection and engagement, also featured strongly in the discussion around the accountability for, and explainability of, AI-derived decisions. How can you select the right solution for your organisation if you don’t understand how AI solutions work? Further, how can you be accountable for the results of an AI solution if you can’t explain how it operates?
Participants agreed on the importance of selecting the right technology partner when implementing AI, with an emphasis on the word ‘partner’. One individual highlighted “the difficulty of trying to select the right provider for an AML solution given that there are over a hundred providers in the market”. Another suggested that vendors must get better at explaining how their products work, particularly as solutions are now running ahead of the understanding of regulators, businesses and, often, vendors themselves. “Solutions can feel like a ‘black box’,” they suggested. “I have had discussions with vendors where they weren’t keen to explain how the underlying technology works, but if you can’t understand how the technology works, how can you implement it?”
Janet Adams reported that such experiences are not uncommon. “Nine times out of 10, people reaching out to me trying to sell me products don’t seem to understand the technology themselves to any great depth,” she warned. “I propose that a risk-based approach to explainability should be adopted, with the degree of risk depending upon the use case. If you’re dealing with a high-risk use case that directly impacts a customer, you would need extremely high confidence in the explainability. You therefore need a case-level understanding of the technology in order to select a technology partner and understand the level of associated risk to be managed.”
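To make that risk-based approach concrete, the sketch below maps a use case to an illustrative explainability requirement. This is a hypothetical tiering written for this article; the function, criteria and tier descriptions are assumptions, not an established framework.

```python
# A minimal sketch of a risk-based approach to explainability:
# the explainability bar rises with the risk of the use case.
# The criteria and tiers below are hypothetical illustrations.

def required_explainability(customer_impacting: bool, fully_automated: bool) -> str:
    """Map a use case to an illustrative explainability requirement."""
    if customer_impacting and fully_automated:
        return "high: case-level explanation of every individual decision"
    if customer_impacting:
        return "medium: explanation on request, with a human in the loop"
    return "low: aggregate model documentation and periodic review"

# e.g. an automated decision that directly impacts a customer
print(required_explainability(customer_impacting=True, fully_automated=True))
```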
Elsewhere, participants were wary of off-the-shelf solutions, again due to the lack of clarity regarding how such products work and whether they are applicable to the specific circumstances of the organisation. “Many AI tools in the market come with predefined, preset rules, and you may not be able to fully understand the underlying rationale and logic of the results that they produce,” suggested one individual. “We took care to select a vendor that could clearly explain the logic of how the AML system was working.”
Given the limitations both of compliance practitioners’ technological know-how and of tech providers’ grasp of risk and compliance, finding a common language is essential. As one participant put it: “For those who aren’t tech-savvy, there is a halfway house conversation that can be had around the logic of decision making. I understand the logic of decision making, so the vendor should be able to explain to me the logic of the technology’s decision making process.”
These concerns about explainability and accountability were underlined by a broader unease regarding potential loss of control, associated in particular with the use of deep learning algorithms for decision making. As Janet Adams explained: “Instead of teaching the computer programme what the correct outcome is, as in traditional computational methods that use ‘If … Then’ rules, with AI we feed the algorithm a lot of data and let it work out the ‘Then’ by itself. Therein lies the heart of the issue, because at that point we are losing control. We are allowing the computer to derive its actions from previous data.”
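The distinction Adams draws can be made concrete in code. The sketch below contrasts a hand-written ‘If … Then’ rule with a model that derives its decision boundary from historical data; the field names, thresholds and data are hypothetical illustrations, not examples from the Roundtables.

```python
# A minimal sketch contrasting explicit rules with learned decisions.
# All field names, thresholds and data are hypothetical.

from sklearn.tree import DecisionTreeClassifier

# Traditional approach: the outcome logic is written down in advance.
def rule_based_alert(amount: float, cross_border: bool) -> bool:
    # 'If ... Then': the decision logic is fully known before deployment.
    return amount > 10_000 and cross_border

# Machine-learning approach: the algorithm derives the 'Then' from data.
# Each row is (amount, cross_border_flag); labels are past outcomes.
historical_features = [[500, 0], [12_000, 1], [9_000, 1], [15_000, 0]]
historical_labels = [0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2)
model.fit(historical_features, historical_labels)

# The same case, decided two ways. The model's decision boundary comes
# from the data rather than from an explicit rule, which is precisely
# where control is ceded.
print(rule_based_alert(11_000, True))
print(model.predict([[11_000, 1]]))
```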
Some participants were particularly uncomfortable about the possibility that algorithms could amplify bias, for example in a client onboarding context. However, as Janet Adams pointed out, bias is more a product of poor governance and historic data than of the technology itself. “In most cases, algorithms highlight bias rather than amplify it,” she explained. “If there is bias within the organisation and the organisation’s pre-existing data is used to train the algorithm, then the algorithm will reproduce it and it can be identified through testing. Therefore, in practice, adopting AI presents a great opportunity for a step change in the eradication of bias. This points to the need for compliance by design at the outset of an implementation project: you choose the most appropriate algorithm and then apply rigorous governance in the design of any feature of the system. Bias needs to be considered at the very outset and then continuously monitored and audited throughout the system lifecycle.”
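As a simple illustration of how such testing might look in practice, the sketch below compares positive-outcome rates across groups in historical onboarding data and flags disparities against a ‘four-fifths rule’-style threshold. The groups, outcomes and threshold are hypothetical assumptions, not a prescribed method.

```python
# A minimal sketch of testing data or model outputs for bias:
# compare positive-outcome rates across groups. Data are hypothetical.

from collections import defaultdict

def outcome_rates_by_group(records):
    """records: iterable of (group, outcome) pairs, with outcome in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag any group whose rate falls below `threshold` times the best
    group's rate (an illustrative 'four-fifths rule' style check)."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

onboarding_history = [("A", 1), ("A", 1), ("A", 0),
                      ("B", 1), ("B", 0), ("B", 0)]
rates = outcome_rates_by_group(onboarding_history)
print(rates)                  # group A: ~0.67, group B: ~0.33
print(flag_disparity(rates))  # group B is flagged for review
```

Run continuously, a check of this kind supports the ongoing monitoring and auditing of bias throughout the system lifecycle that Adams describes.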
The message is that good governance must underpin all aspects of an AI project, from procurement to implementation and monitoring. This should include:
Auditable risk assessment and cost/benefit analysis taking into consideration key principles of AI design
Monitoring and testing of AI outcomes and data inputs against key risks and principles (a minimal monitoring sketch follows this list)
Appropriate oversight of AI systems and clear accountability of senior management with sufficient understanding of AI.
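The monitoring sketch referenced in the list above might, at its simplest, compare the live system’s alert rate against the rate observed at validation. The baseline, tolerance and escalation logic below are hypothetical illustrations.

```python
# A minimal sketch of monitoring AI outcomes against expectations:
# flag when the live alert rate drifts from the validated baseline.
# Baseline, tolerance and decisions below are hypothetical.

def check_outcome_drift(live_decisions, baseline_rate, tolerance=0.05):
    """Return the live positive rate and whether it has drifted more
    than `tolerance` from the validated baseline."""
    live_rate = sum(live_decisions) / len(live_decisions)
    return live_rate, abs(live_rate - baseline_rate) > tolerance

# e.g. validated at a 10% alert rate; this week's outcomes run far hotter
live_rate, drifted = check_outcome_drift(
    [1, 1, 0, 1, 0, 0, 1, 1, 0, 1], baseline_rate=0.10
)
if drifted:
    print(f"Escalate for review: live rate {live_rate:.0%} vs baseline 10%")
```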
Data governance, in particular, was flagged as an often-underestimated component of any AI project, not least because the output of an algorithm will only ever be as good as the input data. According to Janet Adams: “An AI project in financial services is likely to be 10% AI; 40% data gathering, cleansing, normalising, checking, and de-biasing; and 50% compliance and governance. It’s easy to knock together an algorithm, but getting all your data together and in good shape can be a challenge, and then testing, monitoring and governing it safely throughout its journey could be as big an effort as the other two elements, depending on the risk level of the use case.”
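A sense of what that data effort involves can be given with a few basic checks, run before any training: completeness, duplication and range validation. The column names, bounds and sample rows below are hypothetical.

```python
# A minimal sketch of pre-training data quality checks.
# Column names, bounds and sample rows are hypothetical.

import pandas as pd

def data_quality_report(df: pd.DataFrame, amount_bounds=(0, 1_000_000)) -> dict:
    low, high = amount_bounds
    amounts = df["amount"].dropna()
    return {
        "rows": len(df),
        "missing_values": int(df.isna().sum().sum()),
        "duplicate_rows": int(df.duplicated().sum()),
        "out_of_range_amounts": int((~amounts.between(low, high)).sum()),
    }

transactions = pd.DataFrame({
    "amount": [120.0, 98_000.0, None, -50.0, 120.0],
    "country": ["GB", "DE", "GB", "FR", "GB"],
})
print(data_quality_report(transactions))
# {'rows': 5, 'missing_values': 1, 'duplicate_rows': 1, 'out_of_range_amounts': 1}
```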
Above all, the takeaway message for participants was not to be daunted by the pace of change in this field and, equally, not to be fearful of asking questions of their organisations and product providers.
“There is a reluctance among banking professionals to say ‘what is AI?’,” suggested Janet Adams, “but my single biggest tip is: put your hand up and say when you don’t understand something. Keep saying that you don’t understand until you, as a compliance professional, are happy that you have the answer that you need. AI is massively complex, so nobody looks stupid when asking questions about it. Now is the time to invest in yourself and really learn.”
The ICA Roundtable events were sponsored by Wolters Kluwer.