
Are companies ready to trust artificial intelligence?

Written by Holly Thomas-Wrightson on Monday August 2, 2021


A recent roundtable on Managing Resources while Managing Regulatory Change, sponsored by Wolters Kluwer and organised by ICA, explored the importance of balancing machine learning (ML) and artificial intelligence (AI) with human intelligence and intervention.

Jonathan Bowdler, global lead for the postgraduate programmes at ICA, led the discussion. He started by talking generally about the idea of AI, and how it is – or can be – used in businesses.

In a poll, delegates were asked about the use of AI in compliance management at their companies. 44% felt their companies under-utilise AI, and another 37% said that AI wasn’t used at all (to their knowledge). Only 15% felt there was the right combination of human and artificial intelligence, and the remaining 4% felt AI was over-utilised.

This poll led into the main discussion topic: Is there a ‘right’ balance between AI and human intelligence within an organisation? And if so, what is it, and how and why might it change?

There was a wide variety of answers to this. Importantly, it quickly became clear that there was uncertainty about the definition of AI itself. The concept is often muddied by adjacent terms that are frequently treated as interchangeable, such as automation and machine learning – especially as, in many companies, these technologies are more widely used than AI itself. The definition Jonathan Bowdler gave of actual AI was ‘using computers to mimic the human brain’, which for the most part seemed to be outside the participants’ direct experience.

Many participants reported that their experience was of simpler forms of the technology: systems given set parameters to apply to data, often with binary ‘pass’ or ‘fail’ criteria, used to sift through huge quantities of records and flag anything indicating money laundering risk, or people who may be politically exposed persons (PEPs).
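A minimal sketch of that kind of rule-based screening, in Python, might look like the following (all field names, thresholds and watchlist entries are invented for illustration):

```python
# A toy pass/fail screen: fixed parameters applied to each record.
# Thresholds, jurisdiction codes and watchlist names are hypothetical.

AMOUNT_THRESHOLD = 10_000            # flag transactions above this value
HIGH_RISK_COUNTRIES = {"XX", "YY"}   # placeholder jurisdiction codes
PEP_WATCHLIST = {"Jane Doe"}         # placeholder PEP names

def screen(record: dict) -> str:
    """Apply the fixed criteria to one record; return 'pass' or a failure reason."""
    if record["amount"] > AMOUNT_THRESHOLD:
        return "fail: amount over threshold"
    if record["country"] in HIGH_RISK_COUNTRIES:
        return "fail: high-risk jurisdiction"
    if record["name"] in PEP_WATCHLIST:
        return "fail: possible PEP match"
    return "pass"

records = [
    {"name": "John Smith", "amount": 2_500, "country": "GB"},
    {"name": "Jane Doe", "amount": 500, "country": "GB"},
]
flagged = [(r["name"], screen(r)) for r in records if screen(r) != "pass"]
print(flagged)   # [('Jane Doe', 'fail: possible PEP match')]
```

Everything such a system ‘knows’ is hard-coded in those parameters – which is precisely why, as the delegates went on to discuss, its output still needs human review.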

One person mentioned that their company had a more sophisticated system using voice analytics, which would monitor an ongoing call and raise a prompt to the employee if it recognised that they had not mentioned something needed to remain compliant. This, they reported, made it a preventative tool rather than a reactive one: instead of a recorded call being reviewed afterwards and feedback given once a breach had already occurred, the technology caught and raised the issue during the call, allowing the staff member to avoid the risk.
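As a rough illustration of that in-call prompt idea (the system described used voice analytics; here a plain-text transcript and invented disclosure phrases stand in for the real thing):

```python
# Toy compliance prompter: check a running transcript for required
# disclosures and prompt the agent about any that are still missing.
# The required phrases below are invented examples.

REQUIRED_PHRASES = [
    "this call is recorded",
    "you have a 14-day cooling-off period",
]

def missing_disclosures(transcript: str) -> list[str]:
    """Return the required phrases not yet spoken in the transcript."""
    text = transcript.lower()
    return [p for p in REQUIRED_PHRASES if p not in text]

live_transcript = "Good morning! Just to let you know, this call is recorded."
for phrase in missing_disclosures(live_transcript):
    print(f"PROMPT: remind the customer that '{phrase}'")
```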

Many attendees reported that, at least at the technology’s current level, the results of any AI or machine learning process still need to be reviewed manually by a human, whether to check for mistakes or to make the final decision. The general view was that the technology should sort through big data and then ‘advise’, giving recommendations for its human users to check, rather than being given complete power to approve or reject something itself.

This illustrated a consistent feeling, summarised by Jonathan Bowdler and seconded by the delegates: a general hesitancy to trust – and, in particular, to hand over control to – these digital solutions.

One delegate mentioned hearing of a company that constantly has to hire more people to process huge data sets on its systems, while also dealing with high turnover as those staff leave for more engaging work – an approach that is unsustainable in the long run.

This, they said, is where many companies – if they educate their staff past these concerns – would see a major benefit in installing a trusted, reliable AI system to take over mundane, repetitive tasks, freeing staff to focus on value-add activities. On the other side, of course, are the worries of many in the industry about the stability of their working future, not to mention the many other jobs that may fall into the AI-replaceable bracket as the technology improves. Again, education and transparency are integral here as a way to show commitment to staff members and defuse their concerns.

One of the consistent themes was that many companies are still far from introducing technology that closely mimics human intelligence, as AI was defined earlier in the session. The technology the delegates have used is restricted to the simpler forms described above: set parameters, tick boxes or keywords applied to data sets, churning out any entries that do not meet the set criteria.

Even then, issues were raised with how well that approach works – for instance, the quality of the company’s data being provided to the system in the first place. If the data is out of date, in an incompatible format, or missing information the parameters need, the technology cannot work effectively and false results become far more likely.
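To make that concrete, even a simple data-quality gate of the kind implied here would reject records before they reach the screening step (the field names and staleness window are assumptions):

```python
# Sketch of a data-quality gate: reject records that are incomplete
# or stale before they are fed to any screening system.

from datetime import date, timedelta

REQUIRED_FIELDS = {"name", "amount", "country", "last_updated"}
MAX_AGE = timedelta(days=365)   # hypothetical staleness window

def is_usable(record: dict) -> bool:
    """A record is usable only if complete and recently updated."""
    if not REQUIRED_FIELDS <= record.keys():
        return False            # missing information the parameters need
    return date.today() - record["last_updated"] <= MAX_AGE

record = {"name": "John Smith", "amount": 2_500, "country": "GB",
          "last_updated": date.today() - timedelta(days=30)}
print(is_usable(record))        # True: complete and fresh
```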

It became clear in conversation that business models need to change. While companies may look to AI as a way to cut time-consuming activities, expensive new systems applied to bad data, or handed to untrained and demoralised staff to operate, will not deliver the successes those companies might be dreaming about.

Likewise, there is the issue of biases being built into machine learning and AI systems. If the sample data used to train a system’s algorithms is flawed or features unconscious bias, the information that comes back out is going to reflect those problems. A prime example is recent research on an AI system used in job interviews.[1]

The system was intended to filter out human biases and judge candidates on quantifiable strengths and weaknesses such as openness and agreeableness. The results showed that scores could be noticeably swayed by a bookcase in the background, by the candidate wearing glasses, or by the video quality of the interview recording.

These accidental biases and issues can, however, be mitigated by putting thorough test and review processes in place to identify and counteract them. They can be further reduced by ensuring diversity among the people designing the algorithms, and that their knowledge goes beyond the workings of the technology to encompass the content it is being designed for.
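One simple shape such a test-and-review step can take is checking whether model scores shift on an attribute that should be irrelevant – say, whether the candidate wears glasses. A minimal sketch (all scores and the tolerance are invented):

```python
# Compare average model scores across an attribute that should not
# matter; a large gap suggests the model has learned a spurious cue.

from statistics import mean

scores_with_glasses = [0.62, 0.58, 0.65, 0.60]   # hypothetical scores
scores_without      = [0.71, 0.74, 0.69, 0.73]

gap = mean(scores_without) - mean(scores_with_glasses)
TOLERANCE = 0.05   # hypothetical acceptable difference

if abs(gap) > TOLERANCE:
    print(f"Potential bias: {gap:.2f} score gap on an irrelevant attribute")
```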

The approach of regulators was also discussed, including whether firms can look to them for guidance on AI-related technologies. In the UK, for instance, the Financial Conduct Authority is taking a more progressive approach than many other regulators, stating that it wants ‘consumers to benefit from digital innovation, and competition. This includes data-based and algorithmic innovation.’[2]

However, as was pointed out, not every jurisdiction’s or industry’s regulator will have guidance in place, or as advanced a view, and in those areas firms may need to lead the way carefully themselves.

In closing the meeting, Jonathan summarised the main discussion points: that delegates felt actual AI is under-utilised, as opposed to the greyer area of ML; that there is excitement about the possibilities AI offers – a reduction in time-consuming manual tasks and more time for value-add activities; that many still feel they don’t fully understand and/or trust it enough to hand over control of final decisions; and that this will be a major hurdle to overcome if we are to achieve a healthy balance of AI and human intelligence in the future.

 

[1] J. Fergus, ‘A bookshelf in your job screening video makes you more hirable to AI’, Input, 18 February 2021: https://www.inputmag.com/culture/a-bookshelf-in-your-job-screening-video-makes-you-more-hirable-to-ai – accessed July 2021

[2] FCA, ‘Data analytics and artificial intelligence (AI)’, 27 October 2020: https://www.fca.org.uk/firms/data-analytics-artificial-intelligence-ai – accessed July 2021

