Artificial or human intelligence – A compliance challenge


This article is a free excerpt from inCOMPLIANCE, ICA's bi-monthly, member-exclusive magazine.

Does AI threaten to undermine our ability to think critically? Tim Tyler discusses.


‘We live in a society exquisitely dependent on science and technology, in which hardly anyone knows anything about science and technology.’

Carl Sagan wrote this at a time when artificial intelligence (AI) was scarcely a twinkle in anyone's eye, but it captures the challenge we currently face as AI explodes into almost every area of our lives.

Another great thinker and visionary, Yuval Noah Harari, reflected that we must control AI before it controls us. Whether you wholly agree with this view or not, it once again highlights how important it is that humans understand the technology and have the ability to think critically, to challenge, and to take ownership of thoughts and decisions. It is these skills that this article addresses: how they can atrophy if not used, and the steps we can take to manage the risk.

Before turning to the skills themselves, we need to consider how AI is evolving to reduce, even negate, the need for humans to use (and therefore to develop) these abilities.

Figure 1: A growth in AI in parallel with a reduction in human skills?

AI – The trajectory

The launch of ChatGPT in late 2022 was sensational: it reached 100 million users within two months, a faster uptake than for any other product in history. Moreover, while this was the best-known modern AI product, it was by no means the only form available or in development.

The technology has since been deployed across banking, retail and e-commerce, manufacturing, transportation, logistics, healthcare, marketing, agriculture, travel, and entertainment.

This growth looks set to continue, even accelerate, as we move from what has been described as 'Narrow AI' (solutions characterised by domain-specific capability, such as ChatGPT) to 'General AI' that will be able to operate across different domains without human intervention. Other anticipated AI developments include collective AI or swarm intelligence, quantum AI, neuro-symbolic AI and embodied AI.

This growth is driven by software innovation, hardware advancement, near-exponential scaling, and intense investment at both organisational and state level.

Human skills – For the gaps?

Activities that were once undertaken exclusively by humans, such as health diagnostics, driving and strategic planning, are now routinely delegated to a variety of AI applications. This raises the question: what will be left for human engagement in the future? Will we need, for instance, to redefine work and our role as human agents?

Many reports have been published that seek to address the question of human skills in an AI age. These typically focus on, for instance, the need for emotional intelligence, creativity, intuition, leadership and vision. The message seems to be: "Don't worry about our future as humans in the workplace; there are many activities and functions that AI cannot undertake, thereby preserving the role of human beings in the value chain."

It is possible that the mindset behind this kind of reassurance is unhelpful, even dangerous. If we allow human value in the workplace to be defined by what AI cannot do, we become hostage to AI not developing further than it already has, or beyond how we currently envisage it developing. But what if AI is able to reach higher forms of (for instance) emotional intelligence, creativity and intuition? This mindset will squeeze the human role into an ever-diminishing and marginalised space (see Figures 2 to 4).


Figure 2: An aspirational position in which human and AI skills are complementary and balanced.

Figure 3: As human ability shrivels and that of AI expands, this equilibrium may be lost, so that the human role is marginalised.

Figure 4: A dystopian outcome in which human skills are swallowed up within AI, at least theoretically.

This is part of the reason we need a clear philosophy of AI and, more immediately, clarity around how humans and AI interact, hardwiring human agency, control and primacy. More than this, it illustrates how vital it is for humans to develop key skills, such as critical thinking and AI literacy, to stay relevant in the conversation.

Cognitive atrophy

What impact will AI have on human skills? This article can do little more than scratch the surface of this urgent but challenging question. Current indicators suggest that, rather than enhancing key skills like critical thinking, the presence of AI in the workplace will lead to reduced capability in the short, medium and long term, even at a generational level.

Research undertaken by KPMG into the use of, and attitudes towards, AI in Australia has revealed several notable findings:

  • While only 24% of respondents had received formal training in the use of AI, some 48% felt they could use AI tools effectively.
  • 57% had relied on AI output at work without evaluating its accuracy.
  • 51% had presented AI-generated content as their own.
  • 42% had relied on AI to do a task rather than learning how to do it themselves.

The picture the report paints is one of complacency and a willingness to outsource thinking to AI. This shouldn't be surprising. The psychologist and compliance thought leader Paul Eccleson describes how hard thinking uses energy, so our brains try to limit how much of it we need to do by developing routines, habits and assumptions that fast-track to outputs while minimising the need for effort.

When we are presented with a task in the workplace, we have choices to make. Take the example of a customer complaint. We can carefully evaluate the information provided, if necessary researching the context, relevant organisational policy and previous experience of the issue. In this way we develop a holistic understanding of the problem, take appropriate action and, for instance, craft a series of carefully worded messages.

Or, in a busy working environment, we might simply put the details into ChatGPT or a similar platform and ask it what steps should be taken, along with a draft of the relevant messages. In this way we don't apply our minds to the issue and its resolution but allow AI to do the heavy lifting. We may have successfully outsourced the thinking, but what will be the implications for our ability to do the task in the future?

The brain, much like a muscle, needs regular exercise to stay sharp. When it’s not engaged, it starts to lose flexibility and efficiency. We don't develop our ability to think critically by keeping it in a box, well away from the demands and rigours of the workplace, but by exposing it to challenge, the risk of failure, and, when things do go wrong, the valuable learning this gives rise to.


At a generational level, the use of AI to insulate us from difficult decisions may lead to a cadre of individuals whose minds haven't been fully developed in the crucible of the workplace, and with this, a loss of creativity, critical thinking and wisdom.

At a time, then, when human beings need to step up to respond to the extraordinary power and reach of this encroaching technology we may find ourselves going backwards, ceding still more of the decision making, and with this, the power, to AI. 

AI deployment – Managing the cognitive risks

AI is here to stay. We have little choice but to embrace the efficiency, value and insight it offers. But can we do this responsibly, with a clear-eyed understanding of the cognitive risks involved? Four measures are suggested below.

The fundamental principle of 'human in the loop' usefully establishes the primacy of human engagement with AI, including the development, training and oversight of systems, together with final decision making (based, for instance, on AI recommendations). Preserving this principle will help buffer against the loss of cognitive skills. There is, however, a risk that it is sacrificed over time on the altar of efficiency and speed if it isn't hardwired into systems, with clarity around what, for instance, ‘final decision making’ actually means. This links to the principle of transparency, another vital requirement that is in danger of being marginalised for similar reasons.

Organisational values define the behaviours that are prized and, over time, mainstreamed across a business. Establishing personal accountability as fundamental mitigates against the routine and careless outsourcing of decision making to AI solutions. There will, of course, need to be clarity around what this means in practice, but now is a good time to reinforce the principle that we are, as individuals, accountable for our decisions even if AI helps us to reach them.

To manage AI systems whilst maintaining our cognitive abilities, we need to understand what the systems are doing. The importance of skills like AI literacy is touched on below, but, assuming for the moment a good appreciation of the technology, there is also a key need to understand deeply the activity itself, whether this is a manufacturing process, a customer engagement or a compliance imperative. This depth of understanding is vital in establishing AI-driven solutions. It is reasonable to ask, perhaps even to establish as a principle, that an expert human can replicate the activity when called upon – albeit at a simplified, individual or local level rather than at scale. If this cannot be achieved, we have arguably lost control of the technology and will struggle to hold it to account.

Returning to Carl Sagan's quote that opened this article, we have the opportunity to develop a deeper and more relevant understanding of the technology. AI literacy will better enable us to conceive, plan, build, train, test, oversee and hold AI solutions to account. It is urgently needed, as we are already deploying such solutions across our businesses.

AI deployment – A compliance responsibility

The development and deployment of AI solutions therefore need care, recognising the various layers of risk that may not always be obvious. Research published by Gan Integrity sheds light on how this deployment is currently being handled. It concludes that IT leads the deployment of AI in most organisations. Perhaps not surprisingly, the level of governance maturity is weak, lagging behind AI adoption rates, with under-investment in this area.

Organisations’ compliance capabilities have a vital role to play. We run towards risk, embracing and addressing the issues through well-established governance and risk frameworks, and the ability to adapt to new challenges. How, then, do we respond to this new challenge? What does effective AI governance look like? Various models are emerging and, with them, a need to determine what works within our own organisation. Microsoft Azure's approach to AI deployment identifies seven imperatives.

  1. Clear definitions and ethical principles.
  2. Assigned accountability.
  3. Defined risk appetite.
  4. Risk-based assessment processes.
  5. Enhanced data governance.
  6. Proportional controls for AI systems.
  7. Robust procurement practices.

This is useful, but the medium and longer-term impact of AI use on employees' cognitive skills is at best inferred. Perhaps there is an opportunity to call it out. In the meantime, echoing Yuval Harari's warning mentioned at the start of this article, Marc Rotenberg, Director at the Centre for AI & Digital Policy, puts it starkly: ‘Either we will govern AI, or AI will govern us’.

Research for this article included the use of AI.
AI was not used at any stage in the design and drafting of the article.

About the author

Tim Tyler

Tim Tyler is Vice President of the International Compliance Association.