This article is a free excerpt from inCOMPLIANCE, ICA's bi-monthly member-exclusive publication.
With AI developing at an exponential rate, it’s crucial that firms have appropriate governance frameworks in place, writes Neil Jennings
This is a practical article with actionable takeaways on AI governance. But allow me to indulge briefly in the philosophical, which, when it comes to AI, I find inescapable.
If you didn’t watch the ICA’s recent broadcast (‘Compliance as an Enabler – What’s the Story? Part 2’) with Tim Tyler and Andrea Bonime-Blanc, the LinkedIn post promoting it featured what is perhaps the most apt quote on AI today, from Eliezer Yudkowsky: ‘By far the greatest danger of artificial intelligence is that people conclude too early that they understand it.’
Socrates would have approved of this message. His paradox is that awareness of our own ignorance is the key element of wisdom – this is where we seek truth, test our beliefs and, ultimately, learn. As physicist John Archibald Wheeler put it, ‘as our island of knowledge grows, so does the shore of our ignorance.’
AI is advancing rapidly, and its pace of development is quickening. It is therefore critical that we stop, think and consider both the threats and the opportunities, to balance innovation with fairness, and growth with safety.
Defining ‘AI governance’
AI governance is simply how we manage the risks associated with the use and development of AI at a corporate level. It involves the thoughtful consideration and practical implementation of controls to ensure alignment with established risk tolerances. It focuses on both strategy and operations (the why and the how) and is a true cross-departmental undertaking to handle threats and exploit opportunities.
Governance can be driven by legislation: for example, a business that must comply with the EU AI Act as the ‘provider’ of an AI system. Governance can also be formal without being a legal requirement, such as an organisation aligning with a recognised framework or standard like NIST’s AI Risk Management Framework or ISO/IEC 42001. And governance can be less formal still: a small or medium-sized enterprise, say, that simply wants to adopt AI in a risk-aware and responsible manner.
Why is all this significant? Well, the Bank of England recently reported that 46% of firms have only a ‘partial understanding’ of the AI technology they use. This is a gap that will only continue to grow unless organisations adopt AI governance frameworks aligned with their objectives. AI isn’t new, but it is increasingly accepted at both a personal and a professional level. Free tools like Gemini, Claude and Copilot create professional-level emails, presentations and memos. More sophisticated tools also exist with the potential to profoundly impact lives, such as safety features in self-driving taxis or quality control tools in architecture.
Why is AI so risky?
AI risks are shaped by strategic objectives, the goals for using AI, the regulatory landscape, and an organisation’s overall risk appetite. What makes AI particularly risky is the pace of advancement, the speed of adoption, and a degree of complacency about the technology.
Not every organisation faces the same AI risks, but the risks certainly go far beyond the professional embarrassment of having your ‘expert memo’ exposed as AI-generated. The following are some of the main risk categories.
- Intellectual property: battles over (i) who can obtain IP rights over AI-generated output, and (ii) what happens when IP-protected content is used to train AI or as input when prompting AI, will continue to evolve.
- Privacy and confidentiality: how do you know your staff aren’t inputting confidential information? Is it appropriate to use personal data? Are expectations clear and simple? Do your incident processes cover AI-related data breaches and unauthorised access?
- Liability and legal compliance: does your AI introduce bias? Is it fair? Is it designed to provide safety, protection or professional advice? The AI you use and create must be fit for purpose.
- Data issues: did you get permission to use data for training? Do you even need to? Is the AI outputting accurate information that people or businesses rely on? What about annotation quality, or data drift over time?
- Loss of opportunity: this is the ‘positive’ risk, because AI represents huge potential for growth and innovation. If your competitors are using AI and gaining an advantage, then not using it is itself a risk. That said, this is not a licence to be reckless or to ignore proper planning.
Compliance leaders
It’s important to recognise that the compliance department needs a seat at the table. Compliance departments are in a prime position to build and implement robust AI governance frameworks. They are also critical advisors, through risk identification and assessment, in the overall decision on whether or not to use AI systems.
AI governance cannot be undertaken in isolation, or as a side-of-desk initiative, and compliance leaders can absolutely take the lead in spearheading it. This can be an effective strategy because the compliance skillset typically includes:
- understanding and explaining legal and regulatory requirements
- creating compliance processes
- managing complex training
- monitoring process effectiveness, and
- reporting to senior leaders.
Every compliance leader has a different starting point. If senior leaders are already considering AI governance, the task is easier. If AI governance is not a strategic priority (perhaps it’s seen as a purely technical initiative, ‘just like cybersecurity’), then it requires influence. Likewise, if different departments and business units are not used to collaborative effort, there will be more inertia to overcome before momentum builds.
Practically, compliance departments can lead from the front by:
- educating themselves on the basics of AI and its governance
- sparking initial conversations with relevant people
- understanding the operational state of AI use and governance, and
- listening to senior leaders’ desire to use AI, focusing on the why (both to mitigate risk and to promote safe and effective adoption of AI).
AI governance frameworks do not need to be complicated and are based on (i) understanding what management wants to achieve, and (ii) connecting the right people to have the right conversations.
Who needs to be involved?
To be fully effective, AI governance frameworks require input and coordination across departments and at various levels. A robust AI committee will acknowledge this in the following ways.
- Senior leaders must provide buy-in and continuously promote AI governance. In some cases, this will be part of the strategic plan. Words and actions must align to ensure compliance culture is not just a buzzword.
- IT and technical teams play critical roles, including network management, vendor screening, data analysis, operational infrastructure and information security.
- Talent and culture teams deal with people, and AI adoption is about more than technology. You can have the best policy on the planet, but it’s worthless if your staff don’t know about it or don’t follow it.
- Legal, risk and compliance departments take different forms. It is important to coordinate to ensure appropriate allocation of responsibility and use of expertise, including internal audit functions.
- All staff will use AI tools in some form, so each team member plays a vital role in the governance process by default.
What does good look like?
There is no off-the-shelf AI governance framework that you can purchase and install overnight. But there are some very clear guiding principles that should be in every organisation’s approach to governing AI.
- Mandate from management: this is the tone at the top, compliance culture, executive buy-in. Whatever you call it, AI governance cannot happen without a definite mandate, communicated clearly to the entire organisation, with resources and budget to match.
- AI literacy: after a green light from management, AI literacy is the single most critical part of AI governance. If your staff aren’t literate in AI, your AI governance framework is compromised and, if the EU AI Act applies to you, potentially in breach of its AI literacy requirements. Focus on literacy for specific roles rather than a one-size-fits-all approach.
- Staff capabilities and resourcing: roles and responsibilities must be clear. What is the end goal? Is that achievable now? Do you have the correct expertise, or do you need to do some resource planning?
- HR and IT change management: policies and processes will change and your teams must be aware. Think about enforcement, monitoring, reporting, and escalations, as well as how to merge with existing structures if there are major changes.
- Third parties: you will likely use third-party AI systems or general-purpose AI (GPAI) models. It is imperative to address AI explicitly when negotiating technology agreements (data protection, liability and so on).
- Technical requirements: as well as the technical capability to run AI systems, you must monitor, test and repair those systems where there is bias, inaccuracy, data drift or any other type of issue that could cause harm (a simple illustration of drift monitoring follows this list).
- Holistic risk assessments: these should deal with both legal and ethical risks, and related controls and action plans. Trustworthy and responsible AI has real meaning and has come to be expected. As with all risk assessments, reputation, financial and operational considerations come into play too.
- Stakeholder engagement: good AI governance demands input from different stakeholders. If you are leading governance, how you engage with colleagues will form the basis for the entire governance and risk management ecosystem.
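The article does not prescribe any particular tooling, but as a purely illustrative sketch, the short Python example below shows one simple way a technical team might screen a numeric model input for data drift: comparing live values against a training-time baseline with a two-sample Kolmogorov–Smirnov test. The feature values, sample sizes and significance threshold are hypothetical.

```python
# Illustrative sketch only: one simple way to screen a numeric model input
# for data drift, by comparing live values against a training-time baseline
# using a two-sample Kolmogorov-Smirnov test. The data and threshold below
# are hypothetical examples, not a prescribed approach.
import numpy as np
from scipy.stats import ks_2samp


def drift_detected(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if the live distribution differs significantly from the baseline."""
    result = ks_2samp(baseline, live)
    return result.pvalue < alpha


if __name__ == "__main__":
    rng = np.random.default_rng(seed=42)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature distribution at training time
    live = rng.normal(loc=0.4, scale=1.0, size=5_000)      # shifted distribution seen in production
    print("Drift detected:", drift_detected(baseline, live))
```

In practice, a control like this would sit within the monitoring, reporting and escalation processes described above, with thresholds, owners and remediation steps agreed as part of the wider governance framework.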
No matter the regulatory landscape or the geopolitical crisis of the day, one thing is for sure: trying to do AI governance without involving the right people and asking the right questions is a losing battle. Attaining a level of clarity on threats, opportunities and the ‘why behind the AI’ will help us navigate our shore of ignorance, and allow us to use AI in an advantageous and responsible way.