Untangling the AI knot

This article is a free excerpt from inCOMPLIANCE, ICA's bi-monthly, member-exclusive magazine.

Compliance professionals possess all the tools and knowledge to lead on AI risk and governance, says Neil Jennings.

The ancient Greeks had a knack for making stories resonate. Take the tale of the Gordian knot. In the story, Alexander the Great cleaves a seemingly unbreakable knot in half, instead of trying, like everyone before him, to untangle it. The lesson is unambiguous: bold, decisive action is sometimes necessary. Shortly afterwards, Alexander swept across much of Asia as conqueror.

The current state of AI governance, regulation, technical development and geopolitical jostling would perhaps benefit from the lesson of this Grecian yarn. For legal, risk and compliance professionals, however, it prompts a critical question: do we ‘cut the knot’ and conquer the digital world with swift, perhaps even reckless, action? Or do we patiently work together to untangle the complexities of AI risk management, to the genuine benefit of innovation, business and consumers?

Global regulatory landscape

Amid technical progress, regulation and political friction, we see chaos at the macro level within every jurisdiction. Governments grapple with uncertainty; they don’t know precisely what they want, nor what other governments intend to do. They simultaneously desire safety and innovation; many are concerned about over-regulation. The result? A highly complex regulatory dynamic.

Europe leads the way with the EU AI Act, a risk-based regulation that places different obligations on different operators of AI systems and general-purpose AI models. While it is being phased in over a number of years, some major provisions are already active – the ban on ‘prohibited’ AI systems and the requirement for providers and deployers to ensure sufficient AI literacy are two examples. There have been conflicting reports about a potential pause to enforcement, but in July 2025 a Commission spokesperson confirmed there would be no pause. And despite delays, a final version of the voluntary General-Purpose AI Code of Practice was published in July 2025.

Beyond Europe, an uneven and disjointed AI regulatory landscape prevails.

  • UK: the AI (Regulation) Bill has stalled, with the government pivoting toward an innovation-first strategy, supported by its January 2025 AI Opportunities Action Plan and promoting sector-specific guidance. Formal regulation appears to have been delayed until mid-2026 while issues like ‘safety’ and ‘copyright’ are considered. In June 2025, the Data (Use and Access) Act became law, amending the UK General Data Protection Regulation (UK GDPR) and including provisions on non-consensual ‘deepfakes’.
  • US: the ‘Safe, Secure, and Trustworthy AI’ Executive Order was revoked in January 2025. In May 2025, the House of Representatives passed the ‘One Big Beautiful Bill’, which included a 10-year moratorium on the enforcement of state-level AI regulation. However, in July the Senate voted 99-1 to strike the moratorium before the final act was passed. At state level, many AI-adjacent laws exist, such as biometric laws (e.g. in Illinois and Washington) and those covering automated decision making, like the California Consumer Privacy Act (CCPA). Several states have recently enacted AI-specific laws, including Colorado’s AI Act, the Responsible AI Governance Act (Texas) and the Responsible AI Safety and Education Act (New York).
  • Canada: the Artificial Intelligence and Data Act dropped off the legislative slate and looks unlikely to return. Canada’s inaugural AI minister noted in June 2025 that Canada will not ‘over-index’ on AI regulation, but confirmed that privacy is critical to regulation and that intellectual property is a high priority. Currently, AI governance is pieced together through privacy legislation, like the Personal Information Protection and Electronic Documents Act (PIPEDA) and Quebec’s Loi 25, and the government’s 2023 Voluntary Code of Practice.
  • Australia’s federal and state governments have produced some valuable resources (like the New South Wales ‘mandatory’ Ethical AI Principles), but there is no alignment on national regulation. Treasurer Chalmers recently stated that there is ‘overwhelming focus on capabilities and opportunities, not just guardrails.’

Asia and the Middle East

  • China is continuing its rapid and distinct AI advancement, prioritising national strategy over global alignment, with a children’s AI education pilot scheme set to start in September 2025. It is also filing significantly more AI patent applications than any other country, and appears to be focusing on specific international AI collaborations.
  • South Korea’s Basic Act on AI focuses on safety, transparency, innovation and risk management. In June 2025, the Special Law for the Promotion of AI Industry bill was introduced, aiming to enhance AI industry competitiveness and R&D.
  • Japan’s AI Promotion Act is active, with a government-backed AI risk and development task force expected by the end of 2025.
  • The United Arab Emirates stands out for its clarity and speed, building dedicated AI ministries, issuing national strategies, and even offering ChatGPT Pro free to citizens.
  • Malaysia and Vietnam could follow in the EU’s footsteps with EU AI Act-style legislation. Vietnam’s Law on Digital Technology Industry comes into effect in January 2026, with concepts like human oversight and high-risk AI categories.

Global collaborative efforts

Some global initiatives paradoxically signal the importance of AI regulation and risk management, but demonstrate a lack of concrete decision-making at government level.

  • The Council of Europe Framework Convention only becomes binding once it has been ratified by at least five of its 16 signatories, three of which must be Council of Europe member states. To be clear, the EU is counted as one signatory, and nobody has ratified it at the time of writing!
  • The AI Principles of the Organisation for Economic Co-operation and Development (OECD) were formally adopted by the G20 in 2019, although Russia, China and India did not adopt them individually. These principles form the backbone of many AI risk and governance frameworks.
  • In 2023, the G7 countries agreed the Hiroshima AI Process framework, setting out broad AI risk management principles such as monitoring, transparency and accountability; it now has more than 50 official supporters. However, the 2025 G7 Summit in Canada painted a slightly different picture, with the group’s draft statement entitled ‘AI for Prosperity’.
  • The BRICS countries met in Brazil in July 2025, publishing the Leaders’ Statement on the Global Governance of AI.
  • The US, UK, Australian, New Zealand and South Korean government cyber departments published a Joint Cybersecurity Information Document, focusing on data security governance.
  • Various standards and technical bodies are creating highly relevant risk management and governance frameworks, such as the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, ISO/IEC 42001 and 42005, and IEEE’s 7000 series of standards.

The result? It is very easy to get lost in all the noise.

Implications for compliance and governance

The above paints a picture of global regulatory fragmentation. There is uncertainty and inconsistency, especially for businesses operating internationally. Compliance teams (and those involved in AI governance and risk management) must navigate the resulting operational issues.

  • Aligning on acceptable risk: this requires input both from high-level decision-makers and subject matter experts, which inherently takes time and careful coordination.
  • Operationalising policy: what do your policies mean in practice? Is effective training provided to relevant staff? How well do you screen third-party AI vendors?
  • Allocating resources: are you partway into a multi-year strategic plan? Is AI part of it? If not, are you able to take meaningful, funded steps to address it?
  • Managing stakeholders: how do you navigate competing expectations from customers, investors, and internal teams? What’s the right pace of growth? At what risk tolerance?
  • Constant reactive state: compliance departments understand firefighting emerging issues. Most would rather lead proactive responsible AI projects from the outset.

Yes, there is chaos at a global and political level. There is jurisdictional complexity and uncertainty. There are rapidly changing tools and capabilities. Regulators and consumers have expectations.

Yet a major part of responsible AI requires that initiative be taken at a corporate level: grappling with concepts like risk tolerance and AI use cases, analysing threats versus opportunities, and then establishing an AI risk framework specific to your business and stakeholder needs.

The compliance department can be a strategic partner. The starting point is tracing the common themes of ‘good’ AI governance – themes that consumers and other stakeholders will not only understand but demand: transparency, fairness, accountability, literacy, trust and security. Luckily, these are concepts that compliance departments know well. Adopting a principles-based approach to AI governance frameworks maintains a critical layer of objectivity.

The ‘baseline AI governance’ starter kit

Proactive governance can start with compliance teams, which are in a perfect position to work with key departments and to design and oversee the tools used to assess internal AI use. These are simple, low-friction actions that drive risk-aware decision making in relation to AI.

The checklist below is a great starting point for understanding your current AI risk and governance capabilities, maturity level and gaps to address:

Awareness and monitoring

  • Are you aware of the AI tools currently in use?
  • Is there ongoing monitoring of AI system outputs?
  • Is AI behaviour within expectations?

Core risk domains

  • Fairness and bias
  • Privacy and confidentiality
  • Accuracy
  • Security
  • Functionality

Response and escalation

  • Are reporting and compliance mechanisms in place?
  • Are roles and responsibilities understood?
  • Are there internal escalation processes?

AI impact assessments

  • Establish assessment threshold
  • Clarify due diligence purpose
  • Review all risk categories across supply chain

Recordkeeping and auditability

  • Is there a robust recordkeeping process?
  • Is sufficient detail captured in logs?
  • Are there audits at certain intervals?

Containment and mitigation

  • Are there clear steps to contain incidents?
  • Are they understood by all staff?
  • What mitigating factors are in place?

Key post-incident questions

  • Are any action plans required?
  • Have relevant controls been updated?
  • Do any gaps remain?
  • Are any notifications required to customers, partners, or regulators?

In addition to the above self-assessment checklist, a simple AI risk and governance oversight structure can include:

  • AI use policy – short and simple, covering common generative AI tools (Gemini, ChatGPT, Claude) that your staff use. This should serve as a ‘common sense reminder’.
  • Basic AI literacy – relevant to roles, with quick reference guides and other short, helpful information documents.
  • AI tool mapping – start small, get a feel for what you use and how, and what decisions tools influence (this will help you understand operator roles under the EU AI Act); a simple illustrative register is sketched after this list.
  • Third-party AI vendor screening – critical to ensure you understand the relevant risks, like privacy, IP and supply chain (your privacy programme should partly address this already, so no excuses!).
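
For teams that want something slightly more structured than a spreadsheet, the tool-mapping exercise can be captured in a very simple register. The sketch below is purely illustrative – the field names and the needs_review rule are assumptions, not a prescribed format – but it shows the kind of information worth recording for each tool and how a basic flag for follow-up might work.

```python
# Minimal, illustrative AI tool register (hypothetical structure, not a prescribed format).
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str                  # e.g. "ChatGPT"
    business_use: str          # what the tool is actually used for
    decisions_influenced: str  # decisions the output feeds into, if any
    operator_role: str         # assumed role under the EU AI Act, e.g. "deployer"
    personal_data: bool        # does the use case involve personal data?
    vendor_screened: bool      # has third-party vendor screening been completed?

def needs_review(record: AIToolRecord) -> bool:
    """Flag tools that influence decisions or touch personal data without vendor screening."""
    return (bool(record.decisions_influenced) or record.personal_data) and not record.vendor_screened

register = [
    AIToolRecord("ChatGPT", "drafting marketing copy", "", "deployer", False, True),
    AIToolRecord("CV screening tool", "shortlisting applicants", "hiring decisions", "deployer", True, False),
]

for record in register:
    if needs_review(record):
        print(f"Review needed: {record.name} ({record.business_use})")
```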

Untangling the knot

This is not a complete AI risk and governance framework. However, simple models can provide solid foundations to build upon, regardless of what regulation comes next. We don’t need to swing a sword at the AI governance knot. Instead, we need to start untangling it, slowly and steadily. We can build adaptable frameworks based on the risk tolerance we already work within and on principles we can explain clearly.

Compliance professionals already have the tools to lead the effort: an understanding of risk, the ability to clarify complex issues and experience in applying principles-based thinking. We may never fully untangle the AI knot, but we can move forward with confidence, promoting risk-aware practices in the knowledge that we don’t need to wait for perfection before taking action.

About the author

Neil Jennings

Neil Jennings is a solicitor, consultant and compliance director in the tech sector. He advises businesses on emerging challenges in AI governance, privacy compliance and risk management. He is a Fellow of the ICA.