Decoding the EU AI Act: A framework for compliance?


By Gary Duncan, 8 April 2024

The EU AI Act was approved by the European Parliament on 13 March 2024. The landmark legislation is the world’s first comprehensive legal framework for developing and using AI, and it establishes strict rules to ensure that AI systems do not infringe on human rights.

In March, ICA hosted a webinar to discuss what the Act means for organisations in Europe and beyond. The webinar was hosted by ICA Event Producer Elliott Turner and featured Charles Kerrigan, a lawyer at CMS in London who specialises in AI, crypto and digital assets.

AI is here to stay

Turner began by stressing that AI isn’t just a buzzword for the future – it’s already here, it’s here to stay, and organisations need to start planning now for its safe and responsible integration.

For compliance and risk professionals, Turner suggested, AI could have an even greater impact than the General Data Protection Regulation (GDPR).

Kerrigan said he has worked in AI since 2010, helping AI developers to build new systems and enable their safe adoption. It is a huge topic, he said, and particularly significant because AI is a general-purpose technology, meaning everyone will come across it in their working lives.

‘People need to understand how AI relates to their role in the organisation,’ he said, ‘and how the organisation, culturally and technically, will work with AI. It’s something for everyone.’

‘Scary good’

Kerrigan said that AI systems are not new – especially in financial markets, where they have long been used for tasks like fraud detection and credit scoring – but everything changed in November 2022 with the launch of ChatGPT, which, for the non-technical community, marked the beginning of generative AI. ‘There was a famous quote from around that time from Elon Musk, who described it as “scary good”, and I think that was everyone’s experience of it,’ Kerrigan said.

AI systems are very good pattern-recognition tools, he added, which is why they are widely used in fraud detection – but regulators are only comfortable with them when there is human oversight.

Hefty fines for non-compliance

Kerrigan said the Act will impact every industry, so organisations need to familiarise themselves with the new legislation. And they need to start now, before the Act comes into effect in two years’ time.

In terms of non-compliance and litigation, Kerrigan said AI could follow a similar path to GDPR. ‘We anticipated that GDPR would give rise to a lot of litigation, but it didn’t,’ he said. Organisations understood how to comply with it and regulators provided good guidance. ‘I think we’ll have standards and practices that support the terms of the Act,’ he said, ‘and that will be something the compliance community will be part of.’

Organisations that fail to comply, however, could pay a hefty price, with fines of up to €35 million or 7% of their global annual turnover, whichever is higher.

Kerrigan said the Act will encourage organisations to focus on three main areas: identification, risk assessment, and reporting and remediation.

Organisations can’t carry out a risk assessment until they know what they are using, he explained. ‘For years, the first conversation we’ve had with lots of large firms is when we ask them to list their AI systems and how they are using them. That turns out to be a hard question to answer,’ he said.

In terms of risk assessment, it is difficult to argue that it’s not worth doing, he explained. For reporting and remediation, organisations might be doing all the right things, but if an issue arises with a regulator or a customer, they need to be able to show that they followed the proper policy and have the documentation to prove it. ‘Why wouldn’t you do that, even if you are not directly subject to the Act?’ Kerrigan asked.

Compliance will play a leading role

Kerrigan said compliance teams will play a key role in implementing AI systems, along with organisations’ internal and external lawyers. RegTech tools could be the best way to approach these new technologies, especially where organisations have up to 10,000 AI deployments that can’t be checked manually one at a time.

Kerrigan emphasised that it starts with identifying and listing the AI deployments within the organisation. Some of these are brought into the business without an ‘AI badge’ on them, he said, so some are more apparent than others. ‘We already write policies for people, regardless of the Act,’ he said. ‘We’re supporting people to ask: “How does this fit into our usual methodology for ensuring that we’re compliant with direct and indirect regulation?” A lot of that is having policies that relate to the organisational view of the use of the technology.’

Some organisations, he said, want to be at the cutting edge, while others prefer to take a more conservative approach, and that can change over time.

It’s all about people

Kerrigan concluded by focusing on the human aspect of the Act.

‘It’s a compliance role,’ he said. ‘It’s about having policies, human oversight, reporting. It’s about recordkeeping.’

Organisations should consider how people will access AI-enabled services and how those services can be built and delivered. ‘It’s an opportunity to be able to go to market and say to your customers that we are safe adopters of AI. Look what it’s done for our business, look how you’re benefiting from it. And look how there is human oversight.’

The EU’s philosophy behind the Act, he said, is based on human rights and human dignity – a phrase, he pointed out, that often crops up in the EU’s literature on the Act. ‘You and your organisation can be known to be ahead of the curve in safe adoption, in ethical adoption. That’s one of the things, I think, that is super interesting for the community.’

The full webinar, ‘Decoding the EU AI Act: A framework for compliance?’, is available to ICA members via our Learning Hub.

For more information and to sign up to our upcoming ICA webinars, visit our events page.