Risk, Compliance and Cultural Change Series

Written by Paul Eccleson on Monday 1 August, 2022

Behavioural science: A tool for successful cultural change

Be honest – risk and compliance can sometimes feel like an uphill struggle.

Many of you will have experienced commercial colleagues ignoring your advice, operations staff trying to work around controls, or senior managers secretly wishing that regulatory demands and ethical considerations simply didn’t exist. The compliance professional’s lot, it sometimes feels, is a Sisyphean one.

In this series on culture change, my aim is to demonstrate how studying human behaviour can help alleviate some of the challenges of our profession, helping you, your colleagues and your firm.

But before I do, I first have to admit that this is a series also born of frustration. Let me explain.

Behavioural insights

The tools traditionally deployed by organisations to achieve cultural change are blunt instruments. Performance evaluations, bonuses, change management training, staff surveys and intranet sites crammed full of policies form the core of most firms’ armouries. These tools have been used for many years, and their operational track record isn’t great. Indeed, they can seem like medieval forms of medicine – well intentioned, but without any grounding in science.

Real cultural change requires an understanding of the drivers of human behaviour. And the most effective means of grasping these drivers is through behavioural science.

And this is where my frustration comes in. Few organisations consult behavioural science when seeking to shape their internal culture, despite strong evidence that it works. The British government has employed a ‘Behavioural Insights Team’ for over a decade. This unit works on how best to implement government policies using insights from behavioural science,[1] and it has applied its findings in a range of interventions, from improving vaccination take-up and GP cancer referrals to boosting exam results and encouraging green investment. The science offers simple, cost-effective interventions that can dramatically improve outcomes and even save lives. So why are these well-founded techniques not more widely used?

A practical example – policies

Behavioural insights can help design systems that work with human beings rather than against them. Consider your firm’s policies: do they say things like ‘documents containing sensitive personal information must not be saved to a shared drive on our network’? Such a policy sets colleagues up to break the rule – after all, they may not know that certain files contain personal data, nor what constitutes ‘sensitive’ personal data.[2] Expecting them, on top of that, to know the security settings of every shared drive is a big ask. If we do not expect the rule to be complied with, why do we write it?[3]

What if we designed our systems and processes to mitigate these human risks? If we know that our colleagues have a tendency to store personal data in openly shared areas, why not employ automated controls to counteract that behaviour? File scans for zip codes, automated document mark-up, data leak prevention to stop such files being emailed – all of these controls exist and have been used successfully. Yet, for many of you reading this, I’d bet that the only controls you have are an aspirational policy and a training programme for the pesky IT users who keep doing this.
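To make the first of those controls concrete, here is a minimal sketch in Python of a scan over a shared area for files that look as though they contain personal data. It is illustrative only: the share path, the file types and the detection patterns are all assumptions, and a real deployment would use a dedicated data-leak-prevention tool rather than a handful of regular expressions.

```python
import re
from pathlib import Path

# Illustrative patterns only -- a real deployment would use a dedicated
# data-leak-prevention tool with far richer detection than a few regexes.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK National Insurance number": re.compile(
        r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"
    ),
    "US ZIP code": re.compile(r"\b\d{5}(?:-\d{4})?\b"),
}

def scan_share(root: Path) -> list[tuple[Path, str]]:
    """Walk a shared area and flag text files that may hold personal data."""
    findings = []
    for path in root.rglob("*.txt"):  # illustrative: real scans cover many formats
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip rather than abort the whole scan
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.append((path, label))
    return findings

if __name__ == "__main__":
    # "/shared" is a placeholder for wherever your openly shared area lives.
    for path, label in scan_share(Path("/shared")):
        print(f"{path}: possible {label} - flag for review")
```

Even a crude scan like this, run on a schedule, surfaces the problem automatically rather than relying on colleagues to remember a policy they may never have read.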

Applying the science

Very few risk and compliance teams use behavioural science techniques to influence culture. Yet in the same way that compliance with government policy can be improved through informed intervention, so too can the embedding of risk and compliance goals within an organisation.

In this series, I aim to challenge assumptions about business that I believe lack a solid scientific foundation. These misconceptions include:

  • staff behave in a way that is consistent with their expressed attitudes
  • staff go through a ‘change curve’ when we re-design our strategy or organisation
  • mistakes, failures, non-compliance, rule breaches and process slip-ups are most commonly caused by ‘human error’
  • employees are rational adults, and will align with our policies and culture as long as we clearly communicate them
  • ethical business is a concept that we can all buy into and strive towards, and
  • AI can replicate and replace increasingly complex roles in our business.

Now you might consider all or some of the above to be undeniable truisms. But I will argue that they have no basis in the psychological literature. If anything, the evidence suggests the opposite:

  • colleagues will say one thing and do another, especially if you ask them moral questions
  • they often don’t move through a ‘change curve’ from anger to acceptance
  • attributing a problem to ‘human error’ is diagnostically lazy
  • staff do not behave in a rational manner and ‘treating them like adults’ won’t help
  • ethical values are irrational and ‘communication’ won’t solve that, and
  • AI absolutely should not replicate what we do; it must be better than us in important ways.

Listening to psychology

Most organisational goals are noble in intention. Helping our people through traumatic change, embedding ethical values, designing work environments around ‘human factors’ and modernising organisations with powerful technology are, without doubt, laudable aims. And building a compliant, risk-managed, ethical culture is certainly the right thing to strive towards.

We are, however, pursuing these goals in the wrong way. I aim to demonstrate that by listening to what psychology tells us about being human, we can create human-focused organisations that actually achieve them. To do that, we must abandon untested axioms and follow the evidence.

I hope you enjoy the series.


[1] The UK’s Behavioural Insights consultancy team website gives numerous examples of compliance behavioural change on a mass scale. See The Behavioural Insights Team, https://www.bi.team/bit10/ (accessed May 2022).

[2] For instance, the regulatory definition of ‘Special Category Data’ in the EU includes whether or not a person is a member of a trade union, but excludes their bank account details.

[3] I would suggest that the answer is ‘because it’s defensive’. If something disastrous happens to personal data on our systems because of such failures, we can point to the policy and reach a ‘human error’ conclusion.