Written by Neil Hodge, Compliance Week on Tuesday December 17, 2019
The UK’s data regulator, the Information Commissioner’s Office (ICO), has issued guidance to help organisations explain their use of — and reliance on — artificial intelligence (AI) in decision making and how such technology might impact the public.
Working alongside The Alan Turing Institute, the UK’s national institute for data science and AI, the ICO has launched a consultation on the joint draft guidance, called “Explaining decisions made with AI,” which aims to give organisations practical advice to help explain to individuals the processes, services, and decisions delivered or assisted by AI.
In its interim report released in June, the ICO found “context” was key to the explainability of AI decisions — with the majority of people stating that in contexts where humans would usually provide an explanation, explanations of AI decisions should be similar to human explanations. ICO research released in July shows over 50 percent of respondents were concerned about machines making complex automated decisions about them.
The ICO guidance consists of three parts:
Part 1: The basics of explaining AI defines the key concepts and outlines a number of different types of explanations about the use of AI in decision making, as well as the importance of enabling people to challenge the decisions that have been made.
Part 2: Explaining AI in practice helps organisations with the practicalities of explaining these decisions and providing explanations to individuals. This will primarily be helpful for the technical team in an organisation, although data protection officers and compliance teams will also find it useful.
Part 3: What explaining AI means for your organisation goes into the various roles, policies, procedures, and documentation organisations can put in place to ensure they are set up to provide meaningful explanations to affected individuals. This is primarily targeted at senior management teams, although data protection officers and compliance teams will also find it useful.
The draft guidance goes into detail about the different types of explanations organisations can offer, how to extract explanations of the logic a system used to reach a decision, and how to deliver those explanations to the people they concern. It also emphasises the importance of using inherently explainable AI systems.
Six types of explanation
The ICO has identified six main types of explanation about the use of AI in decision making:
Rationale explanation: the reasons that led to a decision, delivered in an accessible and non-technical way.
Responsibility explanation: who is involved in the development, management, and implementation of an AI system, and whom to contact for a human review of a decision.
Data explanation: what data has been used in a particular decision, and how.
Fairness explanation: the steps taken to ensure that decisions are generally unbiased and fair, and whether an individual has been treated equitably.
Safety and performance explanation: the steps taken to maximise the accuracy, reliability, security, and robustness of the system's decisions and behaviours.
Impact explanation: the steps taken to consider and monitor the impacts that the use of an AI system and its decisions has, or may have, on an individual and on wider society.
Source: Information Commissioner’s Office
Additionally, the draft guidance lays out four key principles, rooted in the EU’s General Data Protection Regulation (GDPR), that the ICO says organisations “must consider” when developing AI decision-making systems: be transparent; be accountable; consider the context in which the system operates; and reflect on the impacts of the system on the individuals affected and on wider society.
The ICO’s guidance highlights the risks organisations face if they fail to inform the public about how technology-assisted decision making may impact them, including regulatory action, reputational damage, and public distrust.
But it also raises the potential for risks to organisations that do explain how AI is being used. For example, providing too much information about AI-assisted decisions may lead to increased public distrust due to the complex, and sometimes opaque, nature of the process.
Similarly, too much disclosure may expose commercially sensitive information, while sharing personal data with third parties may violate the GDPR and other data laws. Organisations may also need to guard against the risk that people will “game” or exploit their AI models if they know too much about the reasoning underlying the models’ decisions.
“The decisions made using AI need to be properly understood by the people they impact. This is no easy feat and involves navigating the ethical and legal pitfalls around the decision-making process built into AI systems,” says Simon McDougall, the ICO’s executive director for technology policy and innovation.
The consultation runs until Jan. 24, 2020, and the ICO is accepting comments. The final version of the guidance is expected to be published later in 2020.
This article has been republished with permission from Compliance Week, a US-based information service on corporate governance, risk, and compliance. Compliance Week is a sister company to the International Compliance Association. Both organisations are under the umbrella of Wilmington plc. To read more visit www.complianceweek.com