
Ethical compliance for facial recognition technology


By Manorama Kulkarni, 5 February 2024

Given the robust ethical and legal duty to ensure effective protection of fundamental rights amid emerging technology risks, including threats posed by artificial intelligence (AI)-enabled facial recognition, national and international lawmakers should develop policy strategies that allow their constituents to understand and influence the deployment of this technology.

Facial recognition technology (FRT) has emerged as a controversial method of identifying individuals of interest in investigations. FRT leverages a uniquely sensitive biometric trait that is both immutable and constantly exposed to the public, meaning unfettered use of the technology in law enforcement creates risks for human rights.

Even in countries with a strong commitment to civil liberties, FRT-specific regulations are essential to uphold human rights. A key area to address is the knowledge gap between technology developers and the public and their elected representatives, which creates a concerning information asymmetry. Future approaches should bridge this gap and produce legislation that is driven by informed public preferences and tailored to the specific risks posed by FRT, to ensure respect for human rights in the era of AI.

Use cases

Facial recognition eliminates the need for manual identification and increases security.

Biometric technology uses AI algorithms to identify, verify, and authenticate people based on their facial features. It has an extensive range of applications, from fraud prevention to know your customer (KYC) checks.

In recent years, FRT has become prevalent among businesses seeking to secure their operations and protect their assets. AI can verify IDs against selfies, assess the liveness of images, recognise faces in security camera footage, and more. The technology maps facial features from a photograph or video and then compares that information with a database of known faces to find a match.
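Conceptually, the matching step is a nearest-neighbour search over numerical "embeddings" of faces. The minimal sketch below illustrates the idea only; the toy embeddings, the gallery, and the 0.8 threshold are illustrative assumptions, and production systems derive embeddings from a trained neural network applied to each photograph:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def find_match(probe, gallery, threshold=0.8):
    """Return the identity whose stored embedding is most similar to the
    probe embedding, or None if no similarity clears the threshold."""
    best_id, best_score = None, threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Toy gallery of pre-computed embeddings for two enrolled identities.
gallery = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.9, 0.3],
}
probe = [0.88, 0.12, 0.21]   # embedding extracted from the query face
print(find_match(probe, gallery))  # -> alice
```

The threshold is the key policy lever: set too low, it produces false matches against innocent people; set too high, it misses true matches — which is why accuracy claims for FRT cannot be assessed separately from how the threshold is configured.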

Regulatory landscape

The regulatory environment around facial recognition has become increasingly fragmented.

Regulations are proliferating at all levels of government through restrictions on use of the technology by government agencies, limits on private-sector applications, and rules governing the use of biometric data. This fragmentation both reflects and contributes to a lack of agreement on the right way to regulate facial recognition, or on how to assess the normative implications of its use.

FRT is currently regulated in three ways:

1. Regulation of specific applications of FRT by government agencies
2. Rules limiting use of facial recognition by the private sector
3. Regulations that govern both the public and private sector

Examples at the country level include:

India: The Personal Data Protection Bill, 2019 classified facial data collected through FRT as sensitive personal data. Under a provision in Clause 34, sensitive personal data may be transferred outside India subject to certain conditions, but a copy must be stored in India.

Australia: The Privacy Act governs the collection and handling of personal information, including biometric data. The Office of the Australian Information Commissioner provides guidance on privacy considerations related to facial recognition.

United Kingdom: The post-Brexit United Kingdom has its own data protection laws. The Information Commissioner’s Office provides guidance on the use of FRT, emphasising compliance with data protection principles.

Canada: Privacy laws, including the Personal Information Protection and Electronic Documents Act, apply to use of FRT. The Office of the Privacy Commissioner of Canada has issued guidance on the use of biometric technology.

China: China has rapidly adopted FRT for various purposes, including surveillance and authentication. However, there are concerns about privacy implications, and the country has begun to introduce guidelines and regulations to address the matter.

United States: There is no comprehensive federal law specifically regulating FRT. However, individual states, like Illinois, have enacted biometric privacy laws. There have been discussions at the federal level about potential regulations.

European Union: The General Data Protection Regulation applies and includes provisions related to the processing of biometric data, including facial recognition. Organisations must have a legal basis for processing such data, and individuals have rights regarding the processing of their biometric information.

Globally, there are no standardised human rights frameworks or regulatory requirements that can be easily applied to FRT rollout. In the meantime, data protection impact assessments and human rights impact assessments, together with greater transparency, regulation, audit, and explanation of FRT use and application in individual contexts, can improve FRT deployments.

Policies and procedures

Beyond regulation, there are non-regulatory influences on the use of facial recognition in different contexts.

For example, courts in various jurisdictions have determined that the use of facial recognition is subject to limits under existing human rights laws.

Major corporations developing FRT have announced changes to their internal policies that limit the conditions under which the technology can be deployed. Policymaking and governance of implementation are key non-regulatory means of keeping FRT use under control across a global organisation, with greater focus on ethical ways of implementing the technology.

To ensure the ethical deployment of FRT in accordance with human rights, regulations should encompass the following considerations:

  • Assessment of risks: While FRT may address certain problems, it also introduces inherent risks to human rights, prompting ethical concerns. Before deployment, it is imperative to conduct a thorough assessment of the risks and benefits associated with the FRT application.
  • Ethical value and human rights impact assessments: Before any government or private-sector entity uses FRT, especially in situations where human rights could be affected, there should be a comprehensive evaluation of ethical values and human rights impact.
  • Stronger governance of surveillance imports and exports: In countries lacking FRT regulations or defined policies and procedures, enhanced governance of the import and export of surveillance technologies would provide a more robust framework for ethical FRT use.
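Inside an organisation's governance tooling, considerations like these could be encoded as a pre-deployment gate. The following is a hypothetical sketch — the assessment names and the function are illustrative assumptions, not drawn from any real compliance framework:

```python
# Assessments that must be completed before an FRT rollout proceeds.
# (Names are illustrative placeholders for an organisation's own checklist.)
REQUIRED_ASSESSMENTS = {
    "risk_benefit_assessment",   # assessment of risks vs. benefits
    "human_rights_impact",       # ethical value / human rights impact
    "export_controls_review",    # governance of surveillance import/export
}

def deployment_allowed(completed):
    """Allow a deployment only once every required assessment is done."""
    return REQUIRED_ASSESSMENTS <= set(completed)

print(deployment_allowed({"risk_benefit_assessment"}))  # -> False
```

Treating the assessments as hard preconditions, rather than advisory paperwork, is what turns the considerations above from principles into enforceable policy.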

The right to privacy is fundamental across the globe and should be respected in any FRT use case.

Ongoing concerns

FRT has raised significant concerns regarding potential infringements on human rights. One major downside is the threat to privacy, as individuals might unknowingly be subjected to surveillance without their consent or awareness. The mass collection and storage of facial data pose risks of unauthorised access, misuse, or even breaches, leading to the compromise of personal information.

Moreover, there is the potential for discriminatory practices, as facial recognition systems have demonstrated biases, particularly against certain demographics, ethnicities, or genders. This bias can result in unfair targeting and profiling, infringing upon principles of equality and non-discrimination.

The lack of clear regulations and guidelines for the ethical use of FRT further exacerbates these concerns, leaving room for abuse by governments or private entities. As a result, the deployment of facial recognition systems without robust safeguards poses a real threat to fundamental human rights, including privacy, equal treatment, and protection from unwarranted surveillance.




About the author

Manorama Kulkarni is an India-based executive professional with 23 years of experience in the financial crime and compliance arena, specialising in risk management, anti-money laundering, regulatory compliance, and know your customer operations. Her career includes roles with industry giants such as TCS, Capgemini, and Deutsche Bank, where she contributed to the risk management portfolio and built and led high-performance teams.


This article has been republished with permission from Compliance Week, a US-based information service on corporate governance, risk, and compliance. Compliance Week is a sister company to the International Compliance Association. Both organisations are under the umbrella of Wilmington plc. To read more visit www.complianceweek.com