
Inadequate safeguards in the UK's facial recognition system raise public concerns over privacy and potential misuse.

Warnings Issued by Ada Lovelace Institute on Unchecked Use of Facial Recognition Technology

Unregulated facial recognition technology poses significant risks, warns the Ada Lovelace Institute.


29 May 2025

The Ada Lovelace Institute, a think tank focused on data and AI ethics, has issued a warning about the potential hazards posed by unregulated facial recognition technology, citing the UK's fragmented governance as a significant concern.

The report, released today, claims the current UK approach to regulating facial recognition technology and biometrics is heavily fragmented, leading to a legal grey area that erodes public trust, privacy, and accountability. Urging immediate legislative action, the Ada Lovelace Institute argues that clarification is needed to address the sector's rapid growth across public and private domains.

According to Nuala Polo, UK public policy lead at the Ada Lovelace Institute, existing oversight mechanisms are insufficient to protect individuals from the significant risks posed by biometric technologies. She argues it is no longer tenable to claim that a comprehensive legal framework is in place.

The use of facial recognition technology is not confined to the police; it has been introduced in schools, supermarkets, and railway stations, and is even being deployed with claims that it can detect emotions, attention levels, and honesty. "There is no specific law providing a clear basis for the use of live facial recognition and other biometric technologies," Polo cautioned, adding that the current ad hoc approach is inadequate, particularly given the courts' expectations of robust safeguards and transparency.

The warning comes on the heels of the 2020 Court of Appeal ruling in Bridges v South Wales Police, which found the police's use of facial recognition technology unlawful due to fundamental legal deficiencies. Since then, guidance, principles, and voluntary frameworks have been issued, but the Ada Lovelace Institute contends that these measures are ineffective, with policing governance frameworks remaining insufficient even within that domain.

Outside the police domain, regulation is even less developed, with uncertainty surrounding the legality of many private-sector applications. Michael Birtwistle, associate director at the Ada Lovelace Institute, calls the situation "doubly alarming": the inadequate framework for police use of facial recognition exposes how unprepared the broader regulatory regime is as deployment expands.

The report emphasizes the emerging field of affective computing, in which biometric tools claim to infer people's emotional or mental states based on physiological or behavioral data. The efficacy and ethical implications of such technologies are topics of ongoing debate.

To address these challenges, the Ada Lovelace Institute proposes a new legislative framework that would impose tiered legal obligations based on the risk profile of each biometric application. Under this model, an independent regulator would be empowered to issue binding codes of practice for different contexts of use. The Institute emphasises that, without proper safeguards and a clear legal framework, the rapid roll-out of these technologies risks undermining accountability, transparency, and public trust, while leaving deployers unclear about how to use them safely.

According to Tom Brookes, senior associate at global law firm Ashurst, the use of facial recognition technologies requires greater clarity and cross-regulatory coherence to ensure they can be used in a proportionate and legally compliant manner.

  1. The Ada Lovelace Institute's report on facial recognition technology highlights the need for immediate legislative action, as the current UK approach leaves a legal grey area that undermines public trust, accountability, and privacy.
  2. Nuala Polo, UK public policy lead at the Ada Lovelace Institute, points out that existing oversight mechanisms are insufficient to protect individuals from the significant risks posed by biometric technologies, which are now used in domains such as schools, supermarkets, and railway stations.
  3. The report also addresses the emerging field of affective computing, in which biometric tools claim to infer people's emotional or mental states, sparking ongoing scientific and ethical debate.
  4. To ensure proportionate and legally compliant use of facial recognition technologies, Tom Brookes, senior associate at global law firm Ashurst, advocates greater clarity and cross-regulatory coherence, supporting calls for a new legislative framework with tiered legal obligations and an independent regulator empowered to issue binding codes of practice.
