The UK Information Commissioner’s Office (ICO) has launched an artificial intelligence (AI) and biometrics strategy, which the regulator says will support innovation while protecting people’s data rights.
Published on 5 June 2025, the strategy highlights how the ICO will focus its efforts on technology use cases where most of the risks are concentrated, but where there is also “significant potential” for public benefit.
This includes the use of automated decision-making (ADM) systems in recruitment and public services, the use of facial recognition by police forces, and the development of AI foundation models.
Specific actions outlined for these areas include conducting audits and producing guidance on the “lawful, fair and proportionate” use of facial recognition by police, setting “clear expectations” for how people’s personal data can be used to train generative AI models, and developing a statutory code of practice for organisations’ use of AI.
The regulator added it would also consult on updating its guidance on ADM and profiling, working specifically with early adopters such as the Department for Work and Pensions (DWP), and produce a horizon-scanning report on the implications of agentic AI, which is increasingly capable of acting autonomously.
“The same data protection principles apply now as they always have – trust matters, and it can only be built by organisations using people’s personal information responsibly,” said information commissioner John Edwards at the launch of the strategy. “Public trust is not threatened by new technologies themselves, but by reckless applications of these technologies outside of the necessary guardrails.”
The strategy also outlined how the regulator will focus its efforts on transparency and explainability, bias and discrimination, and rights and redress, because "we consistently see public concern" in these areas.
On AI models, for example, the ICO said it will "secure assurances" from developers about how they are using people's personal information, so that people are aware of how their data is being used, while for police facial recognition it said it will publish guidance clarifying how the technology can be deployed lawfully.
Police facial recognition systems will also be audited, with the findings published to assure people that the systems are being well governed and their rights are being protected.
“Artificial intelligence is more than just a technology change – it is a change in society. It will increasingly change how we get healthcare, attend school, travel and even experience democracy,” said Dawn Butler, vice-chair of the AI All Party Parliamentary Group (APPG), at the strategy’s launch. “But AI must work for everyone, not just a few people, to change things. And that involves putting fairness, openness and inclusion into the underpinnings.”
Lord Clement-Jones, co-chair of the AI APPG, added: “The AI revolution must be founded on trust. Privacy, transparency and accountability are not impediments to innovation – they constitute its foundation. AI is advancing rapidly, transitioning from generative models to autonomous systems. However, increased speed introduces complexity. Complexity entails risk. We must guarantee that innovation does not compromise public trust, individual rights, or democratic principles.”
According to the ICO, negative perceptions of how AI and biometrics are deployed risk hampering their uptake, with trust in particular needed for people to support or engage with these technologies, or the services they power.
It noted that public concerns are particularly high when it comes to police biometrics, the use of automated algorithms by recruiters, and the use of AI to determine people’s eligibility for welfare benefits.
“In 2024, just 8% of UK organisations reported using AI decision-making tools when processing personal information, and 7% reported using facial or biometric recognition. Both were up only marginally from the previous year,” said the regulator.
“Our objective is to empower organisations to use these complex and evolving AI and biometric technologies in line with data protection law. This means people are protected and have increased trust and confidence in how organisations are using these technologies.
“However, we will not hesitate to use our formal powers to safeguard people’s rights if organisations are using personal information recklessly or seeking to avoid their responsibilities. By intervening proportionately, we will create a fairer playing field for compliant organisations and ensure robust protections for people.”
In late May 2025, an analysis by the Ada Lovelace Institute found that "significant gaps and fragmentation" in the existing "patchwork" of governance frameworks for biometric surveillance technologies mean people's rights are not being adequately protected.
While the Ada Lovelace Institute’s analysis focused primarily on deficiencies in UK policing’s use of live facial recognition (LFR) technology – which it identified as the most prominent and highly governed biometric surveillance use case – it noted there is a need for legal clarity and effective governance for “biometric mass surveillance technologies” across the board.
This includes other forms of biometrics, such as fingerprints for cashless payments in schools, or systems that claim to remotely infer people’s emotions or truthfulness, as well as other deployment scenarios, such as when supermarkets use LFR to identify shoplifters or verification systems to assure people’s ages for alcohol purchases.
Both Parliament and civil society have made repeated calls for new legal frameworks to govern UK law enforcement’s use of biometrics.
This includes three separate inquiries by the Lords Justice and Home Affairs Committee (JHAC) into shoplifting, police algorithms and police facial recognition; two of the UK's former biometrics commissioners, Paul Wiles and Fraser Sampson; an independent legal review by Matthew Ryder QC; the UK's Equality and Human Rights Commission; and the House of Commons Science and Technology Committee, which called for a moratorium on LFR as far back as July 2019.
However, while most of these focused purely on police biometrics, the Ryder review in particular also took into account private sector uses of biometric data and technologies, such as in public-private partnerships and for workplace monitoring.