
AI facial recognition is spreading in UK streets and shops — what ordinary people should know

Illustration: a retro-futurist 1950s-style British high street with cameras, shoppers, a police noticeboard and a family discussing privacy.

Facial recognition is one of those AI technologies that can feel distant until it appears in a place you already know: a railway station, a busy shopping street, a football ground or the entrance to a shop. It is no longer only a science-fiction idea or a specialist police tool. It is becoming part of ordinary public life.

The Guardian reports that Britain’s biometrics watchdogs are warning that national oversight of AI-powered face scanning is lagging behind the speed at which the technology is being used. The report says the Metropolitan police has scanned more than 1.7 million faces in London so far this year, while retailers are also using face-scanning systems to try to tackle shoplifting and other crime.

The concern is not simply that the technology exists. It is that the rules, checks and routes for complaint may not be keeping pace with how quickly it is spreading. For ordinary UK readers, this matters because facial recognition is not an app you choose to download. It can affect you while you are walking down the street, entering a shop or attending an event.

What facial recognition is trying to do

Live facial recognition systems compare faces captured by cameras against a watchlist. In policing, that might mean looking for people wanted by the courts or suspected of serious offences. In retail, it might mean trying to identify people a company believes are linked to theft, abuse or repeat incidents.
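To make that concrete, here is a minimal sketch of how watchlist matching typically works under the hood: each face is converted into a numeric "embedding", compared against stored embeddings, and a similarity threshold decides whether to raise an alert. The function names, the threshold value and the use of cosine similarity are illustrative assumptions for this sketch, not any vendor's or police force's actual system.

```python
import numpy as np

# Illustrative sketch only. Real deployments use proprietary
# face-embedding models; here an embedding is just a numpy vector.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, ranging from -1 to 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(face_embedding, watchlist, threshold=0.6):
    """Return the best watchlist match at or above the threshold, if any.

    watchlist: list of (person_id, stored_embedding) pairs.
    threshold: an assumed operating point; raising it means fewer
    false alerts but more missed matches, lowering it the reverse.
    """
    best_id, best_score = None, 0.0
    for person_id, stored_embedding in watchlist:
        score = cosine_similarity(face_embedding, stored_embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_score >= threshold:
        # A hit is a probabilistic signal, not proof of identity.
        return best_id, best_score
    return None, best_score
```

The detail worth noticing is the threshold: every deployment picks a trade-off between false alerts and missed matches, and that choice is as much a policy decision as a technical one.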

Supporters argue that the technology can help find dangerous people faster, protect staff and reduce crime. That is why some police forces and retailers are interested in it. But the everyday question is more complicated: who is on the watchlist, how accurate is the match, who checks mistakes, how long is data kept, and what happens if an innocent person is wrongly flagged?

Those questions are especially important because a face is not like a password. You cannot change it after a breach or mistake. Biometric data is personal, permanent and difficult to avoid sharing in public spaces.

Why oversight matters

The Guardian’s report quotes watchdog concerns about a patchwork legal framework and the need for clearer rules on when and how live facial recognition should be used. It also describes people who say they were wrongly identified by systems used by police or shops and were left unsure how to challenge what happened.

That is the part ordinary people should pay attention to. A technology can be useful in some circumstances and still need strong limits. Good oversight should answer practical questions: was the system tested properly, does it work fairly across different groups, are watchlists accurate, are staff trained, and can a member of the public complain without being passed from one organisation to another?

ManyHands has previously covered why biometric and personal data deserve careful handling. Facial recognition makes that issue more public. Instead of deciding whether to upload a voice clip or photo to a service, people may be scanned simply because they are in a particular place.

What to look for in real life

First, look for signs. Organisations using facial recognition should be clear about it, especially in shops, venues and public deployments. A vague “CCTV in operation” sign is not the same as a clear explanation that face-matching technology is being used.

Second, pay attention to what happens if you are stopped or challenged. If a shop, venue or security worker says you have been matched by a system, ask calmly what process is being followed, who made the decision and how you can challenge it. Do not argue about technical details on the spot if the situation is tense; focus on getting names, times, locations and a route for complaint.

Third, remember that AI matches are not certainty. A match is a signal that humans should handle carefully, not proof that someone has done something wrong. That distinction matters whether the system is used by police, a private security firm or a retailer.
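One way to see why a match is only a signal is the arithmetic of scale. Taking the 1.7 million scans reported by the Guardian, and an assumed false-positive rate chosen purely to illustrate the point, even a very accurate system can produce a large absolute number of wrong flags:

```python
# Back-of-envelope illustration, not a measured figure.
# The scan count comes from the Guardian report; the false-positive
# rate below is an assumption used only to show the arithmetic.

faces_scanned = 1_700_000        # reported Met police scans this year
false_positive_rate = 0.001      # assumed: 1 wrong flag per 1,000 scans

expected_false_flags = faces_scanned * false_positive_rate
print(f"Expected wrong flags: {expected_false_flags:,.0f}")
# Expected wrong flags: 1,700
```

Even if the assumed rate were ten times better, hundreds of innocent people could still be flagged over a year of scanning at that scale, which is why human checks and complaint routes matter so much.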

What policymakers and companies need to get right

The basic bargain should be simple: if organisations want to use powerful AI in public spaces, the public should get clear rules, visible notices, independent testing and a meaningful way to challenge mistakes. That is not anti-technology. It is how trust is built.

For companies, the temptation will be to treat facial recognition as another security camera upgrade. It is more sensitive than that. Retailers need to think about bias, staff misuse, data sharing, watchlist quality and whether the response to a match is proportionate. Police forces need to show not only that the tool can find suspects, but that it is used within clear safeguards and independent scrutiny.

The same lesson applies to many AI systems we use at home and work: convenience should not quietly remove human judgement. We have written before about why human supervision still matters when AI tools are given real-world access. Facial recognition raises the stakes because the real world is not a tidy app window. It is a street full of people who may never have opted in.

The practical takeaway

Facial recognition will probably become more common before the rules feel settled. That does not mean everyone needs to panic, but it does mean people should notice where it is used and expect better explanations from the organisations deploying it.

If you see facial recognition in a shop, station or event space, look for clear information about who is running it and why. If you are wrongly flagged, keep a record and use formal complaint routes. And when politicians, police forces or retailers describe the technology as a simple crime-fighting tool, ask the follow-up questions: how accurate is it, who checks it, what happens to the data, and what rights does an ordinary person have when the AI gets it wrong?

AI in public places should make people safer without making them feel powerless. That balance is exactly why oversight matters now, not after the technology has already become normal.