Police in London are moving ahead with deploying a facial recognition camera system despite privacy concerns and evidence that the technology is riddled with false positives.
The Metropolitan Police, the U.K.’s biggest police department with jurisdiction over most of London, announced Friday it would begin rolling out new “live facial recognition” cameras in London, making the capital one of the largest cities in the West to adopt the controversial technology.
The “Met,” as the police department is known in London, said in a statement the facial recognition technology, which is meant to identify people on a watch list and alert police to their real-time location, would be “intelligence-led” and deployed to only specific locations. It’s expected to be rolled out as soon as next month.
However, privacy activists immediately raised concerns, noting that independent reviews of trials of the technology showed a failure rate of 81%. “The police have decided against a backdrop of serious public concern to press ahead with facial recognition anyway,” Silkie Carlo, director of Big Brother Watch, a civil liberties group, told TIME on Friday. “It suggests to me either inexplicable incompetence or ideological commitment to increasing mass surveillance in the capital.”
A judge recently ruled that the use of facial recognition in the U.K. was legal, but Big Brother Watch has launched an appeal against that decision. “This decision represents an enormous expansion of the surveillance state and a serious threat to civil liberties in the U.K.,” Big Brother Watch said of the Metropolitan Police’s announcement.
The U.K. has more surveillance cameras per person than any country in the world except China. British citizens are famously relaxed about surveillance when it comes in the form of closed-circuit television (CCTV), but Carlo, who has attended several police trials of facial recognition cameras by London police, says people are often confused and concerned when they find out cameras are using facial recognition technology.
“Turning surveillance cameras into identity checkpoints is the stuff of nightmares,” Carlo wrote in TIME last year, in response to public trials of the technology. “For centuries, the U.K. and U.S. have entrenched protections for citizens from arbitrary state interference — we expect the state to identify itself to us, not us to them. We expect state agencies to show a warrant if our privacy is to be invaded. But with live facial recognition, these standards are being surreptitiously occluded under the banner of technological ‘innovation.'” In May 2019, San Francisco banned the use of facial recognition technology over privacy fears.
London police said Friday that the live facial recognition cameras would “help tackle serious crime, including serious violence, gun and knife crime, child sexual exploitation and help protect the vulnerable.” They also said that the decision to stop people identified by the cameras would always remain with human officers.
But in the statement announcing the rollout of live facial recognition, police made no reference to the technology’s accuracy. That, Carlo alleges, is because the technology has such a high error rate. “An independent review looked at six trials, and found that of all the alerts generated by the facial recognition system, 81% were misidentifications,” Carlo told TIME on Friday. “The announcement today shows the trial period was never serious.”
The Metropolitan Police did not immediately respond to questions from TIME about the technology’s failure rate.
Police are presenting the decision as a way of increasing the efficiency of their policing while making sure the public are aware of the use of facial recognition technology. “The Met will begin operationally deploying [live facial recognition] at locations where intelligence suggests we are most likely to locate serious offenders,” the Met’s Assistant Commissioner Nick Ephgrave said in a statement. “Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offenses.”
Facial recognition cameras, Ephgrave said, would be “clearly signposted” and accompanied by officers handing out leaflets about the use of facial recognition technology. He promised that the technology was a “standalone system” not linked to any other imaging system, such as CCTV. But that hasn’t convinced some privacy advocates.
“So many things about it don’t make any sense,” Carlo told TIME on Friday. “From an operational point of view, they use a van, they have a team of plainclothes officers. In all seriousness, this is not the destination. This is a step toward using live facial recognition with CCTV cameras. The way it’s used at the moment isn’t sustainable. I fear that it’s the beginning of something even more sinister.”