United Kingdom police facial-recognition tools incorrect in more than 90pc of cases

The accuracy of police facial recognition systems has been criticised by a UK privacy group

That figure is the highest of those given by United Kingdom police forces surveyed by the campaign group Big Brother Watch as part of a report that urges the police to stop using the technology immediately.

The group describes United Kingdom police facial recognition as lawless, undemocratic and dangerously inaccurate.

Its report continues: "We're seeing ordinary people being asked to produce ID to prove their innocence as police are wrongly identifying thousands of innocent citizens as criminals".

While she welcomed both the recent appointment of a member of the National Police Chiefs' Council (NPCC) to govern the use of facial recognition technology (FRT) in public spaces and the establishment of an oversight panel including herself, the Biometrics Commissioner and the Surveillance Camera Commissioner, Information Commissioner Elizabeth Denham noted that she is "deeply concerned about the absence of national level co-ordination in assessing the privacy risks and a comprehensive governance framework to oversee FRT deployment".

The privacy group also said that "automated facial recognition technology is now used by United Kingdom police forces without a clear legal basis, oversight or governmental strategy".

The group submitted freedom of information (FoI) requests to every police force in the United Kingdom to find out the extent to which forces are trialling facial recognition technology.

South Wales Police is another force that has been using facial recognition in day-to-day policing, but its system recorded only 9 per cent accuracy - better than the Met, but not by much.
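Figures like these are easier to interpret with a little base-rate arithmetic. The sketch below, in Python, uses purely illustrative numbers (assumptions, not data from the Met or South Wales Police): even a matcher that misidentifies only one face in a thousand will produce mostly false alerts when genuine watch-list targets are rare in the crowd being scanned.

    # Illustrative base-rate arithmetic only; every figure is assumed,
    # not taken from any police force's data.
    crowd_size = 100_000          # faces scanned at an event (assumed)
    targets_present = 50          # people genuinely on the watch list (assumed)
    true_positive_rate = 0.90     # chance a real target triggers an alert (assumed)
    false_positive_rate = 0.001   # chance an innocent face triggers an alert (assumed)

    true_alerts = targets_present * true_positive_rate
    false_alerts = (crowd_size - targets_present) * false_positive_rate
    share_wrong = false_alerts / (true_alerts + false_alerts)

    print(f"{true_alerts + false_alerts:.0f} alerts, {share_wrong:.0%} of them false")
    # -> 145 alerts, 69% of them false, despite a 0.1% per-face error rate

On those assumed numbers roughly seven in ten alerts are wrong; with rarer targets or a laxer matching threshold, the share of false alerts climbs past the 90 per cent figures reported here.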

Police have begun using automated facial recognition in city centres, at political demonstrations, sporting events and festivals over the past two years.

The Metropolitan Police said that "all alerts against the watch list are deleted after 30 days", adding that any "faces in the video stream that do not generate an alert are deleted immediately".

How have the police forces responded?

"When we first deployed and we were learning how to use it. some of the digital images we used weren't of sufficient quality", said Deputy Chief Constable Richard Lewis.

For instance, the developer of a content-filtering AI system may claim to identify a high percentage of terrorist content on the web, but only when it already knows that the content being analysed is terrorist content.
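Here is a minimal sketch of that evaluation gap, again with assumed figures rather than any vendor's published numbers: tested only on material already known to be terrorist content, the system's headline number is recall, which says nothing about how much innocent material it would flag at realistic prevalence on the open web.

    # Vendor-style test: the filter is run only on material already known
    # to be terrorist content, so the headline figure is pure recall.
    known_bad = 1_000
    detected = 940
    print(f"Recall on curated set: {detected / known_bad:.0%}")   # 94%

    # Open-web conditions: such content is a tiny fraction of pages scanned.
    # All of these values are assumptions for illustration.
    pages_scanned = 10_000_000
    actually_bad = 1_000            # 0.01% prevalence (assumed)
    recall = 0.94                   # same detector as above
    benign_flag_rate = 0.005        # flags 0.5% of innocent pages (assumed)

    true_flags = actually_bad * recall
    false_flags = (pages_scanned - actually_bad) * benign_flag_rate
    precision = true_flags / (true_flags + false_flags)
    print(f"Precision in the wild: {precision:.1%}")
    # -> about 1.8%: nearly every flagged page is innocent, despite 94% recall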

"If an incorrect match has been made, officers will explain to the individual what has happened and invite them to see the equipment along with providing them with a Fair Processing Notice".

It said a "number of safeguards" prevented any action being taken against innocent people. One was matched incorrectly on the watch list, and the other was on a mental health-related watch list.

The same gap between test-bench claims and real-world performance underlies the Met's facial recognition figures. Ultimately, the British people will need to decide whether or not they want to live in a world where they are continuously watched, intrusively surveilled and biometrically tracked, and think about how that may affect their fundamental rights.

Denham said police had to demonstrate that facial recognition was "effective" and that no less intrusive methods were available. "Should my concerns not be addressed, I will consider what legal action is needed to ensure the right protections are in place for the public," she said.

The Home Office told the BBC it plans to publish its biometrics strategy in June.
