Cops Are Still Bypassing Facial Recognition Controls To Build Cases Based On Bad Matches

DATE POSTED: January 24, 2025

The Detroit PD has made this sort of thing its unofficial brand. It has even shelled out at least $300,000 to ensure people most often think of the Detroit PD when discussing false arrests aided and abetted by facial recognition tech.

But there are plenty of others in the US law enforcement industrial complex vying for the “Most False Arrests” title. It’s genuinely disheartening that cop shops seem not just incapable of learning from the mistakes of others in their field, but deliberately unwilling to.

A new report from the Washington Post, based on a thorough scouring of public records, shows there are other agencies willing to make the same mistakes — all because it’s so much easier to go with your first digital hunch than it is to do real police work.

After two men brutally assaulted a security guard on a desolate train platform on the outskirts of St. Louis, county transit police detective Matthew Shute struggled to identify the culprits. He studied grainy surveillance videos, canvassed homeless shelters and repeatedly called the victim of the attack, who said he remembered almost nothing because of a brain injury from the beating.

Months later, they tried one more option.

Shute uploaded a still image from the blurry video of the incident to a facial recognition program, which uses artificial intelligence to scour the mug shots of hundreds of thousands of people arrested in the St. Louis area. Despite the poor quality of the image, the software spat out the names and photos of several people deemed to resemble one of the attackers, whose face was hooded by a winter coat and partially obscured by a surgical mask.

Though the city’s facial recognition policy warns officers that the results of the technology are “nonscientific” and “should not be used as the sole basis for any decision,” Shute proceeded to build a case against one of the AI-generated results: Christopher Gatlin, a 29-year-old father of four who had no apparent ties to the crime scene nor a history of violent offenses, as Shute would later acknowledge.
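It’s worth understanding why the software returned candidates at all, despite the poor input. Systems like this typically convert each face into a numeric “embedding” and run a nearest-neighbor search against the mug shot database, and that kind of search always returns a ranked list of the closest entries, even when none of them is actually close. Here’s a minimal, purely hypothetical Python sketch of that failure mode (random vectors standing in for any real vendor’s proprietary face model):

    # Hypothetical sketch only: random embeddings stand in for a real
    # face recognition model. No vendor's actual system is shown here.
    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend database: one 128-dimensional embedding per mug shot,
    # scaled down from the "hundreds of thousands" in the real system.
    NUM_MUGSHOTS = 100_000
    database = rng.normal(size=(NUM_MUGSHOTS, 128))
    database /= np.linalg.norm(database, axis=1, keepdims=True)

    def top_k_candidates(probe, k=5):
        """Return the k most similar mug shots, no matter how weak the match."""
        probe = probe / np.linalg.norm(probe)
        scores = database @ probe            # cosine similarity per mug shot
        best = np.argsort(scores)[::-1][:k]  # top k, even if every score is low
        return [(int(i), float(scores[i])) for i in best]

    # A blurry, hooded, masked face produces a noisy embedding, but the
    # search still "spits out" a ranked list of names regardless.
    noisy_probe = rng.normal(size=128)
    for rank, (person_id, score) in enumerate(top_k_candidates(noisy_probe), 1):
        print(f"candidate {rank}: mug shot #{person_id}, similarity {score:.3f}")

The point of the sketch is that the top-five list looks identical to an investigator whether the best similarity score is strong or meaningless, which is exactly why the city’s policy calls the results “nonscientific” and the vendor warns they shouldn’t be treated as probable cause.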

This so-called “investigation” also involved the (so-called) detective questioning the victim of the crime, a 62-year-old security guard who suffered a concussion during the attack and was conscious for only part of the assault. Nonetheless, Shute and his fellow detectives approached the man, who had been diagnosed with brain trauma, with a stack of printouts, one of which pictured the person the AI had apparently declared the “best” match for one of the suspects, despite the poor quality of the input image.

When the security guard first picked out someone other than Gatlin, the officers steered him away from that selection, asking him to keep looking at the photos until he finally picked the one they wanted him to pick: the match the AI had generated from the grainy security camera still.

Even Detective Shute admitted in court that this was the wrong thing to do.

Asked months later in court whether steering the witness in this way had been proper, the lead detective, Shute, would answer “no.”

To get to this point, the St. Louis investigators not only ignored guidance from their tech provider saying generated “matches” shouldn’t be considered probable cause, they also ignored the PD’s own guidelines, which said the same thing but were considerably more legally binding. Nothing much will happen to these cops. But it took Christopher Gatlin more than two years to clear his name. Sixteen months of that time was spent in jail.

St. Louis is another data point. So is Woodbridge, New Jersey, where this happened:

In one example of the potent power of facial recognition, police in Woodbridge, New Jersey, arrested Nijeer Parks, a robbery suspect they found through facial recognition in 2019, even though DNA and fingerprint evidence collected at the scene clearly pointed to another potential suspect, according to documents produced in a lawsuit Parks later filed against the police department. 

That alone would be bad enough. But here’s the real punchline:

The Post could find no indication that the man who was a match for the DNA and fingerprint evidence was ever charged.

As the tech becomes more prevalent, so do the abuses:

In several cases reviewed by The Post, officers arrested a person identified by AI after only using their own judgment that it was the same person or by showing a photo of the suspect to witnesses, without otherwise linking the person to the crime, records show.

Why aren’t we hearing more about these facial recognition-enabled false arrests? It’s probably not because they’re uncommon. It’s more likely because the use of this tech goes unmentioned in warrant requests and charging documents. Instead, investigators refer vaguely to suspects “being identified” without specifying who (or what) provided the alleged positive identification.

In another case, the tech correctly matched the image investigators had uploaded from security cam footage. The problem was that it was the wrong image. This led Miami PD investigators to arrest an innocent man who just happened to be in line at the bank on the same day an entirely different person was committing fraud at the same bank. No one from the PD questioned the match delivered by their tech. Worse, they never bothered to check any other possible source of evidence, like bank records or recording timestamps, that might have indicated they were targeting the wrong person.

Pretty much everything moves faster than police reform. Tech adoption will always outpace discipline for breaking internal investigation rules. This will get worse before it gets better, and it’s not a given that it will get better at all. Settlements have been paid and official statements made long after litigation has ended, but we’re still a long, long way from being shown that law enforcement can be trusted with its shiny new toys.