US: Police rely solely on facial recognition results
Some police departments in the US rely solely on facial recognition technology to identify potential suspects. Their use of the technology sometimes replaces standard investigative methods – and fails to follow the departments’ own guidelines, the Washington Post reports. For those who are falsely accused of a crime, this can have far-reaching consequences.
As the Post reports, some US police departments see facial recognition as a kind of investigative shortcut – a tool that allows them to identify and arrest suspects even in the absence of corroborating evidence. According to the Post’s reporting, at least eight people in the US have been arrested to date after being falsely identified by the technology. In most cases the police could easily have established the person’s innocence, for example by checking their alibis or comparing distinctive physical features like tattoos.
The Post’s team of reporters requested records on the use of the controversial technology from more than 100 police departments in the US. Of these, 75 confirmed that they used facial recognition, and some of those 75 provided the journalists with additional records. Still, only 23 departments provided information that, according to the Post, was detailed enough to reconstruct their investigations. Of these 23, the Post found that 15 departments across 12 US states had arrested at least one person identified by facial recognition technology without gathering additional evidence – among them police in Austin, Detroit, Miami and St. Louis.
Blurry photo
In St. Louis, Missouri, officers were searching for two individuals who had assaulted a security guard. In the course of the investigation, a photo taken by a surveillance camera was run through a facial recognition program. The photo was blurry, and the face of the person in it was partially obscured by a hood and a surgical mask. Nevertheless, the software produced the names and photos of several individuals whom the attacker supposedly resembled.
The Post reports that St. Louis’s facial recognition policy characterizes the results of the technology as “nonscientific.” The policy also warns that results “should not be used as the sole basis for any decision.” Despite these warnings, police began to focus their investigation on 29-year-old Christopher Gatlin, whom the software had suggested as a possible assailant.
Gatlin’s photo was shown to the victim of the assault, along with photos of five other individuals – even though the victim had said that, because of the injuries he sustained, he couldn’t remember what his attackers looked like. Eventually the victim selected Gatlin’s photo. As the Post reports – and as the lead detective on the case later admitted in court – the photo lineup was handled improperly: the detective influenced the victim’s selection.
Gatlin was subsequently arrested and spent more than a year in jail awaiting trial.
Facial recognition instead of DNA evidence
The Post is aware of eight cases in which individuals were arrested after being falsely identified by facial recognition programs. In each of these cases, police skipped basic steps in their investigations – and apparently viewed the matches generated by the software as fact. In six cases the police didn’t bother to check the suspects’ alibis – even though doing so would have confirmed that they could not have committed the crime they were accused of.
In two cases, police even ignored evidence that would have exonerated the suspect. The case of Nijeer Parks, who was arrested by police in Woodbridge, New Jersey in 2019 on suspicion of robbery, seems especially egregious. In this case, police had collected DNA and fingerprint evidence at the scene of the crime that “clearly pointed to another potential suspect,” the Post reports. Instead of pursuing this suspect, officers ran a blurry photo through facial recognition software and arrested Parks. Last year the police department paid Parks a settlement of $300,000 – but admitted no wrongdoing.
Facial recognition technology is frequently criticized for being unreliable. According to this latest report by the Post, the systems used by police function well in a lab setting when using high-quality comparison photos. But as Katie Kinsey from the NYU School of Law told the paper, there has been no independent testing of how the technology performs in real-world conditions – for example with blurry photos taken by surveillance cameras. These low-quality photos are the kind that police tend to use in their investigations.
Federal tests conducted in the US in 2019 showed that facial recognition software works best at identifying white men. When identifying black or Asian people, the failure rate was up to 100 times higher. In the cases investigated by the Post, seven of the eight people who were wrongfully arrested were black.
The one white man in the group was accused of cashing a fraudulent $36,000 check. He had been correctly identified by facial recognition software – but his arrest was still unjustified. He had merely been a customer at the bank on the day the crime was committed and had cashed a legitimate check for $1,500. The police checked neither his bank account nor the time stamp on the surveillance footage, nor did they search for any other evidence. Prosecutors eventually dropped the case.
Afraid of the police
Each of the falsely identified individuals told the Post that they had suffered negative consequences as a result of their wrongful arrest – some lost jobs, for example. Some said they had to seek counseling for their children, who had witnessed their parent’s arrest. “Most said they also developed a fear of police,” the Post reports.
In the fall of 2024 the Post reported that US police rarely inform suspects of the use of facial recognition. By failing to do so they deprive them of the opportunity to dispute the results of the unreliable technology.
In the case of Christopher Gatlin in St. Louis, his public defender learned from her client’s police report that the police had used facial recognition – and began researching the technology. She was able to argue that the comparison photo used by the police didn’t meet the department’s own standards. The court also ruled that the victim’s identification of Gatlin was inadmissible, given that the detective had improperly influenced his choice. Because there was no other evidence against him, the charges against Gatlin were dropped – he is now suing those responsible for his arrest. (js)