US police rarely inform defendants about the use of facial recognition

A tablet running facial recognition software in the hands of a police officer
Police departments in seven US states are required by law to be transparent about their use of facial recognition. (Source: IMAGO / ZUMA Press Wire)

Police departments in the United States frequently employ facial recognition technology in criminal investigations – but don’t inform suspects identified by the technology about its use, a new investigation by the Washington Post reveals. Critics accuse police of a lack of transparency.

The team of reporters from the Washington Post requested documents relating to the use of the controversial technology from more than 100 police departments across the US. More than 40 departments in 15 states provided data – 30 of them also shared police reports from more than 1,000 investigations conducted over the past four years.

According to the Post’s reporting, police often do not inform suspects who have been arrested that they were identified with the help of software – “denying them the opportunity to contest the results of an emerging technology that is prone to error.”

Even in reports that are available to the public, police in some cases obscured their use of facial recognition, stating instead that suspects were identified “through investigative means” or “by utilization of investigative databases.” According to the Post, the Coral Springs Police Department in Florida even “instructs officers not to reveal the use of facial recognition in written reports.”

Innocent people arrested

Defense lawyers and civil rights groups criticize this practice, arguing that “people have a right to know about any software that identifies them as part of a criminal investigation.” This is especially important in light of the fact that, in several documented instances, innocent persons have been arrested after being falsely identified by facial recognition.

The Post reports that the reliability of facial recognition “has been successfully challenged in a handful of recent court cases” throughout the US. As a result, some defense lawyers “posit that police and prosecutors are intentionally trying to shield the technology from court scrutiny.”

Cassie Granos, an assistant public defender in Minnesota, told the Post that police probably “want to avoid the litigation surrounding liability of the technology.” One of her colleagues successfully argued in a case this year that facial recognition results should not be permitted at trial. The judge ruled that the software does not “consistently produce accurate results.”

Tests conducted by the National Institute of Standards and Technology have shown in the past that facial recognition software is “more likely to misidentify people of color, women and the elderly.”

Found by “the computer”

According to the Post, at least seven people in the US have been wrongfully arrested after being falsely identified by facial recognition software. Six of the seven were black. In every case, the charges against those arrested were later dropped.

Some of the defendants only found out about the use of facial recognition because officers happened to mention that “the computer” had found them.

The Post cites the case of Quran Reid, who spent six days in jail in 2022. He was accused of using stolen credit cards to buy luxury purses in Louisiana. Reid, who lives in Georgia, said he had never been to Louisiana.

A police officer in the state had written in a sworn affidavit that a “credible source” had brought Reid to his attention. “In fact,” writes the Post, “Reid was identified by facial recognition software that was fed a crime scene photo.” Charges were dropped after Reid’s lawyer pointed out that his client has a facial mole, which the alleged perpetrator did not.

Reid told the Post that, while in jail, he kept asking himself why he had been arrested: “You don’t even know where it’s coming from.” He has since filed a lawsuit against the Jefferson Parish Sheriff’s Office and the detective who issued a warrant for his arrest.

Some police departments in the US use facial recognition software to compare images taken by surveillance cameras at a crime scene with photos saved in databases, such as mug shots or driver’s license photos. According to the Washington Post, however, “there is no scientific consensus on what constitutes a match.” Because of this lack of consensus, different products vary widely in the results they return.
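To see why the lack of an agreed standard matters, here is a minimal, purely illustrative sketch of how threshold-based face search generally works. The embedding model, database, and threshold values are hypothetical and are not drawn from any product mentioned in the article; the point is only that the choice of similarity threshold determines what gets reported as a “match.”

```python
# Illustrative sketch only: a generic threshold-based face search.
# The embeddings, database, and thresholds below are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_database(probe: np.ndarray,
                    database: dict[str, np.ndarray],
                    threshold: float) -> list[tuple[str, float]]:
    """Return every database entry whose similarity to the probe
    embedding meets the chosen threshold, sorted by score."""
    hits = [(name, cosine_similarity(probe, emb))
            for name, emb in database.items()]
    return sorted([h for h in hits if h[1] >= threshold],
                  key=lambda h: h[1], reverse=True)

# Toy data: 128-dimensional vectors standing in for mug-shot embeddings.
rng = np.random.default_rng(0)
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)  # embedding of a crime-scene photo

# The same probe yields very different candidate lists depending on
# where the vendor sets the threshold -- there is no agreed definition
# of what counts as a "match".
for threshold in (0.10, 0.20, 0.30):
    candidates = search_database(probe, database, threshold)
    print(f"threshold {threshold}: {len(candidates)} candidates")
```

Running the sketch shows the candidate list shrinking as the threshold rises, which is one way different products fed the same photo can return very different results.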

The controversial facial recognition software Clearview AI, for example, was used in an investigation in Ohio and produced a photo of basketball player Michael Jordan. In response to the Post’s inquiries, Clearview pointed out that the perpetrator was also included in the search results.

Barely regulated

There is no federal law in the US governing the use of facial recognition. According to the Post article, a few states and cities require that police departments be transparent about their use of the technology – “but even in these locations, the technology is either not being used that often or it’s not being disclosed.”

Florida has no such transparency requirements. Police in Miami did, however, provide the Post with data on their use of facial recognition over the past four years. In that time, the Miami Police Department ran 2,500 searches with the software. These “led to at least 186 arrests and more than 50 convictions.” But fewer than 7 percent of those arrested were informed that facial recognition software had been used to identify them.

Even the state attorney for Miami-Dade County told the Post that “police had not informed her office about their use of facial recognition in the vast majority of cases.” She acknowledged that there were concerns about the technology’s accuracy and said, “You cannot rely on this for probable cause alone.”

Carlos J. Martinez, the county’s chief public defender, told the Post: “One of the basic tenets of our justice system is due process, is knowing what evidence there is against you and being able to challenge the evidence that’s against you.”

In response to the Post’s reporting, “Miami police and local prosecutors announced plans to revise their policies.” In the future, police will be required to disclose their use of facial recognition.

The Post points out that, under a 1963 Supreme Court ruling, prosecutors in the US are in fact required “to inform defendants about any information that would help prove their innocence, reduce their sentence, or hurt the credibility of a witness testifying against them.” If they fail to do so, a court can overturn a conviction “or even sanction the prosecutor.” But, the Post reports, US courts do not agree on whether these rules cover the use of facial recognition. (js)