Face recognition algorithms performing worse on darker-skinned faces is a major problem. But the other side of it is: would police be more hesitant to act on such fuzzy evidence if the top match appeared to be a middle-class Caucasian (i.e. someone who is more likely to take legal recourse)?
What's hilarious is that it makes faces that look nothing like the original high-resolution images.
In this case, it's incumbent on the software vendors to ensure that less-than-certain results aren't even shown to the user. American police can't generally be trusted to understand nuance and/or do the right thing.
Yes.
> Does it return a list (sorted by confidence) of possible suspects,
Yes.
> ... or any other kind of feedback that would indicate even to a layperson how much uncertainty there is?
Yes it does. It also states, in a large-print heading: “THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION. IT IS AN INVESTIGATIVE LEAD AND IS NOT PROBABLE CAUSE TO ARREST”.
You can see a picture of this in the ACLU article.
The police bungled this badly by setting up a fake photo lineup with the loss prevention clerk who submitted the report (who had only ever seen the same footage they had).
However, tools that are ripe for misuse do not get a pass because they include a bold disclaimer. If the tool/process cannot prevent misuse, the tool/process is broken and possibly dangerous.
That said, we have little data on how often the tool results in catching dangerous criminals versus how often it misidentifies innocent people. We have little data on whether those innocent people tend to skew toward a particular demographic.
But I have a fair suspicion that dragnet techniques like this unfortunately can be both effective and also problematic.
Essentially, an employee of the facial recognition provider forwarded an "investigative lead" for the match they generated (which does have a score associated with it on the provider's side, though it's not clear whether that score is communicated to detectives as well), and the detectives then put the photo of this man into a "6 pack" photo line-up, from which a store employee then identified that man as being the suspect.
Everyone involved will probably point fingers at each other: the provider, for example, put a large heading on their communication saying "this is not probable cause for an arrest, this is only an investigative lead, etc."; the detectives will say "well, we got a hit from a line-up" and blame the witness; and the witness would probably say "well, the detectives showed me a line-up and he seemed like the right guy" (or, as is often the case with line-ups, the detectives may have exerted a huge amount of bias/influence over the witness).
EDIT: Just to be clear, none of this is to say that the process worked well or that I condone this. I think the data, the technology, the processes, and the level of understanding on the side of the police are all insufficient, and I do not support how this played out, but I think it is easy enough to provide at least some pseudo-justification at each step along the way.
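To make the "score" step above concrete, here's a minimal sketch of what a scored-lead pipeline could look like. Everything in it (names, threshold, report format) is a hypothetical illustration, not the vendor's actual system:

```python
from dataclasses import dataclass

@dataclass
class Match:
    subject_id: str
    score: float  # similarity score; the scale and meaning are vendor-specific

# Hypothetical cutoff -- real vendors tune this per deployment.
LEAD_THRESHOLD = 0.90

def build_lead_report(matches: list[Match]) -> dict:
    """Rank candidates and attach the score to every lead forwarded,
    so detectives see the uncertainty, not just a face."""
    ranked = sorted(matches, key=lambda m: m.score, reverse=True)
    leads = [m for m in ranked if m.score >= LEAD_THRESHOLD]
    return {
        "disclaimer": "NOT A POSITIVE IDENTIFICATION - INVESTIGATIVE LEAD ONLY",
        "leads": [{"subject_id": m.subject_id, "score": m.score} for m in leads],
    }
```

The point above is that even if a score like this exists internally, nothing forces it to survive the hand-off to detectives, and the 6-pack step then launders whatever uncertainty was attached to the lead.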
I think there's genuine cause for concern here, especially if technologies like these are candidates for inclusion in any real law enforcement decision-making.
Presumably, the facial recognition software would provide an additional filter/sort. But at least in my situation, I could actually see how big the total pool of potential matches was, and thus have a sense of uncertainty about false positives, even if I were completely ignorant about the impact of false negatives (i.e. what if my suspect didn't live within x miles of the scene, or wasn't a known/convicted felon?)
So the caution re: face recognition software is how it may non-transparently add confidence to this already very imperfect filtering process.
(In my case, the suspect was eventually found because he had committed a number of robberies, including some where he was clearly caught on camera, in an area/pattern that made it easy to narrow down where he operated.)
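To put rough numbers on the false-positive worry above (all figures made up for illustration; none come from this case), even a seemingly accurate matcher produces many false hits when run against a large gallery:

```python
# Back-of-the-envelope base-rate arithmetic with assumed numbers.
gallery_size = 49_000_000   # e.g. a large statewide photo database
false_match_rate = 1e-5     # assumed per-comparison false positive rate

expected_false_hits = gallery_size * false_match_rate
print(expected_false_hits)  # 490.0 -- hundreds of wrong "matches" to rank
```

And if the actual culprit isn't in the gallery at all (the false-negative case above), every hit returned is by definition a false positive, yet the top-ranked one still looks like "the match."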
This is absurdly dangerous. The AI will find people who look like the suspect; that's how the technology works. A lineup as evidence will almost guarantee a bad outcome, because of course the man looks like the suspect!
Honest question: does race predict legal recourse when decoupled from socioeconomic status, or is this an assumption?
I think the issue is that regardless of the answer, it isn't decoupled in real world scenarios.
I think the solution isn't dependent upon race either. It is to ensure everyone has access to legal recourse regardless of socioeconomic status. This would have the side effect of disproportionately benefiting races correlated with lower socioeconomic status.
> The police bungled this badly by setting up a fake photo lineup...
FWIW, this process is similar to traditional police lineups. The witness is shown 4-6 people – one who is the actual suspect, and several who vaguely match a description of the suspect. When I was asked to identify a suspect in my robbery, the lineup included an assistant attorney who would later end up prosecuting the case. The police had to go out and find tall, light-skinned men to round out the lineup.
> ... with the loss prevention clerk who submitted the report (who had only ever seen the same footage they had).
Yeah, I would hope that this is not standard process. The lineup process is already imperfect and flawed as it is even with a witness who at least saw the crime first-hand.
Presumably the coupling of the variables is not binary (dependent or independent) but variable (degrees of coupling). Presumably these variables were more tightly coupled in the past than in the present. Presumably it's useful to understand precisely how coupled these variables are today because it would drive our approach to addressing these disparities. E.g., if the variables are loosely coupled then bias-reducing programs would have a marginal impact on the disparities and the better investment would be social welfare programs (and the inverse is true if the variables are tightly coupled).
Did you think I was asking about non-real-world scenarios? And how do we know that it's coupled (or rather, the degree to which it's coupled) in real world scenarios?
> I think the solution isn't dependent upon race either. It is to ensure everyone has access to legal recourse regardless of socioeconomic status. This would have the side effect of disproportionately benefiting races correlated with lower socioeconomic status.
This makes sense to me, although I don't know what this looks like in practice.
This needs to be coupled with the truth that people (police) without diverse racial exposure are terrible at identifying people outside their own ethnicity. In the photo/text article they show the top of the "Investigative Lead Report" as an image. You mean to say that every cop who saw the two images side by side did not stop and say "hey, these are not the same person!"? They did not, and that's because their own brains could not see the difference.
This is a major reason police forces need to be ethnically diverse. That exposure alone lets members of the force who never grew up or spent time outside their own ethnicity learn to tell apart a diverse range of similar but different people outside their ethnicity.
But for a photo lineup I can't imagine why you wouldn't have at least 25 photos to pick from.
> The detective turned over the first piece of paper. It was a still image from a surveillance video, showing a heavyset man, dressed in black and wearing a red St. Louis Cardinals cap, standing in front of a watch display. Five timepieces, worth $3,800, were shoplifted.
> “Is this you?” asked the detective.
> The second piece of paper was a close-up. The photo was blurry, but it was clearly not Mr. Williams. He picked up the image and held it next to his face.
All the preceding grafs are told in the context of "this what Mr. Williams said happened", most explicitly this one:
> “When’s the last time you went to a Shinola store?” one of the detectives asked, in Mr. Williams’s recollection.
According to the ACLU complaint, the DPD and prosecutor have refused FOIA requests regarding the case:
https://www.aclu.org/letter/aclu-michigan-complaint-re-use-f...
> Yet DPD has failed entirely to respond to Mr. Williams’ FOIA request. The Wayne County Prosecutor also has not provided documents.
This arrest happened 6 months ago. Who else besides the suspect and the police do you believe reporters should ask for "basic corroboration" of events that took place inside a police station? Or do you think this story shouldn't be reported on at all until the police agree to give additional info?
This is the lead provided:
https://wfdd-live.s3.amazonaws.com/styles/story-full/s3/imag...
Note that it says in red and bold emphasis:
THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION. IT IS AN INVESTIGATIVE LEAD ONLY AND IS NOT PROBABLE CAUSE TO ARREST. FURTHER INVESTIGATION IS NEEDED TO DEVELOP PROBABLE CAUSE TO ARREST.
The real negligence here is whoever tuned the software to spit out a result for that quality of image rather than a "not enough data, too many matches, please submit a better image" error.
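A minimal sketch of what such a guardrail could look like (all names and thresholds are assumptions for illustration, not any real vendor's code):

```python
class ImageQualityError(Exception):
    """Raised instead of returning a dubious match."""

# All thresholds below are assumed values for illustration only.
MIN_FACE_PIXELS = 80   # minimum face height in the probe image, in pixels
MIN_TOP_SCORE = 0.90   # similarity floor below which nothing is reported
MIN_MARGIN = 0.05      # the top hit must clearly beat the runner-up

def gated_search(face_height_px, scored_candidates):
    """scored_candidates: list of (subject_id, similarity) from the matcher.
    Returns the single best candidate, or raises rather than guessing."""
    if face_height_px < MIN_FACE_PIXELS:
        raise ImageQualityError("not enough data: please submit a better image")
    ranked = sorted(scored_candidates, key=lambda c: c[1], reverse=True)
    if not ranked or ranked[0][1] < MIN_TOP_SCORE:
        raise ImageQualityError("no candidate above the confidence floor")
    if len(ranked) > 1 and ranked[0][1] - ranked[1][1] < MIN_MARGIN:
        raise ImageQualityError("too many near-matches: result is ambiguous")
    return ranked[0]
```

The key design choice is that low-quality input or an ambiguous candidate list produces an error, not a ranked list of faces that downstream users will inevitably treat as an identification.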
The deeper reform that needs to happen here is that every person falsely arrested and/or prosecuted should be automatically compensated for their wasted time and other harm suffered. Only then will police departments have some incentive for restraint. Currently we have a perverse reverse lottery where, if you're unlucky, you just lose a day/month/year of your life. Given the state of what we're actually protesting, I'm not holding my breath (e.g. the privileged criminals who committed the first-degree murder of Breonna Taylor have yet to be charged), but it's still worth calling out the smaller injustices that the criminal "justice" system inflicts.
They are in the business of exposing you to as many paid ads as possible. And they believe providing outgoing links reduces their ability to do that.
NPR is a non-profit that is mostly funded by donations. They only have minimal paid ads on their website to pay for running costs - they could easily optimize the news pages to increase ad revenue but they don't because it would get in the way of their goals.
I agree here, but doing that may lead to prosecutors trying extra hard to find something to charge a person with after they are arrested, even if it is something trivial that would often go un-prosecuted.
Getting the details right seems tough, but doable.
This is not correct. The "6-pack" was shown to a security firm's employee, who had viewed the store camera's tape.
"In this case, however, according to the Detroit police report, investigators simply included Mr. Williams’s picture in a “6-pack photo lineup” they created and showed to Ms. Johnston, Shinola’s loss-prevention contractor, and she identified him." [1]
[1] ibid.
[1] https://github.com/NVlabs/stylegan
[2] https://arxiv.org/pdf/2003.03808.pdf (ctrl+f ffhq)
The 4th sentence says: "Detectives zoomed in on the grainy footage..."
That's what happens if you're lucky.
How did the people in the 6 pack photo line-up match up against the facial recognition? Were they likely matches?
So unequal treatment based on race has quite literally been a feature of the US justice system, independent of socioeconomic status.
If you survive violence at the hands of law enforcement and are not convicted of a crime, or if you don't and your family wants to hold law enforcement accountable, then the first option is to ask the local public prosecutor to pursue criminal charges against your attackers.
Depending on where you live, that could be a challenge, given the amount of institutional racial bias in the justice system and how closely prosecutors tend to work with police departments. After all, if prosecutors were going after police brutality cases aggressively, there likely wouldn't be as much of a problem as there is.
If that's fruitless, you would need to seek the help of a civil rights attorney to push your case in the legal system and/or the media. This is where a lot of higher-profile cases like this end up - and often only because they were recorded on video.