zlacker

Facial Recognition Leads To False Arrest Of Black Man In Detroit

submitted by vermon+(OP) on 2020-06-24 14:43:43 | 661 points 279 comments
[view article] [source] [go to bottom]

NOTE: showing posts with links only
2. danso+21[view] [source] 2020-06-24 14:49:36
>>vermon+(OP)
Since the NPR is a 3 minute listen without a transcript, here's the ACLU's text/image article: https://www.aclu.org/news/privacy-technology/wrongfully-arre...

And here's a 1st-person account from the arrested man: https://www.washingtonpost.com/opinions/2020/06/24/i-was-wro...

15. zro+R2[view] [source] 2020-06-24 14:59:29
>>vermon+(OP)
NPR article about the same, if you prefer to read instead of listen: https://www.npr.org/2020/06/24/882683463/the-computer-got-it...

I'll be watching this case with great interest

17. Pxtl+23[view] [source] [discussion] 2020-06-24 15:00:13
>>danso+02
Interesting and related: a team made a neat "face depixelizer" that takes a pixelated image and uses machine learning to generate a face that should match the pixelated image.

What's hilarious is that it makes faces that look nothing like the original high-resolution images.

https://twitter.com/Chicken3gg/status/1274314622447820801
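For anyone wondering how a tool like this works under the hood: roughly, it searches a face generator's latent space for a face that, once downscaled, reproduces the pixelated input. A minimal sketch of that idea (PyTorch, with a toy stand-in for the real pretrained generator such as StyleGAN; the names and sizes here are illustrative, not taken from the actual project):

    import torch
    import torch.nn.functional as F

    # Toy stand-in for a pretrained face generator (the real tool uses StyleGAN):
    # maps a latent vector to a 64x64 RGB image.
    class ToyGenerator(torch.nn.Module):
        def __init__(self, latent_dim=128):
            super().__init__()
            self.net = torch.nn.Linear(latent_dim, 3 * 64 * 64)

        def forward(self, z):
            return torch.sigmoid(self.net(z)).view(-1, 3, 64, 64)

    def depixelize(low_res, generator, latent_dim=128, steps=500, lr=0.05):
        """Search latent space for a face whose downscaled version
        matches the pixelated input; return the high-res guess."""
        z = torch.randn(1, latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            candidate = generator(z)               # a plausible high-res face
            downscaled = F.interpolate(candidate, size=low_res.shape[-2:],
                                       mode='bilinear', align_corners=False)
            loss = F.mse_loss(downscaled, low_res) # only the pixelated target is matched
            loss.backward()
            opt.step()
        return generator(z).detach()

    G = ToyGenerator()
    pixelated = torch.rand(1, 3, 16, 16)   # stand-in for the low-res input
    guess = depixelize(pixelated, G)

Many different high-res faces downscale to the same pixelated image, so the output is just whatever the generator considers plausible, which is why it can look nothing like the real person.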

34. dx87+e5[view] [source] [discussion] 2020-06-24 15:09:23
>>hpoe+V1
Yeah, facial recognition can be useful in law enforcement, as long as it's used responsibly. There was a man who shot people at a newspaper where I lived. When he was apprehended, he refused to identify himself, and apparently their fingerprint machine wasn't working, so they used facial recognition to identify him.

https://en.wikipedia.org/wiki/Capital_Gazette_shooting

37. sneak+o5[view] [source] 2020-06-24 15:09:51
>>vermon+(OP)
Another reason that it's absolutely insane that the state demands to know where you sleep at night in a free society. These clowns were able to just show up at his house and kidnap him.

The practice of disclosing one's residence address to the state (for sale to data brokers[1] and accessible by stalkers and the like) when these kinds of abuses are happening is something that needs to stop. There's absolutely no reason that an ID should be gated on the state knowing your residence. It's none of their business. (It's not on a passport. Why is it on a driver's license?)

[1]: https://www.newsweek.com/dmv-drivers-license-data-database-i...

43. Fivepl+o6[view] [source] [discussion] 2020-06-24 15:13:55
>>danso+21
NPR's text-only article served to me:

https://text.npr.org/s.php?sId=882683463

44. danso+v6[view] [source] [discussion] 2020-06-24 15:14:38
>>Pxtl+23
What's sad is that a tech entrepreneur will definitely add that feature and sell it to law enforcement agencies that believe in CSI magic: https://www.youtube.com/watch?v=Vxq9yj2pVWk
52. strgcm+w7[view] [source] [discussion] 2020-06-24 15:18:43
>>danso+02
I think the NYT article has a little more detail: https://www.nytimes.com/2020/06/24/technology/facial-recogni...

Essentially, an employee of the facial recognition provider forwarded an "investigative lead" for the match they generated (which does have a score associated with it on the provider's side, but it's not clear if the score is clearly communicated to detectives as well), and the detectives then put the photo of this man into a "6 pack" photo line-up, from which a store employee then identified that man as being the suspect.

Everyone involved will probably point fingers at each other. The provider, for example, put a large heading on their communication saying "this is not probable cause for an arrest, this is only an investigative lead, etc.", while the detectives will say "well, we got a hit from a line-up" and blame the witness, and the witness would probably say "well, the detectives showed me a line-up and he seemed like the right guy" (or maybe, as is often the case with line-ups, the detectives exerted a huge amount of bias/influence over the witness).

EDIT: Just to be clear, none of this is to say that the process worked well or that I condone this. I think the data, the technology, the processes, and the level of understanding on the side of the police are all insufficient, and I do not support how this played out, but I think it is easy enough to provide at least some pseudo-justification at each step along the way.

56. danso+j9[view] [source] [discussion] 2020-06-24 15:26:20
>>TedDoe+36
NPR does transcribe (many, most?) its audio stories, but usually there's a delay of a day or so – the published timestamp for this story is 5:06AM (ET) today.

edit: looks like there's a text version of the article. I'm assuming this is a CMS issue: there's an audio story and a "print story", but the former hadn't been linked to the latter: https://news.ycombinator.com/item?id=23628790

57. JangoS+0a[view] [source] [discussion] 2020-06-24 15:29:05
>>vmcept+p4
This is so incredibly common, it's embarrassing. I was on an expert panel about "AI and Machine Learning in Healthcare and Life Sciences" back in January, and I made it a point throughout my discussions to keep emphasizing the amount of bias inherent in our current systems, which ends up getting amplified and codified in machine learning systems. Worse yet, it ends up justifying the bias based on the false pretense that the systems built are objective and the data doesn't lie.

Afterward, a couple people asked me to put together a list of the examples I cited in my talk. I'll be adding this to my list of examples:

* A hospital AI algorithm discriminating against black people when providing additional healthcare outreach by amplifying racism already in the system. https://www.nature.com/articles/d41586-019-03228-6

* Misdiagnosing people of African descent with genomic variants misclassified as pathogenic due to most of our reference data coming from European/white males. https://www.nejm.org/doi/full/10.1056/NEJMsa1507092

* The dangers of ML in diagnosing Melanoma exacerbating healthcare disparities for darker skinned people. https://jamanetwork.com/journals/jamadermatology/article-abs...

And some other relevant, but not healthcare examples as well:

* When Google's hate speech detecting AI inadvertently censored anyone who used vernacular referred to in this article as being "African American English". https://fortune.com/2019/08/16/google-jigsaw-perspective-rac...

* When Amazon's AI recruiting tool inadvertently filtered out resumes from women. https://www.reuters.com/article/us-amazon-com-jobs-automatio...

* When AI criminal risk prediction software used by judges in deciding the severity of punishment for those convicted predicts a higher chance of future offence for a young, black first time offender than for an older white repeat felon. https://www.propublica.org/article/machine-bias-risk-assessm...

And here's some good news though:

* A hospital used AI to enable care and cut costs (though the reporting seems to oversimplify and gloss over enough to make the actual analysis of the results a little suspect). https://www.healthcareitnews.com/news/flagler-hospital-uses-...

59. paulor+gb[view] [source] 2020-06-24 15:34:00
>>vermon+(OP)
I've been thinking this sort of event has become inevitable. Tech development and business models support extending the environments in which we collect images and analyze them. Confidence values lead to statistical guilt. I wrote about it here if interested: https://unintendedconsequenc.es/inevitable-surveillance/
66. mister+1f[view] [source] 2020-06-24 15:48:00
>>vermon+(OP)
relevant: https://www.theregister.com/2020/06/24/face_criminal_ai/
76. js2+lj[view] [source] 2020-06-24 16:04:22
>>vermon+(OP)
> "I picked it up and held it to my face and told him, 'I hope you don't think all Black people look alike,' " Williams said.

I'm white. I grew up around a sea of white faces. Often when watching a movie filled with a cast of non-white faces, I will have trouble distinguishing one actor from another, especially if they are dressed similarly. This sometimes happens in movies with faces similar to the kinds I grew up surrounded by, but less so.

So unfortunately, yes, I probably do have more trouble distinguishing one black face from another vs one white face from another.

This is known as the cross-race effect and it's only something I became aware of in the last 5-10 years.

Add to that the fallibility of human memory, and I can't believe we still even use line ups. Are there any studies about how often line ups identify the wrong person?

https://en.wikipedia.org/wiki/Cross-race_effect

77. Anthon+wj[view] [source] 2020-06-24 16:04:57
>>vermon+(OP)
There is just so much wrong with this story. For starters:

The shoplifting incident occurred in October 2018, but it wasn't until March 2019 that the police uploaded the security camera images to the state image-recognition system, and the police then waited until the following January to arrest Williams. Unless there was something special about that date in October, there is no way for anyone to remember what they might have been doing on a particular day 15 months previously. Though, as it turns out, the NPR report states that the police did not even try to ascertain whether or not he had an alibi.

Also, after 15 months, there is virtually no chance that any eye-witness (such as the security guard who picked Williams out of a line-up) would be able to recall what the suspect looked like with any degree of certainty or accuracy.

This WUSF article [1] includes a photo of the actual “Investigative Lead Report” and the original image is far too dark for anyone (human or algorithm) to recognise the person. It’s possible that the original is better quality and better detail can be discerned by applying image-processing filters – but it still looks like a very noisy source.

That same “Investigative Lead Report” also clearly states that “This document is not a positive identification … and is not probable cause to arrest. Further investigation is needed to develop probable cause of arrest”.

The New York Times article [2] states that this facial recognition technology, which the Michigan taxpayer has paid millions of dollars for, is known to be biased, and that the vendors do “not formally measure the systems’ accuracy or bias”.

Finally, the original NPR article states that

> "Most of the time, people who are arrested using face recognition are not told face recognition was used to arrest them," said Jameson Spivack

[1] https://www.wusf.org/the-computer-got-it-wrong-how-facial-re...

[2] https://www.nytimes.com/2020/06/24/technology/facial-recogni...

85. milesp+Yu[view] [source] [discussion] 2020-06-24 16:41:31
>>danso+21
The mods can change this link to https://www.npr.org/2020/06/24/882683463/the-computer-got-it...

The linked story is audio only and is associated with the Morning Edition broadcast, but the full story appears under our Special Series section.

(I work for NPR)

94. danso+Ry[view] [source] [discussion] 2020-06-24 16:54:58
>>treis+vs
I don't know what passage you're describing, but this one is implied to be part of a narrative told from the perspective of Mr. Williams, i.e. he's the one who remembers that "The photo was blurry, but it was clearly not Mr. Williams":

> The detective turned over the first piece of paper. It was a still image from a surveillance video, showing a heavyset man, dressed in black and wearing a red St. Louis Cardinals cap, standing in front of a watch display. Five timepieces, worth $3,800, were shoplifted.

> “Is this you?” asked the detective.

> The second piece of paper was a close-up. The photo was blurry, but it was clearly not Mr. Williams. He picked up the image and held it next to his face.

All the preceding grafs are told in the context of "this is what Mr. Williams said happened", most explicitly this one:

> “When’s the last time you went to a Shinola store?” one of the detectives asked, in Mr. Williams’s recollection.

According to the ACLU complaint, the DPD and prosecutor have refused FOIA requests regarding the case:

https://www.aclu.org/letter/aclu-michigan-complaint-re-use-f...

> Yet DPD has failed entirely to respond to Mr. Williams’ FOIA request. The Wayne County Prosecutor also has not provided documents.

102. seebet+DE[view] [source] 2020-06-24 17:19:53
>>vermon+(OP)
Reminds me of this-

Facial recognition technology flagged 26 California lawmakers as criminals. (August 2019)

https://www.mercurynews.com/2019/08/14/facial-recognition-te...

104. busine+vF[view] [source] [discussion] 2020-06-24 17:23:38
>>ghostp+g5
It is not clear to me whether the person who identified him was the shop owner or a clerk. From the NYT article: https://www.nytimes.com/2020/06/24/technology/facial-recogni...

"The Shinola shoplifting occurred in October 2018. Katherine Johnston, an investigator at Mackinac Partners, a loss prevention firm, reviewed the store’s surveillance video and sent a copy to the Detroit police"

"In this case, however, according to the Detroit police report, investigators simply included Mr. Williams’s picture in a “6-pack photo lineup” they created and showed to Ms. Johnston, Shinola’s loss-prevention contractor, and she identified him. (Ms. Johnston declined to comment.)"

107. gridlo+FH[view] [source] [discussion] 2020-06-24 17:33:18
>>strgcm+w7
> Essentially, an employee of the facial recognition provider forwarded an "investigative lead" for the match they generated (which does have a score associated with it on the provider's side, but it's not clear if the score is clearly communicated to detectives as well)

This is the lead provided:

https://wfdd-live.s3.amazonaws.com/styles/story-full/s3/imag...

Note that it says in red and bold emphasis:

THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION. IT IS AN INVESTIGATIVE LEAD ONLY AND IS NOT PROBABLE CAUSE TO ARREST. FURTHER INVESTIGATION IS NEEDED TO DEVELOP PROBABLE CAUSE TO ARREST.

119. rovolo+oZ[view] [source] [discussion] 2020-06-24 18:49:24
>>dx87+e5
From the wiki article and the linked news articles, the police picked him up at the scene of the crime. He also had smoke grenades (used in the attack) when they found him.

> Authorities said he was not carrying identification at the time of his arrest and was not cooperating. … an issue with the fingerprint machine ultimately made it difficult to identify the suspect, … A source said officials used facial recognition technology to confirm his identity.

https://en.wikipedia.org/wiki/Capital_Gazette_shooting#Suspe...

> Police, who arrived at the scene within a minute of the reported gunfire, apprehended a gunman found hiding under a desk in the newsroom, according to the top official in Anne Arundel County, where the attack occurred.

https://www.washingtonpost.com/local/public-safety/heavy-pol...

This doesn't really seem like an awesome use of facial recognition to me. He was already in custody after getting picked up at the crime scene. I doubt he would have been released if facial recognition didn't exist.

121. dvtrn+C01[view] [source] [discussion] 2020-06-24 18:54:39
>>scarfa+mE
More than zero. It's called closed captioning, isn't it? I've quite often seen closed captioning that puts brief written descriptions of non-verbal content in brackets, and it's not entirely uncommon either.

https://www.automaticsync.com/captionsync/what-qualifies-as-... (see section: "High Quality Captioning")

122. scarfa+a11[view] [source] [discussion] 2020-06-24 18:58:15
>>dvtrn+C01
Closed captioning is for people who can’t hear.

I am not aware of many TV shows that offer audio commentary for the visually impaired.

Here is an example of one that does.

https://www.npr.org/2015/04/18/400590705/after-fan-pressure-...

123. gentle+131[view] [source] 2020-06-24 19:06:18
>>vermon+(OP)
The discussion about this tech revolves around accuracy and racism, but the real threat is in global unlimited surveillance. China is installing 200 million facial recognition cameras right now to keep the population under control. It might be the death of human freedom as this technology spreads.

Edit: one source says it is 400 million new cameras: https://www.cbc.ca/passionateeye/m_features/in-xinjiang-chin...

141. dang+Xu1[view] [source] [discussion] 2020-06-24 21:36:19
>>milesp+Yu
Ok, changed from https://www.npr.org/2020/06/24/882678392/man-says-he-was-fal.... Thanks!
142. gridlo+gv1[view] [source] [discussion] 2020-06-24 21:38:40
>>jacque+Hq1
> Faces generated by AI means should not count as 'probable cause' to go and arrest people.

They don't:

https://wfdd-live.s3.amazonaws.com/styles/story-full/s3/imag...

There was further work involved; there was a witness who identified the man in a photo lineup, and so on. The AI did not identify anyone; it gave a "best effort" match. All the actual mistakes were made by humans.

148. jcims+sA1[view] [source] [discussion] 2020-06-24 22:18:01
>>mywitt+s3
One of the underlying models, PULSE, was trained on CelebAHQ, which is likely why the results are mostly white-looking. StyleGAN, which was trained on the much more diverse (but sparse) FFHQ dataset, does come up with a much more diverse set of faces[1]...but PULSE couldn't get them to converge very closely on the pixelated subjects...so they went with CelebA [2].

[1] https://github.com/NVlabs/stylegan [2] https://arxiv.org/pdf/2003.03808.pdf (ctrl+f ffhq)

161. ibudia+bP1[view] [source] 2020-06-25 00:21:16
>>vermon+(OP)
Here is a part that I personally have to wrestle with:

> "They never even asked him any questions before arresting him. They never asked him if he had an alibi. They never asked if he had a red Cardinals hat. They never asked him where he was that day," said lawyer Phil Mayor with the ACLU of Michigan.

When I was fired by an automated system, no one asked if I had done something wrong. They asked me to leave. If they had just checked his alibi, he would have been cleared. But the machine said it was him, so case closed.

Not too long ago, I wrote a comment here about this [1]:

> The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

> When the facial recognition software combines your facial expression and your name, while you are walking under the bridge late at night, in an unfamiliar neighborhood, and you are black; your terrorist score is at 52%. A police car is dispatched.

Most of us here can be excited about facial recognition technology but still know that it's not something to be deployed in the field. It's by no means ready. We might even consider the ethics before building it as a toy.

But that's not how it is being sold to law enforcement or other entities. It's _Reduce crime in your cities. Catch criminals in ways never thought possible. Catch terrorists before they blow up anything._ It is sold as an ultimate decision maker.

[1]:https://news.ycombinator.com/item?id=21339530

172. ibudia+bW1[view] [source] [discussion] 2020-06-25 01:23:31
>>ineeda+OT1
We had a whole discussion about it here a couple years ago: https://news.ycombinator.com/item?id=17350645
180. aussie+q02[view] [source] 2020-06-25 02:05:54
>>vermon+(OP)
In a lot of police departments around the world, the photo database used is the driver's license database.

There is clothing available that can confuse facial recognition systems. What would happen if, next time you go for your driver's license photo, you wore a T-shirt designed to confuse facial recognition, for example like this one? https://www.redbubble.com/i/t-shirt/Anti-Surveillance-Clothi...
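For context, one common way such confusing patterns are produced is as adversarial examples: you perturb an image in whatever direction most increases the recognizer's error. A minimal FGSM-style sketch (PyTorch, with a toy classifier standing in for a real face-recognition model; this illustrates the general technique, not how that particular shirt was made):

    import torch
    import torch.nn.functional as F

    # Toy stand-in for a face-recognition classifier over 10 identities.
    model = torch.nn.Sequential(
        torch.nn.Flatten(),
        torch.nn.Linear(3 * 64 * 64, 10),
    )

    def adversarial_pattern(image, true_label, epsilon=0.05):
        """Fast Gradient Sign Method: nudge every pixel in the direction
        that increases the model's loss on the correct identity."""
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0, 1).detach()

    face = torch.rand(1, 3, 64, 64)   # stand-in face image
    label = torch.tensor([3])         # its true identity class
    confusing = adversarial_pattern(face, label)

Printed patterns and patches work on the same principle, just optimized to survive printing, lighting, and viewing angle.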

216. yread+Wk2[view] [source] [discussion] 2020-06-25 06:07:09
>>ibudia+bP1
You see it everywhere with AI and other tools. We overly trust them. Even when doctors have high confidence in their diagnosis, they accept a wrong AI-recommended conclusion that contradicts it.

https://www.nature.com/articles/s41591-020-0942-0

Bit like with self-driving cars - if it's not perfect, we don't know how to integrate it with people.

221. jml7c5+Nq2[view] [source] [discussion] 2020-06-25 07:13:58
>>mnw21c+Zu
In the legal world it apparently goes by the moniker "Prosecutor's Fallacy":

https://en.wikipedia.org/wiki/Prosecutor%27s_fallacy
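In the facial-recognition setting, the fallacy is confusing P(match | innocent) with P(innocent | match). With a large search database, even a very accurate matcher will produce mostly false positives. A quick back-of-the-envelope calculation (the numbers are made up purely for illustration):

    # Prosecutor's fallacy, with made-up numbers.
    database_size = 1_000_000     # people the system searches against
    false_match_rate = 1e-4       # P(match | innocent person)
    hit_rate = 0.99               # P(match | the actual culprit)

    # Expected matches when the one true culprit is in the database:
    false_positives = (database_size - 1) * false_match_rate   # ~100 innocent matches
    true_positives = 1 * hit_rate

    # P(a matched person is actually the culprit):
    p_culprit_given_match = true_positives / (true_positives + false_positives)
    print(f"{p_culprit_given_match:.1%}")   # roughly 1%, not 99.99%

A "99.99% accurate" match can still point at an innocent person about 99 times out of 100, which is exactly why the lead report says it is not probable cause.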

222. fivre+9r2[view] [source] [discussion] 2020-06-25 07:18:58
>>dzhiur+Fe2
Were you asleep for all coverage of the 737 MAX MCAS, or the technical failures that contributed to multiple warships casually driving into other ships?

https://features.propublica.org/navy-accidents/uss-fitzgeral...

https://features.propublica.org/navy-uss-mccain-crash/navy-i...

Software allows us to work very efficiently because it speeds things up. It speeds up fucking things up just as well.

223. octodo+9s2[view] [source] [discussion] 2020-06-25 07:28:32
>>cwkoss+iS1
It gets even crazier to think about when you realise that cities like Detroit are overwhelmingly Black. The police there are just not providing good value for the people who live there.

Brookings had a great post about this the other day: https://www.brookings.edu/blog/how-we-rise/2020/06/11/to-add...

262. danans+E64[view] [source] [discussion] 2020-06-25 18:56:55
>>throwa+Ij3
Once you are convicted, and are subject to one of the disproportionate sentences often given to black people, nothing short of a major change to how sentencing law works can provide legal recourse. See: https://www.sentencingproject.org/issues/racial-disparity/

If you survive violence at the hands of law enforcement and are not convicted of a crime, or if you don't and your family wants to hold law enforcement accountable, then the first option is to ask the local public prosecutor to pursue criminal charges against your attackers.

Depending on where you live, this could be a challenge, given the amount of institutional racial bias in the justice system and how closely prosecutors tend to work with police departments. After all, if prosecutors were going after police brutality cases aggressively, there likely wouldn't be as much of a problem as there is.

If that's fruitless, you would need to seek the help of a civil rights attorney to push your case in the legal system and/or the media. This is where a lot of higher-profile cases like this end up - and often only because they were recorded on video.

264. spappa+Ht4[view] [source] [discussion] 2020-06-25 20:56:51
>>mnw21c+Zu
See also: the paper "Why Most Published Research Findings Are False". Previously on HN: https://news.ycombinator.com/item?id=1825007
271. jacobu+m16[view] [source] [discussion] 2020-06-26 13:08:50
>>strong+pY5
https://en.m.wikipedia.org/wiki/Shooting_of_John_Crawford_II...
273. jacobu+Sb6[view] [source] [discussion] 2020-06-26 14:16:21
>>strong+596
https://www.pnas.org/content/116/34/16793/tab-figures-data