And here's a 1st-person account from the arrested man: https://www.washingtonpost.com/opinions/2020/06/24/i-was-wro...
I'll be watching this case with great interest
What's hilarious is that it makes faces that look nothing like the original high-resolution images.
The practice of disclosing one's residence address to the state (where it is sold to data brokers[1] and accessible to stalkers and the like) needs to stop, especially while these kinds of abuses are happening. There's absolutely no reason that an ID should be gated on the state knowing where you live. It's none of their business. (It's not on a passport. Why is it on a driver's license?)
[1]: https://www.newsweek.com/dmv-drivers-license-data-database-i...
Essentially, an employee of the facial recognition provider forwarded an "investigative lead" for the match they generated (which does have a score associated with it on the provider's side, but it's not clear if the score is clearly communicated to detectives as well), and the detectives then put the photo of this man into a "6 pack" photo line-up, from which a store employee then identified that man as being the suspect.
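For a rough picture of what the provider's side of that hand-off might look like, here's a minimal sketch (the similarity metric, threshold, field names, and disclaimer wording are all my assumptions, not anything the vendor has documented):

    import numpy as np

    # Hypothetical sketch of how a vendor might turn an embedding match into an
    # "investigative lead". Threshold and fields are invented for illustration.
    LEAD_THRESHOLD = 0.75  # assumed: below this similarity, no lead is forwarded

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def generate_lead(probe_embedding, gallery):
        """gallery: list of (person_id, embedding), e.g. from driver's license photos."""
        person_id, score = max(
            ((pid, cosine_similarity(probe_embedding, emb)) for pid, emb in gallery),
            key=lambda pair: pair[1],
        )
        if score < LEAD_THRESHOLD:
            return None
        return {
            "candidate": person_id,
            "score": round(score, 3),  # the score exists at this point...
            "disclaimer": "INVESTIGATIVE LEAD ONLY - NOT PROBABLE CAUSE TO ARREST",
        }

Whether that score survives the hop from the vendor's analyst to the detectives, and then to whoever assembles the six-pack, is exactly the open question.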
Everyone involved will probably point fingers at each other. The provider, for example, put a large heading on their communication saying "this is not probable cause for an arrest, this is only an investigative lead, etc."; the detectives will say "well, we got a hit from a line-up" and blame the witness; and the witness would probably say "well, the detectives showed me a line-up and he seemed like the right guy" (or, as is often the case with line-ups, the detectives may have exerted a huge amount of bias/influence over the witness).
EDIT: Just to be clear, none of this is to say that the process worked well or that I condone this. I think the data, the technology, the processes, and the level of understanding on the side of the police are all insufficient, and I do not support how this played out, but I think it is easy enough to provide at least some pseudo-justification at each step along the way.
edit: looks like there's a text version of the article. I'm assuming this is a CMS issue: there's an audio story and a "print story", but the former hadn't been linked to the latter: https://news.ycombinator.com/item?id=23628790
Afterward, a couple people asked me to put together a list of the examples I cited in my talk. I'll be adding this to my list of examples:
* A hospital AI algorithm discriminating against black people when providing additional healthcare outreach by amplifying racism already in the system. https://www.nature.com/articles/d41586-019-03228-6
* Misdiagnosing people of African descent when genomic variants are misclassified as pathogenic, a consequence of most of our reference data coming from European/white males (a toy sketch of this failure mode follows at the end of this comment). https://www.nejm.org/doi/full/10.1056/NEJMsa1507092
* The dangers of ML in diagnosing Melanoma exacerbating healthcare disparities for darker skinned people. https://jamanetwork.com/journals/jamadermatology/article-abs...
And some other relevant, but not healthcare examples as well:
* When Google's hate-speech-detecting AI inadvertently censored anyone who used the vernacular referred to in this article as "African American English". https://fortune.com/2019/08/16/google-jigsaw-perspective-rac...
* When Amazon's AI recruiting tool inadvertently filtered out resumes from women. https://www.reuters.com/article/us-amazon-com-jobs-automatio...
* When AI criminal risk prediction software, used by judges in deciding the severity of punishment for those convicted, predicted a higher chance of future offence for a young, black first-time offender than for an older white repeat felon. https://www.propublica.org/article/machine-bias-risk-assessm...
And here's some good news though:
* A hospital used AI to enable care and cut costs (though the reporting seems to oversimplify and gloss over enough to make the actual analysis of the results a little suspect). https://www.healthcareitnews.com/news/flagler-hospital-uses-...
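The genomics example above is the one that's easiest to demonstrate mechanically, so here's the toy sketch I promised (all numbers and the "baseline" story are invented purely to show how a model trained on skewed reference data misfires on the under-represented group):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy illustration with invented numbers: suppose a variant is "pathogenic" when a
    # measurement exceeds a population-specific baseline, but 95% of the reference data
    # comes from group A, so the single learned cutoff fits A and misfires on B.
    rng = np.random.default_rng(0)

    def make_group(n, baseline):
        x = rng.normal(baseline, 1.0, size=(n, 1))   # the measured value
        y = (x[:, 0] > baseline).astype(int)         # ground truth uses the group's own baseline
        return x, y

    X_a, y_a = make_group(9500, baseline=0.0)        # over-represented group
    X_b, y_b = make_group(500, baseline=1.0)         # under-represented group

    model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

    print("error rate, group A:", round(1 - model.score(X_a, y_a), 3))
    print("error rate, group B:", round(1 - model.score(X_b, y_b), 3))

The learned cutoff ends up roughly where group A's data puts it, so group B's error rate comes out several times higher even though the model never sees group membership at all.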
I'm white. I grew up around a sea of white faces. Often when watching a movie filled with a cast of non-white faces, I will have trouble distinguishing one actor from another, especially if they are dressed similarly. This sometimes happens in movies with faces similar to the kinds I grew up surrounded by, but less so.
So unfortunately, yes, I probably do have more trouble distinguishing one black face from another vs one white face from another.
This is known as the cross-race effect and it's only something I became aware of in the last 5-10 years.
Add to that the fallibility of human memory, and I can't believe we still even use line ups. Are there any studies about how often line ups identify the wrong person?
The shoplifting incident occurred in October 2018, but it wasn’t until March 2019 that the police uploaded the security-camera images to the state image-recognition system, and they then waited until the following January to arrest Williams. Unless there was something special about that date in October, there is no way for anyone to remember what they might have been doing on a particular day 15 months earlier. And, as it turns out, the NPR report states that the police did not even try to ascertain whether or not he had an alibi.
Also, after 15 months, there is virtually no chance that any eye-witness (such as the security guard who picked Williams out of a line-up) would be able to recall what the suspect looked like with any degree of certainty or accuracy.
This WUSF article [1] includes a photo of the actual “Investigative Lead Report” and the original image is far too dark for anyone (human or algorithm) to recognise the person. It’s possible that the original is better quality and that more detail could be discerned by applying image-processing filters – but it still looks like a very noisy source.
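For what it's worth, the kind of image-processing filter I mean is nothing exotic; local contrast equalization is a few lines (a minimal sketch with OpenCV; "lead_report_crop.jpg" is a stand-in filename, not the actual evidence file):

    import cv2

    # Hypothetical example: brighten/equalize a very dark probe image before eyeballing it.
    img = cv2.imread("lead_report_crop.jpg")
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)                   # operate on lightness only
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # local contrast equalization
    out = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
    cv2.imwrite("lead_report_crop_equalized.jpg", out)

But equalization only redistributes the information that's already there; it can't recover detail the sensor never captured, which is why a noisy source stays a noisy source.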
That same “Investigative Lead Report” also clearly states that “This document is not a positive identification … and is not probable cause to arrest. Further investigation is needed to develop probable cause of arrest”.
The New York Times article [2] states that this facial recognition technology that the Michigan tax-payer has paid millions of dollars for is known to be biased and that the vendors do “not formally measure the systems’ accuracy or bias”.
Finally, the original NPR article states that
> "Most of the time, people who are arrested using face recognition are not told face recognition was used to arrest them," said Jameson Spivack
[1] https://www.wusf.org/the-computer-got-it-wrong-how-facial-re...
[2] https://www.nytimes.com/2020/06/24/technology/facial-recogni...
The linked story is audio only and is associated with the Morning Edition broadcast, but the full story appears under our Special Series section.
(I work for NPR)
> The detective turned over the first piece of paper. It was a still image from a surveillance video, showing a heavyset man, dressed in black and wearing a red St. Louis Cardinals cap, standing in front of a watch display. Five timepieces, worth $3,800, were shoplifted.
> “Is this you?” asked the detective.
> The second piece of paper was a close-up. The photo was blurry, but it was clearly not Mr. Williams. He picked up the image and held it next to his face.
All the preceding grafs are told in the context of "this is what Mr. Williams said happened", most explicitly this one:
> “When’s the last time you went to a Shinola store?” one of the detectives asked, in Mr. Williams’s recollection.
According to the ACLU complaint, the DPD and prosecutor have refused FOIA requests regarding the case:
https://www.aclu.org/letter/aclu-michigan-complaint-re-use-f...
> Yet DPD has failed entirely to respond to Mr. Williams’ FOIA request. The Wayne County Prosecutor also has not provided documents.
Facial recognition technology flagged 26 California lawmakers as criminals. (August 2019)
https://www.mercurynews.com/2019/08/14/facial-recognition-te...
"The Shinola shoplifting occurred in October 2018. Katherine Johnston, an investigator at Mackinac Partners, a loss prevention firm, reviewed the store’s surveillance video and sent a copy to the Detroit police"
"In this case, however, according to the Detroit police report, investigators simply included Mr. Williams’s picture in a “6-pack photo lineup” they created and showed to Ms. Johnston, Shinola’s loss-prevention contractor, and she identified him. (Ms. Johnston declined to comment.)"
This is the lead provided:
https://wfdd-live.s3.amazonaws.com/styles/story-full/s3/imag...
Note that it says in red and bold emphasis:
THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION. IT IS AN INVESTIGATIVE LEAD ONLY AND IS NOT PROBABLE CAUSE TO ARREST. FURTHER INVESTIGATION IS NEEDED TO DEVELOP PROBABLE CAUSE TO ARREST.
> Authorities said he was not carrying identification at the time of his arrest and was not cooperating. … an issue with the fingerprint machine ultimately made it difficult to identify the suspect, … A source said officials used facial recognition technology to confirm his identity.
https://en.wikipedia.org/wiki/Capital_Gazette_shooting#Suspe...
> Police, who arrived at the scene within a minute of the reported gunfire, apprehended a gunman found hiding under a desk in the newsroom, according to the top official in Anne Arundel County, where the attack occurred.
https://www.washingtonpost.com/local/public-safety/heavy-pol...
This doesn't really seem like an awesome use of facial recognition to me. He was already in custody after getting picked up at the crime scene. I doubt he would have been released if facial recognition didn't exist.
https://www.automaticsync.com/captionsync/what-qualifies-as-... (see section: "High Quality Captioning")
I am not aware of many TV shows that offer audio commentary for the visually impaired.
Here is an example of one that does.
https://www.npr.org/2015/04/18/400590705/after-fan-pressure-...
Edit: one source says it is 400 million new cameras: https://www.cbc.ca/passionateeye/m_features/in-xinjiang-chin...
They don't:
https://wfdd-live.s3.amazonaws.com/styles/story-full/s3/imag...
There was further work involved, there was a witness who identified the man on a photo lineup, and so on. The AI did not identify anyone, it gave a "best effort" match. All the actual mistakes were made by humans.
[1] https://github.com/NVlabs/stylegan [2] https://arxiv.org/pdf/2003.03808.pdf (ctrl+f ffhq)
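[2] is the PULSE paper, and the "looks nothing like the original" behaviour falls straight out of the method: it searches the latent space of a face generator (StyleGAN trained on FFHQ, hence the ctrl+f) for any face whose downsampled version matches the low-res input. A minimal sketch of the idea, with a DummyGenerator standing in for the real pretrained model (the actual method also keeps the latent on a sphere and adds other regularizers):

    import torch

    # Sketch of PULSE-style upsampling. DummyGenerator is a stand-in for a real
    # pretrained face GAN; only the latent code is optimized.
    class DummyGenerator(torch.nn.Module):
        latent_dim = 64
        def __init__(self):
            super().__init__()
            self.net = torch.nn.Linear(self.latent_dim, 3 * 64 * 64)
        def forward(self, z):
            return self.net(z).view(-1, 3, 64, 64)

    def upsample(low_res, G, scale=8, steps=200, lr=0.05):
        z = torch.randn(1, G.latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        down = torch.nn.AvgPool2d(scale)                  # the only link back to the input
        for _ in range(steps):
            opt.zero_grad()
            loss = ((down(G(z)) - low_res) ** 2).mean()   # match only *after* downsampling
            loss.backward()
            opt.step()
        return G(z).detach()

    G = DummyGenerator()
    low_res = torch.rand(1, 3, 8, 8)       # stand-in for the grainy probe photo
    fake_high_res = upsample(low_res, G)   # a plausible face, not necessarily the right one

Since the loss only constrains what the output looks like after downsampling, any of the countless faces that reduce to the same handful of pixels is an equally "correct" answer, and the generator's prior picks one for you.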
> "They never even asked him any questions before arresting him. They never asked him if he had an alibi. They never asked if he had a red Cardinals hat. They never asked him where he was that day," said lawyer Phil Mayor with the ACLU of Michigan.
When I was fired by an automated system, no one asked if I had done something wrong. They asked me to leave. If they had just checked his alibi, he would have been cleared. But the machine said it was him, so case closed.
Not too long ago, I wrote a comment here about this [1]:
> The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.
> When the facial recognition software combines your facial expression and your name, while you are walking under the bridge late at night, in an unfamiliar neighborhood, and you are black; your terrorist score is at 52%. A police car is dispatched.
Most of us here can be excited about facial recognition technology and still know that it's not something to be deployed in the field. It's by no means ready. We might even want to consider the ethics of building it at all, even as a toy.
But that's not how it is being sold to law enforcement or other entities. It's _Reduce crime in your cities. Catch criminals in ways never thought possible. Catch terrorists before they blow up anything._ It is sold as an ultimate decision maker.
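If it helps to make that concrete, here's a toy caricature of an "ultimate decision maker" in the spirit of the comment I quoted above (every signal, weight, and the 50% cutoff are invented; the only point is how little separates "dispatch" from "don't"):

    # Toy illustration: invented signals, invented weights, a hard cutoff,
    # and an action wired directly to the score. Nothing here measures intent.
    def terrorist_score(face_match, unfamiliar_area, late_night):
        return 0.6 * face_match + 0.25 * unfamiliar_area + 0.15 * late_night

    score = terrorist_score(face_match=0.2, unfamiliar_area=1.0, late_night=1.0)
    print(round(score, 2))            # 0.52
    if score > 0.5:                   # 0.52 clears the bar; 0.48 would not
        print("dispatch police car")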
There is clothing available that can confuse facial recognition systems. What would happen if, next time you go for your driver's license photo, you wore a T-shirt designed to confuse facial recognition, like this one? https://www.redbubble.com/i/t-shirt/Anti-Surveillance-Clothi...
https://www.nature.com/articles/s41591-020-0942-0
Bit like with self-driving cars: if it's not perfect, we don't know how to integrate it with people.
https://features.propublica.org/navy-accidents/uss-fitzgeral...
https://features.propublica.org/navy-uss-mccain-crash/navy-i...
Software allows us to work very efficiently because it speeds work up. It speeds up fucking things up just as well.
Brookings had a great post about this the other day: https://www.brookings.edu/blog/how-we-rise/2020/06/11/to-add...
If you survive violence at the hands of law enforcement and are not convicted of a crime, or if you don't and your family wants to hold law enforcement accountable, then the first option is to ask the local public prosecutor to pursue criminal charges against your attackers.
Depending on where you live, this could be a challenge, given the amount of institutional racial bias in the justice system and how closely prosecutors tend to work with police departments. After all, if prosecutors were aggressively going after police brutality cases, there likely wouldn't be as much of a problem as there is.
If that's fruitless, you would need to seek the help of a civil rights attorney to push your case in the legal system and/or the media. This is where a lot of higher-profile cases like this end up - and often only because they were recorded on video.