And here's a 1st-person account from the arrested man: https://www.washingtonpost.com/opinions/2020/06/24/i-was-wro...
It would be a great result if a court declared that the use of racially biased facial recognition software is a violation of the 14th Amendment, and enjoined PDs from using such software unless it can be demonstrated to be free of racial bias.
Fortunately, that is changing, but not that quickly.
Of course, I really blame the AI/ML hucksters for part of this mess; they have sold us the idea of machines replacing rather than augmenting human decision making.
The issue of face recognition algorithms performing worse on dark faces is a major problem. But the other side of it is: would police be more hesitant to act on such fuzzy evidence if the top match appeared to be a middle-class Caucasian (i.e. someone who is more likely to take legal recourse)?
If only their elected officials would listen to them...
I'll be watching this case with great interest
What's hilarious is that it makes faces that look nothing like the original high-resolution images.
Facial recognition produces potential matches. It's still up to humans to look at footage themselves and use their judgment as to whether it's actually the same person or not, as well as to judge whether other elements fit the suspect or not.
The problem here is 100% on the cop(s) who made that call for themselves, or intentionally ignored obvious differences. (Of course, without us seeing the actual images in question, it's hard to judge.)
There are plenty of dangers with facial recognition (like using it at scale, or to track people without accountability), but this one doesn't seem to be it.
A computer can make a mistake with literally any person who has a publicly available photo (which is almost everyone).
Also, the facial recognition technologies are provably extremely racially biased.
Third, Robert’s arrest demonstrates why claims that face recognition isn’t dangerous are far-removed from reality. Law enforcement has claimed that face recognition technology is only used as an investigative lead and not as the sole basis for arrest. But once the technology falsely identified Robert, there was no real investigation.
I fear this is going to be the norm among police investigations.
Maybe he can go after the makers of the facial recognition software, but they can probably point their finger at the cops for using it wrong.
So, in any case, the guy will be left with a big legal bill.
In this case, it's incumbent on the software vendors to ensure that less-than-certain results aren't even shown to the user. American police can't generally be trusted to understand nuance and/or do the right thing.
I still think it's insane. We have falling crime rates and we still arm ourselves as fast as we can. Humanity could live without face recognition and we wouldn't even suffer any penalties. Nope, people need to sell their evidently shitty ML work.
I disagree. There is plenty of blame on the cops who made that call for themselves, true.
But there doesn't have to be a single party who is at fault. The facial recognition software is badly flawed in this dimension. It's well established that the current technologies are racially biased. So there's at least some fault in the developer of that technology, and the purchasing officer at the police department, and a criminal justice system that allows it to be used that way.
Reducing a complex problem to a single at-fault person produces an analysis that will often let other issues continue to fester. Consider if the FAA always stopped the analysis of air crashes at: "the pilot made an error, so we won't take any other corrective actions other than punishing the pilot". Air travel wouldn't be nearly as safe as it is today.
While we should hold these officers responsible for their mistake (abolish QI so that these officers could be sued civilly for the wrongful arrest!), we should also fix the other parts of the system that are obviously broken.
The idea behind inclusion is that this product would never have made it to production if the engineering teams, product team, executive team, and board members represented the population. But even enough representation that there is a countering voice would be better than none.
Would have just been "this edge case is not an edge case at all, axe it."
Accurately addressing a market is the point of the corporation more than an illusion of meritocracy amongst the employees.
Yes, of course it is. Orders of magnitude more people could be negatively and undeservedly affected by this for no other reason than the fact that it's now cheap enough and easy enough to use by the authorities.
Just to give one example I came up with right now, in the future the police could stop you, take your picture and automatically have it go through its facial recognition database. Kind of like "stop and scan".
Or if the street cameras get powerful enough (and they will), they could take your picture automatically while driving and then stop you.
Think of it like a "TSA system for the roads". A lot more people will be "randomly picked" by these systems from the roads.
But no, it's AI, it's magical, and it must be right.
The store clerk (who hadn't witnessed the crime and was going off the same frame of video fed into the facial recognition software) said the driver's license photo was a match.
There are several problems with the conduct of the police in this story but IMHO the use of facial recognition is not the most egregious.
The practice of disclosing one's residence address to the state (for sale to data brokers[1] and accessible by stalkers and the like) when these kinds of abuses are happening is something that needs to stop. There's absolutely no reason that an ID should be gated on the state knowing your residence. It's none of their business. (It's not on a passport. Why is it on a driver's license?)
[1]: https://www.newsweek.com/dmv-drivers-license-data-database-i...
Who decided to use this software for this purpose, despite these bad flaws and well established bias? The buck stops with the cops.
It's also poor practice to search a database using a photo or even DNA to go fishing for a suspect. A closest match will generally be found even if the actual perpetrator isn't in the database. I think on some level the authorities know this, which is why they don't seed the databases with their own photos and DNA.
I just did. 3 minutes wasn't that bad and I wasn't somewhere where it would be a problem.
> Why do sites do this?
NPR is a radio network. I have seen that often they do transcribe their clips. I am not sure what the process they have for that looks like, but it seems this particular clip didn't get transcribed.
Edit: looks like they do have a transcription mentioned elsewhere in the thread. So seems like some kind of UI fail.
This is the story that gets attention, though, despite it representing an improvement in likely every potential metric you can measure.
The response is what is interesting to me. It triggers a 1984 reflex, resulting in people attempting to reject a dramatic enhancement in law enforcement ostensibly because it is not perfect, or because they believe it a threat to privacy. I think people who are rejecting it should dig deep into their assumptions and reasoning to examine why they are really opposed to technology like this.
I don't think anybody actually believes that.
I'm pretty sure the exact opposite is true: People expect AI to fail, because they see it fail all the time in their daily use of computers, for example in voice recognition.
> Worse, its reported confidence for an individual face may be grossly overstated, since that is based on all the data it was trained on, rather than the particular subset you may be dealing with.
At the end of the day, this is still human error. A human compared the faces and decided they looked alike enough to go ahead. The whole thing could've happened without AI, it's just that without AI, processing large volumes of data is infeasible.
The justification for depriving someone of their liberty lies solely with the arresting officer. They can base that on whatever they want, as long as they can later justify it to a court.
For example, you might have a trusted informant who could tell you who committed a local burglary, just this on its own could be legitimate grounds to make an arrest. The same informant might walk into a police station and tell the same information to someone else, for that officer, it might not be sufficient to justify an arrest.
Stop using Amazon Ring and similar doorbell products.
Yes.
> Does it return a list (sorted by confidence) of possible suspects,
Yes.
> ... or any other kind of feedback that would indicate even to a layperson how much uncertainty there is?
Yes it does. It also states in large print heading “THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION IT IS AN INVESTIGATIVE LEAD AND IS NOT PROBABLE CAUSE TO ARREST”.
You can see a picture of this in the ACLU article.
The police bungled this badly by setting up a fake photo lineup with the loss prevention clerk who submitted the report (who had only ever seen the same footage they had).
However, tools that are ripe for misuse do not get a pass because they include a bold disclaimer. If the tool/process cannot prevent misuse, the tool/process is broken and possibly dangerous.
That said, we have little data on how often the tool results in catching dangerous criminals versus how often it misidentifies innocent people. We have little data on if those innocent people tend to skew toward a particular demographic.
But I have a fair suspicion that dragnet techniques like this unfortunately can be both effective and also problematic.
Essentially, an employee of the facial recognition provider forwarded an "investigative lead" for the match they generated (which does have a score associated with it on the provider's side, but it's not clear if the score is clearly communicated to detectives as well), and the detectives then put the photo of this man into a "6 pack" photo line-up, from which a store employee then identified that man as being the suspect.
Everyone involved will probably point fingers at each other, because the provider for example put large heading on their communication that, "this is not probable cause for an arrest, this is only an investigative lead, etc.", while the detectives will say well we got a hit from a line-up, blame the witness, and the witness would probably say well the detectives showed me a line-up and he seemed like the right guy (or maybe as is often the case with line-ups, the detectives can exert a huge amount of bias/influence over witnesses).
EDIT: Just to be clear, none of this is to say that the process worked well or that I condone this. I think the data, the technology, the processes, and the level of understanding on the side of the police are all insufficient, and I do not support how this played out, but I think it is easy enough to provide at least some pseudo-justification at each step along the way.
I think there's genuine cause for concern here, especially if technologies like these are candidates for inclusion in any real law enforcement decision-making.
At what point can we decide that people in positions of power are not and will not ever be responsible enough to handle this technology?
Surely as a society we shouldn’t continue to naively assume that police are “responsible” like we’ve assumed in the past?
Plenty of blame to go around.
edit: looks like there's a text version of the article. I'm assuming this is a CMS issue: there's an audio story and a "print story", but the former hadn't been linked to the latter: https://news.ycombinator.com/item?id=23628790
Afterward, a couple people asked me to put together a list of the examples I cited in my talk. I'll be adding this to my list of examples:
* A hospital AI algorithm discriminating against black people when providing additional healthcare outreach by amplifying racism already in the system. https://www.nature.com/articles/d41586-019-03228-6
* Misdiagnosing people of African descent with genomic variants misclassified as pathogenic due to most of our reference data coming from European/white males. https://www.nejm.org/doi/full/10.1056/NEJMsa1507092
* The dangers of ML in diagnosing Melanoma exacerbating healthcare disparities for darker skinned people. https://jamanetwork.com/journals/jamadermatology/article-abs...
And some other relevant, but not healthcare examples as well:
* When Google's hate speech detecting AI inadvertently censored anyone who used vernacular referred to in this article as being "African American English". https://fortune.com/2019/08/16/google-jigsaw-perspective-rac...
* When Amazon's AI recruiting tool inadvertently filtered out resumes from women. https://www.reuters.com/article/us-amazon-com-jobs-automatio...
* When AI criminal risk prediction software used by judges in deciding the severity of punishment for those convicted predicts a higher chance of future offence for a young, black first time offender than for an older white repeat felon. https://www.propublica.org/article/machine-bias-risk-assessm...
And here's some good news though:
* A hospital used AI to enable care and cut costs (though the reporting seems to over simplify and gloss over enough to make the actual analysis of the results a little suspect). https://www.healthcareitnews.com/news/flagler-hospital-uses-...
Because a false positive ruins lives? Is that not sufficient? This man’s arrest record is public and won’t disappear. Many employers won’t hire if you have an arrest record (regardless of conviction). His reputation is also permanently smeared. These records are permanently public and in fact some counties publish weekly arrest records on their websites and in newspapers (not that newspapers matter much anymore)
Someday this technology may be better and work more reliably. We’re not there yet. Right now it’s like the early days of voice recognition from the ‘90s.
Presumably, the facial recognition software would provide an additional filter/sort. But at least in my situation, I could actually see how big the total pool of potential matches was, and thus have a sense of uncertainty about false positives, even if I were completely ignorant about the impact of false negatives (i.e. what if my suspect didn't live within x miles of the scene, or wasn't a known/convicted felon?).
So the caution re: face recognition software is how it may non-transparently add confidence to this already very imperfect filtering process.
(in my case, the suspect was eventually found because he had committed a number of robberies, including being clearly caught on camera, and in an area/pattern that was easy to narrow down where he operated)
This is absurdly dangerous. The AI will find people who look like the suspect; that's how the technology works. A lineup as evidence will almost guarantee a bad outcome, because of course the man looks like the suspect!
Honest question: does race predict legal recourse when decoupled from socioeconomic status, or is this an assumption?
Of course we shouldn't assume it, but we absolutely should require it.
Uncertainty is a core part of policing which can't be removed.
Having a picture or just a description of the face is one of the most important pieces of information the police have in order to do actual policing. You can be arrested for just broadly matching the description if you happen to be in the vicinity.
Had the guy been convicted of anything just based on that evidence, this would be a scandal. As it is, a suspect is just a suspect and this kind of thing happens all the time, because humans are just as fallible. It's just not news when there's no AI involved.
Even disregarding the moral hazard of selecting an appropriate training set, the problem is that ML-based techniques are inherently biased. That's the entire point, to boil down a corpus of data into a smaller model that can generate guesses at results. ML is not useful without the bias.
The problem is that bias is OK in some contexts (guessing at letters that a user has drawn on a digitizer) and absolutely wrong in others (needlessly subjecting an innocent person to the judicial system and all of its current flaws). The difference is in four areas: how easily one can correct for false positives/negatives, how easy it is to recognize false output, how the data and results relate to objective reality, and how destructive bad results may be.
When Amazon product suggestions start dumping weird products on me because they think viewing pages is the same as showing interest in the product (vs. guffawing at weird product listings that a Twitter personality has found), the damage is limited. It's just a suggestion that I'm free to ignore. In particularly egregious scenarios, I've had to explain why weird NSFW results were showing up on my screen, but thankfully the person I'm married to trusts me.
When a voice dictation system gets the wrong words for what I am saying, fixing the problem is not hard. I can try again, or I can restart with a different modality.
In both of the previous cases, the ease of detection of false positives is simplified by the fact that I know what the end result should be. These technologies are assistive, not generative. We don't use speech recognition technology to determine what we are attempting to say, we use it to speed up getting to a predetermined outcome.
The product suggestion and dictation issues are annoying when encountering them because they are tied to an objective reality: finding products I want to buy, communicating with another person. They're only "annoying" because the mitigation is simple. Alternatively, you can just dispense with the real world entirely. When a NN "dreams" up pictures of dogs melting into a landscape, that is completely disconnected from any real thing. You can't take the hallucinated dog pictures for anything other than generative art. The purpose of the pictures is to look at the weird results and just say, "ah, that was interesting".
But facial recognition and "depixelization" fail on the first three counts, because they are attempts to reconnect the ML-generated results to a thing that exists in the real world, we don't know what the end results should be, and we (as potential users of the system) don't have any means of adjusting the output or escaping to a different system entirely. And when combined with the purpose of law enforcement, they fail on the fourth aspect, in that the modern judicial system in America is singularly optimized for prosecuting people, not determining innocence or guilt, but getting plea bargain deals out of people. Only 10% of criminal cases go to trial. 99% of civil suits end in a settlement rather than a judgement (with 90% of the cases settling before ever going to trial). Even in just this case of the original article, this person and his family have been traumatized, and he has lost at least a full day of productivity, if not much, much more from the associated fallout.
When a company builds and markets a product that harms people, they should be held liable. Due to the very nature of how machine vision and learning techniques work, they'll never be able to address these problems. And the combination of failure in all four categories makes them particularly destructive.
The probability of finding an innocent person with a similar enough face that the witness can be fooled is much higher with AI.
I see this all the time when working with execs. I have to continually remind even very smart people with STEM undergrad and even graduate degrees that a computer vision system cannot magically see things that are invisible to the human eye.
"the computer said so" is way stronger than you would think.
I think the issue is that regardless of the answer, it isn't decoupled in real world scenarios.
I think the solution isn't dependent upon race either. It is to ensure everyone has access to legal recourse regardless of socioeconomic status. This would have the side effect of benefiting races correlated with lower socioeconomic status more.
I'm white. I grew up around a sea of white faces. Often when watching a movie filled with a cast of non-white faces, I will have trouble distinguishing one actor from another, especially if they are dressed similarly. This sometimes happens in movies with faces similar to the kinds I grew up surrounded by, but less so.
So unfortunately, yes, I probably do have more trouble distinguishing one black face from another vs one white face from another.
This is known as the cross-race effect and it's only something I became aware of in the last 5-10 years.
Add to that the fallibility of human memory, and I can't believe we still even use line ups. Are there any studies about how often line ups identify the wrong person?
The shoplifting incident occurred in October 2018, but it wasn't until March 2019 that the police uploaded the security camera images to the state image-recognition system, and the police still waited until the following January to arrest Williams. Unless there was something special about that date in October, there is no way for anyone to remember what they might have been doing on a particular day 15 months previously. Though, as it turns out, the NPR report states that the police did not even try to ascertain whether or not he had an alibi.
Also, after 15 months, there is virtually no chance that any eye-witness (such as the security guard who picked Williams out of a line-up) would be able to recall what the suspect looked like with any degree of certainty or accuracy.
This WUSF article [1] includes a photo of the actual “Investigative Lead Report” and the original image is far too dark for anyone (human or algorithm) to recognise the person. It’s possible that the original is better quality and better detail can be discerned by applying image-processing filters – but it still looks like a very noisy source.
That same “Investigative Lead Report” also clearly states that “This document is not a positive identification … and is not probable cause to arrest. Further investigation is needed to develop probable cause of arrest”.
The New York Times article [2] states that this facial recognition technology that the Michigan tax-payer has paid millions of dollars for is known to be biased and that the vendors do “not formally measure the systems’ accuracy or bias”.
Finally, the original NPR article states that
> "Most of the time, people who are arrested using face recognition are not told face recognition was used to arrest them," said Jameson Spivack
[1] https://www.wusf.org/the-computer-got-it-wrong-how-facial-re...
[2] https://www.nytimes.com/2020/06/24/technology/facial-recogni...
> The police bungled this badly by setting up a fake photo lineup...*
FWIW, this process is similar to traditional police lineups. The witness is shown 4-6 people – one who is the actual suspect, and several that vaguely match a description of the suspect. When I was asked to identify a suspect in my robbery, the lineup included an assistant attorney who would later end up prosecuting the case. The police had to go out and find tall light-skinned men to round out the lineup.
> ... with the loss prevention clerk who submitted the report (who had only ever seen the same footage they had).
Yeah, I would hope that this is not standard process. The lineup process is already imperfect and flawed as it is even with a witness who at least saw the crime first-hand.
Presumably the coupling of the variables is not binary (dependent or independent) but variable (degrees of coupling). Presumably these variables were more tightly coupled in the past than in the present. Presumably it's useful to understand precisely how coupled these variables are today because it would drive our approach to addressing these disparities. E.g., if the variables are loosely coupled then bias-reducing programs would have a marginal impact on the disparities and the better investment would be social welfare programs (and the inverse is true if the variables are tightly coupled).
Did you think I was asking about non-real-world scenarios? And how do we know that it's coupled (or rather, the degree to which it's coupled) in real world scenarios?
> I think the solution isn't dependent upon race either. It is to ensure everyone have access to legal recourse regardless of socioeconomic status. This would have the side effect of benefiting races correlated with lower socioeconomic status more.
This makes sense to me, although I don't know what this looks like in practice.
The linked story is audio only and is associated with the Morning Edition broadcast, but the full story appears under our Special Series section.
(I work for NPR)
Let's say that there are a million people, and the police have photos of 100,000 of them. A crime is committed, and they pull the surveillance of it, and match against their database. They have a funky image matching system that has a false positive rate of 1 in 100,000 people, which is way more accurate than I think facial recognition systems are right now, but let's just roll with it. Of course, on average, this system will produce one positive hit per search. So, the police roll up to that person's home and arrest them.
Then, in court, they get to argue that their system has a 1 in 100,000 false positive rate, so there is a chance of 1 in 100,000 that this person is innocent.
Wrong!
There are ten people in the population of 1 million that the software would comfortably produce a positive hit for. They can't all be the culprit. The chance isn't 1 in 100,000 that the person is innocent - it is in fact at least 9 out of 10 that they are innocent. This person just happens to be the one person out of the ten that would match that had the bad luck to be stored in the police database. Nothing more.
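To make the arithmetic above concrete, here's a quick back-of-the-envelope sketch in Python (using only the made-up numbers from this example, not any real system's metrics):

    # Hypothetical numbers from the example above -- not from any real system.
    population = 1_000_000       # people in the city
    database_size = 100_000      # people whose photos the police hold
    false_positive_rate = 1e-5   # 1 in 100,000 per person compared

    # Expected false hits when one probe image is compared against the database.
    expected_false_hits = database_size * false_positive_rate
    print(f"Expected false hits per search: {expected_false_hits:.1f}")   # ~1.0

    # Lookalikes in the whole town, most of whom are NOT in the database.
    lookalikes = population * false_positive_rate
    print(f"People in town the software would match: {lookalikes:.0f}")   # ~10

    # If the real culprit could be any of those ~10 lookalikes, the single hit
    # returned by the search is innocent with probability of at least:
    print(f"P(innocent) >= {(lookalikes - 1) / lookalikes:.0%}")          # 90%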
This needs to be coupled with the truth that people (police) without diverse racial exposure are terrible at identifying people outside of their ethnicity. In the photo/text article they show the top of the "Investigative Lead Report" as an image. You mean to say that every cop who saw the two images side by side did not stop and say "hey, these are not the same person!"? They did not, and that's because their own brains could not see the difference.
This is a major reason police forces need to be ethnically diverse. Just that exposure enables those members of the force who never grew up or spent time outside their ethnicity to learn to tell apart a diverse range of similar but different people outside their ethnicity.
The current best practice is to have a witness pick out the suspect from 6 photos. It should be immediately obvious that right off the bat there's a 17% chance of the witness randomly picking the "right" person. It's a terrible way to do things and it's no surprise that people are wrongly convicted again and again on eyewitness testimony.
But for a photo lineup I can't imagine why you don't have least 25 photos to pick from.
They should be, however a company building and marketing a harmful product is a separate issue from cops using specious evidence to arrest a man.
Cops (QI aside) are responsible for the actions they take. They shouldn't be able to hide behind "the tools we use are bad", especially when (as a parent poster said) the tool is known to be bad in the first place and the cops still used it.
> The detective turned over the first piece of paper. It was a still image from a surveillance video, showing a heavyset man, dressed in black and wearing a red St. Louis Cardinals cap, standing in front of a watch display. Five timepieces, worth $3,800, were shoplifted.
> “Is this you?” asked the detective.
> The second piece of paper was a close-up. The photo was blurry, but it was clearly not Mr. Williams. He picked up the image and held it next to his face.
All the preceding grafs are told in the context of "this what Mr. Williams said happened", most explicitly this one:
> “When’s the last time you went to a Shinola store?” one of the detectives asked, in Mr. Williams’s recollection.
According to the ACLU complaint, the DPD and prosecutor have refused FOIA requests regarding the case:
https://www.aclu.org/letter/aclu-michigan-complaint-re-use-f...
> Yet DPD has failed entirely to respond to Mr. Williams’ FOIA request. The Wayne County Prosecutor also has not provided documents.
(2) Your argument strikes me as somewhat similar to "I feel fine why should I keep taking my medicine?". It's not exactly the same as the medicine is scientifically proven to cure disease while it's impossible to measure the impact of police on crime. But "things are getting better so we should change what we're doing" is not a particularly sound logical argument.
> purchasing officer at the police department
However, if the criminal justice system decides that this is an acceptable use of software, then the criminal justice system itself also bears responsibility.
The developer of the software also bears the responsibility for developing, marketing, and selling the software for the police department.
I agree that the PD bears the majority of the culpability here, but I disagree that it bears every ounce of fault that could exist in this scenario.
Facial recognition technology flagged 26 California lawmakers as criminals. (August 2019)
https://www.mercurynews.com/2019/08/14/facial-recognition-te...
"The Shinola shoplifting occurred in October 2018. Katherine Johnston, an investigator at Mackinac Partners, a loss prevention firm, reviewed the store’s surveillance video and sent a copy to the Detroit police"
"In this case, however, according to the Detroit police report, investigators simply included Mr. Williams’s picture in a “6-pack photo lineup” they created and showed to Ms. Johnston, Shinola’s loss-prevention contractor, and she identified him. (Ms. Johnston declined to comment.)"
This arrest happened 6 months ago. Who else besides the suspect and the police do you believe reporters should ask for "basic corroboration" of events that took place inside a police station? Or do you think this story shouldn't be reported on at all until the police agree to give additional info?
This is the lead provided:
https://wfdd-live.s3.amazonaws.com/styles/story-full/s3/imag...
Note that it says in red and bold emphasis:
THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION. IT IS AN INVESTIGATIVE LEAD ONLY AND IS NOT PROBABLE CAUSE TO ARREST. FURTHER INVESTIGATION IS NEEDED TO DEVELOP PROBABLE CAUSE TO ARREST.
The real negligence here is whoever tuned the software to spit out a result for that quality of image rather than a "not enough data, too many matches, please submit a better image" error.
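The kind of gating described here isn't exotic to build; a minimal sketch of the idea (all function names, thresholds, and score formats are hypothetical, not anything a real vendor ships):

    # A minimal sketch of that kind of gating. Everything here is hypothetical.
    def investigative_leads(probe_quality, matches, min_quality=0.6,
                            min_score=0.9, max_candidates=3):
        """matches: list of (person_id, score) pairs from the matcher."""
        if probe_quality < min_quality:
            raise ValueError("Not enough data: please submit a better image")
        strong = [(pid, s) for pid, s in matches if s >= min_score]
        if not strong:
            raise ValueError("No sufficiently confident match")
        if len(strong) > max_candidates:
            raise ValueError("Too many plausible matches; result would be meaningless")
        return strong

    # A dark, grainy frame like the one in this case would be rejected up front:
    # investigative_leads(probe_quality=0.2, matches=[("A", 0.93), ("B", 0.91)])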
Even the "good" use cases like unlocking your phone have security problems because malicious people can use photos or videos of your face and you can't change your face like you would a breached username and password.
The deeper reform that needs to happen here is that every person falsely arrested and/or prosecuted needs to be automatically compensated for their time wasted and other harm suffered. Only then will police departments have some incentive for restraint. Currently we have a perverse reverse lottery where if you're unlucky you just lose a day/month/year of your life. With the state of what we're actually protesting I'm not holding my breath (eg the privileged criminals who committed the first degree murder of Breonna Taylor still have yet to be charged), but it's still worth calling out the smaller injustices that criminal "justice" system inflicts.
They are in the business of exposing you to as many paid ads as possible. And they believe providing outgoing links reduces their ability to do that.
But literally no one in this thread is arguing to not hold them responsible.
Everyone agrees that yes, the cops and PD are responsible. It's just that some people are arguing that there are other parties that also bear responsibility.
No one thinks the cops should be able to hide behind the fact that the tool is bad. I think these cops should be fired, sued for a wrongful arrest. I think QI should be abolished so wronged party can go after the house of the officer that made the arrest in a civil court. I think the department should be on the hook for a large settlement payment.
But I also think the criminal justice system should enjoin future departments from using this known bad technology. I think we should also be mad at the technology vendors that created this bad tool.
Criminologists aren't certain about surveillance having a positive or negative effect on crime. We have more than 40 studies with mixed results. What is certain is that this kind of surveillance isn't responsible for the falling crime rates described. Most data is from the UK. Currently I don't think countries without surveillance fare worse on crime. Maybe quite the contrary.
"what we're doing" is not equivalent to increasing video surveillance or generally increasing armament in civil spaces. It may be sound logic if you extend the benefit of the doubt but it may also just be a false statement.
Since surveillance is actually constitutionally forbidden in many countries, one could argue that deployment would "increase crime".
In some other sound logic it might just be a self reinforcing private prison industry with economic interests to keep a steady supply of criminals. Would also be completely sound.
But all these discussions are quite dishonest, don't you think? I just don't want your fucking camera in my face.
> Authorities said he was not carrying identification at the time of his arrest and was not cooperating. … an issue with the fingerprint machine ultimately made it difficult to identify the suspect, … A source said officials used facial recognition technology to confirm his identity.
https://en.wikipedia.org/wiki/Capital_Gazette_shooting#Suspe...
> Police, who arrived at the scene within a minute of the reported gunfire, apprehended a gunman found hiding under a desk in the newsroom, according to the top official in Anne Arundel County, where the attack occurred.
https://www.washingtonpost.com/local/public-safety/heavy-pol...
This doesn't really seem like an awesome use of facial recognition to me. He was already in custody after getting picked up at the crime scene. I doubt he would have been released if facial recognition didn't exist.
https://www.automaticsync.com/captionsync/what-qualifies-as-... (see section: "High Quality Captioning")
I am not aware of many TV shows that offer audio commentary for the visually impaired.
Here is an example of one that does.
https://www.npr.org/2015/04/18/400590705/after-fan-pressure-...
Edit: one source says it is 400 million new cameras: https://www.cbc.ca/passionateeye/m_features/in-xinjiang-chin...
Our institutions and systems (and maybe humans in general) are not robust enough to cleanly handle these powers, and we are making the same mistake over and over and over again.
Human error is preferable, even if it is more frequent than the alternative, when it comes to justice. The more human the better.
Humans can be held accountable.
The technology is certainly not robust enough to be trusted to work correctly at that level yet. Even if it was improved I think there is a huge moral issue with the police having the power to use it indiscriminately on the street.
She later won on appeal in part because the defense showed that the testimony and argument of the original statisticians were wrong.
This stuff is so easy to get wrong. A little knowledge of statistics can be dangerous.
Someone I know received vehicular fines from San Francisco on an almost weekly basis solely from license plate reader hits. The documentary evidence sent with the fines clearly showed her car had been misidentified but no one ever bothered to check. She was forced to fight each and every fine because they come with a presumption of guilt, but as soon as she cleared one they would send her a new one. The experience became extremely upsetting for her, the entire bureaucracy simply didn't care.
It took threats of legal action against the city for them to set a flag that apparently causes violations attributed to her car to be manually reviewed. The city itself claimed the system was only 80-90% accurate, but they didn't believe that to be a problem.
But being biased by the skin color of the driver is (AFAIK) not one of them. Which is exactly the problem with vision systems applied to humans, at least the ones we've seen deployed so far.
If a system discriminates against a specific population, that's very different from (indiscriminately) being unreliable.
NPR is a non-profit that is mostly funded by donations. They only have minimal paid ads on their website to pay for running costs - they could easily optimize the news pages to increase ad revenue but they don't because it would get in the way of their goals.
I agree here, but doing that may lead to the prosecutors trying extra hard to find something to charge a person with after they are arrested, even if it was something trivial that would often go un-prosecuted.
Getting the details right seems tough, but doable.
Faces generated by AI means should not count as 'probable cause' to go and arrest people. They should count as fantasy.
This is not correct. The "6-pack" was shown to a security firm's employee, who had viewed the store camera's tape.
"In this case, however, according to the Detroit police report, investigators simply included Mr. Williams’s picture in a “6-pack photo lineup” they created and showed to Ms. Johnston, Shinola’s loss-prevention contractor, and she identified him." [1]
[1] ibid.
They don't:
https://wfdd-live.s3.amazonaws.com/styles/story-full/s3/imag...
There was further work involved, there was a witness who identified the man on a photo lineup, and so on. The AI did not identify anyone, it gave a "best effort" match. All the actual mistakes were made by humans.
The mistake is to treat any police department as a good-faith participant in the goal of reducing police violence. Any tool you give them will be used to brutalize. The only solution is to give them less.
While there's a lot of head nodding, nothing is ever actually addressed in day to day operations. Data scientists barely know what's going on when they throw things through TensorFlow. What matters is the outcome and the confusion matrix at the end.
I say this as someone who works in data and implements AI/ML platforms. Mr. Williams needs to find the biggest ambulance-chasing lawyer and file civil suits not only against the law enforcement agencies involved, but against everyone at DataWorks, top down, from the president to the data scientist to the lowly engineer who put this in production.
These people have the power to ruin lives. They need to be made an example of and held accountable for the quality of their work.
[1] https://github.com/NVlabs/stylegan [2] https://arxiv.org/pdf/2003.03808.pdf (ctrl+f ffhq)
If I'm searching for a murderer in a town of 1000, it takes about 10 independent bits of evidence to get the right one. And when I charge someone, I must already have the vast majority of that evidence. To say "oh well we don't know that it wasn't Mr. or Mrs. Doe, let's bring them in" is itself a breach of the Does' rights. I'm ignoring 9 of the 10 bits of evidence!
Using a low-accuracy facial recognition system and a low-accountability lineup procedure to elevate some random man who did nothing wrong from presumed-innocent to 1-in-6 to prime suspect, without having the necessary amount of evidence, is committing the exact same error and is nearly as egregious as pulling a random civilian out of a hat and charging them.
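A rough back-of-the-envelope version of the "bits of evidence" framing (the numbers are illustrative, not from the actual case):

    import math

    town = 1000
    bits_needed = math.log2(town)        # ~10 bits to single out one person
    bits_from_lineup = math.log2(6)      # at most ~2.6 bits from a 6-pack pick,
                                         # and far less given how the pack was built
    print(f"needed: {bits_needed:.1f} bits, lineup gives at most: {bits_from_lineup:.1f} bits")
    print(f"shortfall: {bits_needed - bits_from_lineup:.1f} bits of evidence still missing")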
The 4th sentence says: "Detectives zoomed in on the grainy footage..."
> "They never even asked him any questions before arresting him. They never asked him if he had an alibi. They never asked if he had a red Cardinals hat. They never asked him where he was that day," said lawyer Phil Mayor with the ACLU of Michigan.
When I was fired by an automated system, no one asked if I had done something wrong. They asked me to leave. If they had just checked his alibi, he would have been cleared. But the machine said it was him, so case closed.
Not too long ago, I wrote a comment here about this [1]:
> The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.
> When the facial recognition software combines your facial expression and your name, while you are walking under the bridge late at night, in an unfamiliar neighborhood, and you are black; your terrorist score is at 52%. A police car is dispatched.
Most of us here can be excited about facial recognition technology but still know that it's not something to be deployed in the field. It's by no means ready. We might even consider the ethics before building it as a toy.
But that's not how it is being sold to law enforcement or other entities. It's _Reduce crime in your cities. Catch criminals in ways never thought possible. Catch terrorists before they blow up anything._ It is sold as an ultimate decision maker.
>When Amazon's AI recruiting tool inadvertantly filtered out resumes from women
>When Google's hate speech detecting AI inadvertantly censored anyone who used vernacular referred to in this article as being "African American English
There's simply no indication that these aren't statistically valid priors. And we have mountains of scientific evidence to the contrary, but if I dared post anything (cited, published literature) I'd be banned. This is all based on the unfounded conflation between equality of outcome and equality of opportunity, and the erasure of evidence of genes and culture playing a role in behavior and life outcomes.
This is bad science.
Many of these cops are earning $200k plus annually! Our law enforcement system is ridiculous and needs an overhaul.
1) Make it avoid black people, i.e. they aren't stored in the database and aren't processed when scanned.
2) Put a 5 year hiatus on commercial / public use.
Either of these things are more acceptable than too many false positives. #1 is really interesting to me as a thought experiment because it makes everyone think twice.
> the erasure of evidence of genes and culture playing a role in behavior and life outcomes
are concerning.
Maybe we just outlaw face recognition in criminal justice entirely.
* Revenue from tickets/fines/etc. shouldn't go into police pockets or it just incentivizes rent-seeking behavior by people with law enforcement powers
* Settlements should come out of their budget not city insurance.
This is why I've never stopped for receipt checks, because it's my receipt, and I've paid. The security theatre is just bad comedy.
Just because the machine says I've done a no no, doesn't mean I can't come back and win a lawsuit later. It doesn't absolve cops of doing their jobs. I have a winning complexion, so I'll never enjoy a false positive, but if I do, I'll make sure it bankrupts whatever startup incubator garbage decided to shill a replacement for real law enforcement.
You can join this movement by urging your local government officials to follow suit.
Exactly what I thought when I read about this. It's not like humans are great at matching faces either. In fact machines have been better at facial recognition for over a decade now. I bet there are hundreds of people (of all races) in prison right now who are there simply because they were mis-identified by a human. Human memory, even in the absence of bias and prejudice, is pretty fallible.
There is a notion of "mixture of experts" in machine learning. It's when you have two or more models that are not, by themselves, sufficiently good to make a robust prediction, but that make different kinds of mistakes, and you use the consensus estimate. The resulting estimate will be better than any model in isolation. The same should be done here - AI should be merely a signal, it is not a replacement for detective work, and what's described in the article is just bad policing. AI has very little to do with that.
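As a rough illustration of "AI as one signal among several": everything below is hypothetical (the signal names, scores, and thresholds are made up, and real investigative corroboration obviously isn't a single numeric score), but it shows the shape of a consensus check rather than acting on one hit.

    from dataclasses import dataclass

    @dataclass
    class Signal:
        name: str
        score: float  # hypothetical confidence in [0, 1] from one independent source

    def enough_to_act(signals, min_agreeing=3, threshold=0.8):
        """Treat each source as one fallible expert; only escalate when several
        independent signals agree, never on a face-recognition hit alone."""
        return sum(s.score >= threshold for s in signals) >= min_agreeing

    leads = [
        Signal("face_recognition_match", 0.86),
        Signal("eyewitness_identification", 0.55),  # weak: witness only saw the footage
        Signal("alibi_check", 0.0),                 # never performed in this case
        Signal("physical_evidence", 0.0),
    ]
    print(enough_to_act(leads))  # False -> keep investigating instead of arresting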
Software can kill. This software can kill 50% of black people.
There is clothing available that can confuse facial recognition systems. What would happen if, next time you go for your drivers license photo, you wore a T shirt designed to confuse facial recognition, for example like this one? https://www.redbubble.com/i/t-shirt/Anti-Surveillance-Clothi...
Great job police
This is a meaningless statement, you could choose literally any number for this statement, because you are missing the denominator.
The largest, overarching idea is to get everyone to think twice by making the majority think twice. If white people think "it's only for us!?", it'll make them really study the effects. (I'm white.)
Even if it was correct 99% of the time, we need to recognize that software can make mistakes. It is a tool, and people need to be responsible enough to use it correctly. I think I agree with your general idea here, but to put all of the blame on software strikes me as an incomplete assessment. Technically the software isn't killing anyone, irresponsible users of it are.
After reading your story, I am very glad that we probably have in aggregate 2 or 3 full-time employees doing things that might be automated away. It's not like that prevents mindless bureaucracy of all sorts, but something like your situation would certainly never happen.
Sure, but at this point, we know how irresponsible users often are; we know this to be an absolute fact. If the fact of users’ irresponsibility isn’t the centerpiece of our conversations, then we’re being incredibly irresponsible ourselves.
The material manifestations of how these tools will be used has to remain at the center if researchers place any value whatsoever on our ethical responsibilities.
I had forgotten about the routine of fighting traffic tickets multiple times a year as a fact of life. Let alone fender benders. I had only been reveling in the lack of a frustrating commute.
Last decade I did get a car for 3 months, and the insurance company was so thrilled that I was "such a good driver" because of my "spotless record" for many years. Little do they know I just don't drive and perhaps have now less experience than others. Although tangentially, their risk matrix actually might be correct, if I can afford to live in dense desirable areas then maybe it is less likely that I would be going fast and getting into circumstances that pull from their insurance pool at larger amounts.
They probably thought "one of the largest companies in the world probably chauffeurs him down the highway in a bus anyway"
I have written great software, yet it sometimes had bugs or unintended consequences. I cannot imagine how I'd feel if it were to accidentally alter someone's life negatively like this.
that's what happens if you're lucky
Call me weak, but I think about the "what ifs" a bit too much in those cases. What if my bug keeps them from selling their stock and they lose their savings? What if the wrong person is arrested, etc?
"I guess the computer got it wrong" is a terrifying thing for a police officer to say.
It's beyond irresponsibility - it's actively malevolent. There unfortunately are police officers, as demonstrated by recent high profile killings by police, who will use the thinnest of pretexts, like suspicion of paying with counterfeit bills, to justify the use of brutal and lethal force.
If such people are empowered by a facial recognition match, what's to stop them from similarly using that as a pretext for applying disproportionate brutality?
Even worse, a false positive match triggered arrest may be more likely to escalate to violence because the person being apprehended would be rightfully upset that they were being targeted, and appear to be resisting arrest.
Irresponsible users, yes, but in the users who are using the software as it was marketed for use.
How did the people in the 6 pack photo line-up match up against the facial recognition? Were they likely matches?
This case of "one in a million" does not happen frequently.
So unequal treatment based on race has quite literally been a feature of the US justice system, independent of socioeconomic status.
None of those selling points logically lead to the conclusion that it is the ultimate decisions maker.
Similarly we shouldn’t collect vast databases of fingerprints or DNA and search them for every crime.
Why? Because error rates are unavoidable. There is some uncertainty, and in large enough numbers you will find false matches with perfect DNA matching.
We must keep our senses and use these technologies to help us rather than find the hidden bad guy.
Facial recognition software doesn't have the level of reliability that control software for mechanical systems has. And if a mistake is made, the consequences to the LEO have been historically minimal. Shoot first and ask questions later has been deemed acceptable conduct, so why not implicitly trust in the software? If it's right and you kill a terrorist, you're a hero. If it's wrong and you kill a civilian, the US Supreme Court has stated, "Where the officer has probable cause to believe that the suspect poses a threat of serious physical harm, either to the officer or to others, it is not constitutionally unreasonable to prevent escape by using deadly force." The software provides probable cause, the subject's life is thereby forfeit. From the perspective of the officer, seems a no-brainer.
https://www.nature.com/articles/s41591-020-0942-0
Bit like with self driving cars - if it's not perfect we don't know how to integrate it with people
One more thing: the article was being too dramatic about the whole incident.
Do you work in a commercial software firm? Have you ever seen your salespeople talk with their customer contacts?
The salespeople and marketing departments at the firms that make this technology and target law enforcement markets are, 100%, full stop, absolutely making claims that you can trust the software to have full control over the situation, and you, the customer, should not worry about whether the software should or should not have that control.
Being able to use something "irresponsibly" and disclaim responsibility because AI made the decision is. a. selling. point. Prospective customers want. to. give. up. that. authority. and. that. responsibility.
Making the sort of decisions we ask this shit to make is hard, if you're a human, because it's emotionally weighty and fraught with doubt, and it should be, because the consequences of making the wrong decision are horrific. But if you're a machine, it's not so hard, because we didn't teach the machines to care about anything other than succeeding at clearly-defined tasks.
It's very easy to make the argument that the machines can't do much more, because that argument is correct given what tech we have currently. But that's not how the tech is sold--it becomes a miracle worker, a magician, because that's what it looks like to laypeople who don't understand that it's just a bunch of linear algebra cobbled together into something that can decide a well-defined question. Nobody's buying a lump of linear algebra, but many people are quite willing to buy a magical, infallible oracle that removes stressful, difficult decisions from their work, especially in the name of doing good.
tl;dr capitalism is a fuck. we can pontificate about the ethical use of the Satan's toys as much as we like; all that banter doesn't matter much when they're successfully sold as God's righteous sword.
https://features.propublica.org/navy-accidents/uss-fitzgeral...
https://features.propublica.org/navy-uss-mccain-crash/navy-i...
Software allows us to work very efficiently because it can speed work up. It can speed us up when fucking things up just as well.
Brookings had a great post about this the other day: https://www.brookings.edu/blog/how-we-rise/2020/06/11/to-add...
It's like in human hierarchies - it's often not the person who is more likely to make the best decision who gets to decide, it's the one who is going to bear the consequences of being wrong.
THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION. IT IS AN INVESTIGATIVE LEAD ONLY AND IS _NOT_ PROBABLE CAUSE TO ARREST. FURTHER INVESTIGATION IS NEEDED TO DEVELOP PROBABLE CAUSE TO ARREST.
I mean, what else could the technologists have done?
But in the US, I've heard that it can make it harder to get a job.
I believe I'm starting to get a feel for how the school to prison pipeline may work.
So that if we print 52% on the screen, that means we've already gathered like 30-bits of evidence (30 coin flips all coming up heads), at which point the suspicion would be real.
Also, can everyone afford to pursue lawsuits?
Then the justice system would implode. Judicial policy is "software" too, and nobody holds the judiciary or police to that absurd level of excellence, even if we're talking about the death penalty.
The consequence that is much worse would be mass incarceration of certain groups, because the AI is too good at catching people who actually did something.
This second wave of mass incarceration will lead to even more single parent families and poor households, and will reinforce the current situation.
A cop stopping someone that has a resemblance to a criminal for questioning seems like a good thing to me, as long as the cop knows that there's a reasonable chance it's the wrong guy.
Though the suffering of the victims of such wrong matches is real, one consolation is that more of such cases will hopefully bring about the much needed scepticism in the results so that some old-fashioned validation/investigation is done.
And that's also the core argument why some countries abolished death penalty.
Consider that not everyone understands how machine learning, and specifically classifier algorithms work. When a police officer is told the confidence level is above 75% he's going to think that's a low chance of being wrong. He does not have the background in math to realize that given a large enough real population size being classified via facial recognition, a 75% confidence level is utterly useless.
The reported 75% confidence level is only valid when scanning a population size that is at most as large as the training data set's. However, we have no way of decreasing that confidence level to be accurate when comparing against the real world population size of an area without simply making the entire real population the training set. And none of that takes circumstances like low light level or lens distortion into account. The real confidence of a match after accounting for those factors would put nearly all real world use cases below 10%.
Now imagine that the same cop you have to explain this to has already been sold this system by people who work in sales and marketing. Any expectation that ALL police officers will correctly assess the system's results and behave accordingly fails to recognize that cops are human, and above all, cops are not mathematicians or data scientists. Perhaps there are processes to give police officers actionable information and training that would normally avoid problems, but all it takes is one cop getting emotional about one possible match for any carefully designed system to fail.
Again, the frequency of cops getting emotional, or simply deciding that even a 10% possibility that someone they are about to question might be dangerous is too high a risk, is unlikely to change. So, providing them a system which increases their number of actionable leads, and therefore interactions with the public, can only increase the number of incidents where police end up brutalizing or even killing someone innocent.
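To put rough numbers on why a headline "confidence" score misleads at population scale, here's a toy Bayes calculation (every number below is an assumption for illustration only, not the vendor's actual metrics):

    # Toy Bayes update: what a "75% confident" match means at population scale.
    prior = 1 / 500_000          # assumed chance a randomly compared person is the culprit
    true_positive_rate = 0.75    # assumed: matcher flags the real culprit 75% of the time
    false_positive_rate = 0.001  # assumed: flags 1 in 1,000 innocent people (optimistic)

    p_match = true_positive_rate * prior + false_positive_rate * (1 - prior)
    posterior = true_positive_rate * prior / p_match
    print(f"P(culprit | match) = {posterior:.4f}")   # ~0.0015, i.e. about 0.15%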
I'd consider reading the sources I posted in my comment before responding with ill-conceived notions. Literally every single example I posted linked to the peer-reviewed scientific evidence (cited, published literature) indicating the points I summarized.
The only link I posted without peer-reviewed literature was the last one with the positive outcome, and that's the one I commented had suspect analysis.
Yes, it is. Security cannot stop you for bypassing alarms and receipt checks. They have to have definitive proof that you stole something before they can lay a hand on you. Even in membership stores like Costco, the most they can do is cancel your membership. If they do touch you, there are plenty of lawyers who will take your case and only collect payment if you win.
The average human sucks at understanding probabilities.
Until we can prove that most people handling this system are capable of smart decision making, which the latest police scandals do not lead us to believe right now, those systems should not be used.
> If they do touch you, there are plenty of lawyers who will take your case and only collect payment if you win.
This falls squarely into the genre of "yes, you are technically right, but you may have spent a week in jail and thousands to tens of thousands of dollars of time and money to prove it, for which you will not be fully compensated."
If your strategy is to get rid of all pretexts for police action, I don't think that is the right one. Instead we need to set a high standard of conduct and make sure it is upheld. If you don't understand a tool, don't use it. If you do something horrible while using a tool you don't understand, it is negligent/irresponsible/maybe even malevolent, because it was your responsibility to understand it before using it.
A weatherman saying there is a 90% chance of rain is not evidence that it rained. And I understand the fear that a prediction can be abused, and we need to make sure it isn't abused. But abolishing the weatherman isn't the way to do it.
Not at all.
> Instead we need to set a high standard of conduct and make sure it is upheld
Yes, but we should be real about what this means. The institution of law enforcement is rotten, which is why it protects bad actors to such a degree. It needs to be cleaved from its racist history and be rebuilt nearly from the ground up. Better training in interpreting results from an ML model won't be enough by a long shot.
I’m pretty sure that this can be used fairly with a rigorous Bayesian treatment.
1. Google's routing algorithm is conditioned on demographics
2. Google's routing algorithm is conditioned on income/wealth
3. Google's routing algorithm is conditioned on crime density
4. Google's routing algorithm cannot condition on anything that would disproportionately route users away from minority neighborhoods
I think the rational choice, to avoid forcing other people to take risks that they may object to, is somewhere between 2 and 3. But the current social zeitgeist seems only to allow for option 4, since an optimally sampled dataset will have very strong correlations between 1-3, to the point that in most parts of the US they would all result in the same routing bias.
If you survive violence at the hands of law enforcement and are not convicted of a crime, or if you don't and your family wants to hold law enforcement accountable, then the first option is to ask the local public prosecutor to pursue criminal charges against your attackers.
Depending on where you live, that could be a challenge, given the amount of institutional racial bias in the justice system, and how closely prosecutors tend to work with police departments. After all, if prosecutors were going after police brutality cases aggressively, there likely wouldn't be as much of a problem as there is.
If that's fruitless, you would need to seek the help of a civil rights attorney to push your case in the legal system and/or the media. This is where a lot of higher profile cases like this end up - and often only because they were recorded on video.
The volume of tickets issued is quite staggering, and each one is a huge annoyance for someone.
>The problem was that it mis-classified entire dialects of English (meaning it completely failed at determining sentiment for certain people), deleting all comments from the people of certain cultures
What happens in the case that a particular culture is more hateful? Do we just disregard any data that indicates socially unacceptable bias?
What, only Nazis are capable of hate speech?
Not by security or police, so my point still stands.
That's not what was happening. If you read the link, you'll see the problem is that the AI/ML system was mis-classifying non-hateful speech as hateful, just because of the dialect being used.
If it were the case that the culture was more hateful, then it wouldn't have been considered "mis-classification."
> You're completely missing my point.
I'm not missing your point; it's just not a well-reasoned or substantiated point. Here were your points:
> There's simply no indication that these aren't statistically valid priors.
We do have every indication that this wasn't what was happening in literally every single example I posted. You just have to read them.
> And we have mountains of scientific evidence to the contrary, but if dared post anything (cited, published literature) I'd be banned.
You say that, and yet you keep posting your point without any evidence whatsoever. Meanwhile, every single example I posted did cite peer-reviewed, published scientific evidence.
> This is all based on the unfounded conflation between equality of outcome and equality of opportunity, and the erasure of evidence of genes and culture playing a role in behavior and life outcomes.
Again, peer-reviewed published literature disagrees. Reading it explains why the point that it's all unfounded conflation is incorrect.