zlacker

[parent] [thread] 10 comments
1. crazyg+(OP)[view] [source] 2020-06-24 15:01:09
In this particular case, computerized facial recognition is not the problem.

Facial recognition produces potential matches. It's still up to humans to look at the footage themselves and use their judgment as to whether it's actually the same person, and whether other elements fit the suspect.
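
(To make that division of labor concrete, here's a toy sketch in Python of the human-in-the-loop step I mean; the names and threshold are invented for illustration, this isn't any real vendor's API:)

    # Toy sketch of the workflow described above: the software only produces
    # a ranked shortlist of candidates; a human still has to confirm or
    # reject each one. Hypothetical names, not any real product's API.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        person_id: str
        similarity: float  # 0.0 to 1.0, higher means more similar

    def candidates_for_review(matches, threshold=0.85):
        """Keep only high-similarity candidates, sorted for a human reviewer."""
        shortlist = [m for m in matches if m.similarity >= threshold]
        return sorted(shortlist, key=lambda m: m.similarity, reverse=True)

    # The output is a shortlist, not an identification. Deciding whether the
    # faces actually match, and whether to arrest, is still a human call.
    for c in candidates_for_review([Candidate("person-17", 0.91),
                                    Candidate("person-42", 0.67)]):
        print(c.person_id, c.similarity)  # only person-17 clears the threshold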

The problem here is 100% on the cop(s) who made that call for themselves, or intentionally ignored obvious differences. (Of course, without us seeing the actual images in question, it's hard to judge.)

There are plenty of dangers with facial recognition (like using it at scale, or to track people without accountability), but this case doesn't seem to be one of them.

replies(2): >>ncalla+71 >>JoeSmi+M3
2. ncalla+71[view] [source] 2020-06-24 15:06:16
>>crazyg+(OP)
> The problem here is 100% on the cop(s) who made that call for themselves

I disagree. There is plenty of blame on the cops who made that call for themselves, true.

But there doesn't have to be a single party who is at fault. The facial recognition software is badly flawed in this dimension. It's well established that the current technologies are racially biased. So there's at least some fault in the developer of that technology, and the purchasing officer at the police department, and a criminal justice system that allows it to be used that way.

Reducing a complex problem to a single at-fault person produces an analysis that will often let other issues continue to fester. Consider if the FAA always stopped its analysis of air crashes at "the pilot made an error, so we won't take any corrective action other than punishing the pilot". Air travel wouldn't be nearly as safe as it is today.

While we should hold these officers responsible for their mistake (abolish QI so that these officers could be sued civilly for the wrongful arrest!), we should also fix the other parts of the system that are obviously broken.

replies(1): >>dfxm12+L2
3. dfxm12+L2[view] [source] [discussion] 2020-06-24 15:12:22
>>ncalla+71
> The facial recognition software is badly flawed in this dimension. It's well established that the current technologies are racially biased.

Who decided to use this software for this purpose, despite these bad flaws and well established bias? The buck stops with the cops.

replies(4): >>Jtsumm+Q4 >>moron4+Ed >>goliat+ns >>ncalla+Ex
4. JoeSmi+M3[view] [source] 2020-06-24 15:16:28
>>crazyg+(OP)
You are being downvoted but you are 100% right.

The justification for depriving someone of their liberty lies solely with the arresting officer. They can base that on whatever they want, as long as they can later justify it to a court.

For example, you might have a trusted informant who tells you who committed a local burglary; that on its own could be legitimate grounds to make an arrest. If the same informant walked into a police station and told the same information to a different officer, it might not be sufficient for that officer to justify an arrest.

5. Jtsumm+Q4[view] [source] [discussion] 2020-06-24 15:21:12
>>dfxm12+L2
The cops, the politicians who fund them, the voters who elect the politicians (and possibly some of the higher-up police ranks), the marketers who sold it to the politicians and cops, the management that directed marketing to sell to law enforcement, the developers who let management sell a faulty product, and the developers who produced a faulty product.

Plenty of blame to go around.

6. moron4+Ed[view] [source] [discussion] 2020-06-24 15:55:23
>>dfxm12+L2
There's also the company that built the software and marketed it to law enforcement.

Even disregarding the moral hazard of selecting an appropriate training set, the problem is that ML-based techniques are inherently biased. That's the entire point: to boil down a corpus of data into a smaller model that can generate guesses at results. ML is not useful without the bias.
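
To make that concrete, here is a minimal sketch (Python with numpy and scikit-learn, purely synthetic data, nothing to do with any actual face recognition vendor) of how an imbalanced training corpus alone can push the error rate onto the underrepresented group:

    # Minimal sketch: a "same person / different person" classifier trained
    # mostly on group A's data. Purely synthetic; every number here is made up.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_pairs(n, shift):
        # class 0 = "different person" pairs, class 1 = "same person" pairs
        diff = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 8))
        same = rng.normal(loc=1.0 + shift, scale=1.0, size=(n, 8))
        return np.vstack([diff, same]), np.array([0] * n + [1] * n)

    # Group A dominates the training corpus; group B is scarce and its
    # feature distribution sits slightly off from A's.
    Xa, ya = make_pairs(5000, shift=0.0)
    Xb, yb = make_pairs(200, shift=0.6)
    clf = LogisticRegression(max_iter=1000).fit(np.vstack([Xa, Xb]),
                                                np.concatenate([ya, yb]))

    def false_match_rate(X, y):
        # fraction of true "different person" pairs the model calls a match
        return (clf.predict(X)[y == 0] == 1).mean()

    Xa_t, ya_t = make_pairs(2000, shift=0.0)
    Xb_t, yb_t = make_pairs(2000, shift=0.6)
    print("false matches, group A:", false_match_rate(Xa_t, ya_t))
    print("false matches, group B:", false_match_rate(Xb_t, yb_t))

Nobody wrote "be biased" anywhere in that code; the decision boundary simply gets fit to the data it mostly saw, so group B's false-match rate comes out higher.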

The problem is that bias is OK in some contexts (guessing at letters that a user has drawn on a digitizer) and absolutely wrong in others (needlessly subjecting an innocent person to the judicial system and all of its current flaws). The difference comes down to four areas: how easily one can correct for false positives/negatives, how easy it is to recognize false output, how the data and results relate to objective reality, and how destructive bad results may be.

When Amazon product suggestions start dumping weird products on me because they think viewing pages is the same as showing interest in the product (vs. guffawing at weird product listings that a Twitter personality has found), the damage is limited. It's just a suggestion that I'm free to ignore. In particularly egregious scenarios, I've had to explain why weird NSFW results were showing up on my screen, but thankfully the person I'm married to trusts me.

When a voice dictation system gets the wrong words for what I am saying, fixing the problem is not hard. I can try again, or I can restart with a different modality.

In both of the previous cases, detecting false positives is easy because I know what the end result should be. These technologies are assistive, not generative. We don't use speech recognition technology to determine what we are attempting to say; we use it to speed up getting to a predetermined outcome.

The product suggestion and dictation issues are annoying when you encounter them because they are tied to an objective reality: finding products I want to buy, communicating with another person. They're only "annoying" because the mitigation is simple. Alternatively, you can dispense with the real world entirely. When a NN "dreams" up pictures of dogs melting into a landscape, that is completely disconnected from any real thing. You can't take the hallucinated dog pictures for anything other than generative art. The purpose of the pictures is to look at the weird results and say, "ah, that was interesting".

But facial recognition and "depixelization" fail on the first three counts, because they are attempts to reconnect the ML-generated results to a thing that exists in the real world, we don't know what the end results should be, and we (as potential users of the system) don't have any means of adjusting the output or escaping to a different system entirely. And when combined with the purpose of law enforcement, they fail on the fourth aspect, in that the modern judicial system in America is singularly optimized for prosecuting people: not determining innocence or guilt, but getting plea bargain deals out of people. Only 10% of criminal cases go to trial. 99% of civil suits end in a settlement rather than a judgment (with 90% of cases settling before ever going to trial). Even in just the case from the original article, this person and his family have been traumatized, and he has lost at least a full day of productivity, if not much, much more, from the associated fallout.

When a company builds and markets a product that harms people, they should be held liable. Due to the very nature of how machine vision and learning techniques work, they'll never be able to address these problems. And the combination of failure in all four categories makes them particularly destructive.

replies(1): >>dfxm12+av
7. goliat+ns[view] [source] [discussion] 2020-06-24 16:43:31
>>dfxm12+L2
I guess the argument would be that some companies are pushing (actively selling) the technology to PDs. In my experience listening to the sales pitch from our sales team for tech I helped develop, they would not only ignore the caveats engineering attached to the products, but outright sell features that were not done, not even on the roadmap, or simply physically impossible to implement as sold. With that in mind, I can see how the companies selling these solutions are responsible as well.
8. dfxm12+av[view] [source] [discussion] 2020-06-24 16:53:32
>>moron4+Ed
> When a company builds and markets a product that harms people, they should be held liable.

They should be. However, a company building and marketing a harmful product is a separate issue from cops using specious evidence to arrest a man.

Cops (QI aside) are responsible for the actions they take. They shouldn't be able to hide behind "the tools we use are bad", especially when (as a parent poster said) the tool is known to be bad in the first place and the cops still used it.

replies(2): >>moron4+ZA >>ncalla+aP
9. ncalla+Ex[view] [source] [discussion] 2020-06-24 17:03:34
>>dfxm12+L2
Sure, and that was one of the parties I listed as being at fault:

> purchasing officer at the police department

However, if the criminal justice system decides that this is an acceptable use of software, then the criminal justice system itself also bears responsibility.

The developer of the software also bears responsibility for developing, marketing, and selling the software to the police department.

I agree that the PD bears the majority of the culpability here, but I disagree that it bears every ounce of fault that could exist in this scenario.

10. moron4+ZA[view] [source] [discussion] 2020-06-24 17:18:21
>>dfxm12+av
This is why I wrote "also", not "instead".
11. ncalla+aP[view] [source] [discussion] 2020-06-24 18:18:22
>>dfxm12+av
> Cops (QI aside), are responsible for the actions they take. They shouldn't be able to hide behind "the tools we use are bad", especially when (as a parent poster said), the tool is known to be bad in the first place and the cops still used it.

But literally no one in this thread is arguing to not hold them responsible.

Everyone agrees that yes, the cops and PD are responsible. It's just that some people are arguing that there are other parties that also bear responsibility.

No one thinks the cops should be able to hide behind the fact that the tool is bad. I think these cops should be fired and sued for wrongful arrest. I think QI should be abolished so the wronged party can go after the house of the officer who made the arrest in civil court. I think the department should be on the hook for a large settlement payment.

But I also think the criminal justice system should enjoin future departments from using this known bad technology. I think we should also be mad at the technology vendors that created this bad tool.
