zlacker

[parent] [thread] 3 comments
1. moron4+(OP)[view] [source] 2020-06-24 15:55:23
There's also the company that built the software and marketed it to law enforcement.

Even disregarding the moral hazard of selecting an appropriate training set, the problem is that ML-based techniques are inherently biased. That's the entire point: to boil down a corpus of data into a smaller model that can generate guesses at results. ML is not useful without the bias.
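As a minimal sketch of what I mean (Python, assuming scikit-learn is available; the toy data and numbers are invented purely for illustration), even a simple classifier is just a compressed summary of its training corpus, and it will hand back a confident-looking guess for anything you feed it, including inputs that look nothing like what it was trained on:

    # A classifier is a lossy summary of its training data; it always emits a
    # guess and has no way to say "I don't know".
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy training corpus: two clusters of 2-D points, labelled 0 and 1.
    X_train = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
                         rng.normal(3.0, 0.5, (100, 2))])
    y_train = np.array([0] * 100 + [1] * 100)

    model = LogisticRegression().fit(X_train, y_train)

    # A point far from anything in the training data still gets a prediction.
    weird_point = np.array([[50.0, -50.0]])
    print(model.predict(weird_point))        # a hard 0/1 guess
    print(model.predict_proba(weird_point))  # a confident-looking split; no "unknown" option

That built-in willingness to guess is exactly the bias that makes the model useful, and exactly what becomes dangerous when the guess is treated as a fact about a real person.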

The trouble is that bias is OK in some contexts (guessing at letters that a user has drawn on a digitizer) and absolutely wrong in others (needlessly subjecting an innocent person to the judicial system and all of its current flaws). The difference lies in four areas: how easily one can correct for false positives/negatives, how easy it is to recognize false output, how the data and results relate to objective reality, and how destructive bad results may be.

When Amazon product suggestions start dumping weird products on me because they think viewing pages is the same as showing interest in the product (vs. guffawing at weird product listings that a Twitter personality has found), the damage is limited. It's just a suggestion that I'm free to ignore. In particularly egregious scenarios, I've had to explain why weird NSFW results were showing up on my screen, but thankfully the person I'm married to trusts me.

When a voice dictation system gets the wrong words for what I am saying, fixing the problem is not hard. I can try again, or I can restart with a different modality.

In both of the previous cases, detecting false positives is easy because I already know what the end result should be. These technologies are assistive, not generative. We don't use speech recognition technology to determine what we are attempting to say; we use it to speed up getting to a predetermined outcome.

The product suggestion and dictation issues are annoying when I encounter them because they are tied to an objective reality: finding products I want to buy, communicating with another person. They're only "annoying" because the mitigation is simple. Alternatively, you can just dispense with the real world entirely. When a NN "dreams" up pictures of dogs melting into a landscape, that is completely disconnected from any real thing. You can't take the hallucinated dog pictures for anything other than generative art. The purpose of the pictures is to look at the weird results and just say, "ah, that was interesting".

But facial recognition and "depixelization" fail on the first three counts: they are attempts to reconnect ML-generated results to a thing that exists in the real world, we don't know what the end results should be, and we (as potential users of the system) have no means of adjusting the output or escaping to a different system entirely. And when combined with the purpose of law enforcement, they fail on the fourth as well, because the modern judicial system in America is singularly optimized not for determining innocence or guilt but for prosecuting people and extracting plea bargain deals out of them. Only 10% of criminal cases go to trial. 99% of civil suits end in a settlement rather than a judgement (with 90% of cases settling before ever going to trial). Even in just the case of the original article, this person and his family have been traumatized, and he has lost at least a full day of productivity, if not much, much more from the associated fallout.
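To make the depixelization point concrete, here is a minimal sketch (NumPy only; the arrays are made up for illustration, not taken from any real system). Pixelation is a many-to-one operation: distinct originals can collapse to the identical low-resolution image, so anything that "reverses" it has to guess which of those originals you started with.

    import numpy as np

    def pixelate(img, block=4):
        # Average each block x block tile, the way a mosaic/pixelation filter does.
        h, w = img.shape
        return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

    rng = np.random.default_rng(1)

    # Two different "high-resolution" images...
    face_a = rng.random((16, 16))
    face_b = face_a + (rng.random((16, 16)) - 0.5) * 0.2

    # ...nudged so every 4x4 tile of face_b keeps the same average as face_a's.
    for i in range(0, 16, 4):
        for j in range(0, 16, 4):
            tile = face_b[i:i+4, j:j+4]
            tile += face_a[i:i+4, j:j+4].mean() - tile.mean()

    print(np.allclose(pixelate(face_a), pixelate(face_b)))  # True: identical pixelated output
    print(np.allclose(face_a, face_b))                      # False: different originals

A depixelizer has to pick one candidate out of that whole equivalence class, and it picks based on the faces it was trained on, not on the person who was actually in front of the camera.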

When a company builds and markets a product that harms people, they should be held liable. Due to the very nature of how machine vision and learning techniques work, the vendors will never be able to address these problems. And the combination of failure in all four categories makes their products particularly destructive.

replies(1): >>dfxm12+wh
2. dfxm12+wh[view] [source] 2020-06-24 16:53:32
>>moron4+(OP)
> When a company builds and markets a product that harms people, they should be held liable.

They should be; however, a company building and marketing a harmful product is a separate issue from cops using specious evidence to arrest a man.

Cops (QI aside) are responsible for the actions they take. They shouldn't be able to hide behind "the tools we use are bad", especially when (as a parent poster said) the tool is known to be bad in the first place and the cops still used it.

replies(2): >>moron4+ln >>ncalla+wB
3. moron4+ln[view] [source] [discussion] 2020-06-24 17:18:21
>>dfxm12+wh
This is why I wrote "also", not "instead".
4. ncalla+wB[view] [source] [discussion] 2020-06-24 18:18:22
>>dfxm12+wh
> Cops (QI aside) are responsible for the actions they take. They shouldn't be able to hide behind "the tools we use are bad", especially when (as a parent poster said) the tool is known to be bad in the first place and the cops still used it.

But literally no one in this thread is arguing against holding them responsible.

Everyone agrees that yes, the cops and PD are responsible. It's just that some people are arguing that there are other parties that also bear responsibility.

No one thinks the cops should be able to hide behind the fact that the tool is bad. I think these cops should be fired and sued for wrongful arrest. I think QI should be abolished so the wronged party can go after the house of the officer who made the arrest in civil court. I think the department should be on the hook for a large settlement payment.

But I also think the criminal justice system should enjoin departments from using this known-bad technology in the future. I think we should also be mad at the technology vendors that created this bad tool.
