zlacker

[return to "Facial Recognition Leads To False Arrest Of Black Man In Detroit"]
1. ibudia+bP1[view] [source] 2020-06-25 00:21:16
>>vermon+(OP)
Here is a part that I personally have to wrestle with:

> "They never even asked him any questions before arresting him. They never asked him if he had an alibi. They never asked if he had a red Cardinals hat. They never asked him where he was that day," said lawyer Phil Mayor with the ACLU of Michigan.

When I was fired by an automated system, no one asked if I had done something wrong. They asked me to leave. If they had just checked his alibi, he would have been cleared. But the machine said it was him, so case closed.

Not too long ago, I wrote a comment here about this [1]:

> The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

> When the facial recognition software combines your facial expression and your name, while you are walking under the bridge late at night, in an unfamiliar neighborhood, and you are black; your terrorist score is at 52%. A police car is dispatched.

Most of us here can be excited about facial recognition technology and still know that it's not something to be deployed in the field. It's by no means ready. We might even weigh the ethics of building it as a toy in the first place.

But that's not how it is being sold to law enforcement and other entities. It's _Reduce crime in your cities. Catch criminals in ways never thought possible. Catch terrorists before they blow anything up._ It is sold as the ultimate decision maker.

[1]: https://news.ycombinator.com/item?id=21339530

◧◩
2. zamale+OZ1[view] [source] 2020-06-25 01:58:51
>>ibudia+bP1
52% is little better than a coin flip. If you have a million individuals in your city, your confidence should be in the ballpark of 99.9999% (1 individual in 1 million). That has really been my concern with this: the software will report any facial match above 75% confidence. Apart from the fact that that is an appallingly low bar, no cop will pay attention to the percentage; they will just arrest or kill the individual on the spot.

Software can kill. At coin-flip confidence, this software will be wrong about half the Black people it flags, and wrong can mean dead.
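
To make the base-rate math concrete, here is a quick back-of-the-envelope sketch (the 99%/1% accuracy figures are hypothetical, purely to illustrate; I'm not claiming they match any real system):

    # Hypothetical numbers, purely illustrative -- not from any real system.
    population = 1_000_000        # people the system could flag in a city
    true_positive_rate = 0.99     # P(reported match | actually the suspect)
    false_positive_rate = 0.01    # P(reported match | not the suspect)

    # Expected hits from scanning everyone once, with exactly one real suspect:
    true_hits = 1 * true_positive_rate                   # ~0.99
    false_hits = (population - 1) * false_positive_rate  # ~10,000

    p_suspect_given_match = true_hits / (true_hits + false_hits)
    print(f"P(actually the suspect | match) ~= {p_suspect_given_match:.4%}")
    # -> about 0.0099%, i.e. almost every "match" is the wrong person

Even a matcher that sounds great on paper is wrong for nearly every person it flags against a city-sized population, which is why anything much short of 99.9999% confidence is meaningless here.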

◧◩◪
3. dtwest+c52[view] [source] 2020-06-25 02:54:23
>>zamale+OZ1
Software can kill if we put blind trust in it and give it full control over the situation. But we shouldn't do that.

Even if it were correct 99% of the time, we need to recognize that software can make mistakes. It is a tool, and people need to be responsible enough to use it correctly. I think I agree with your general idea here, but putting all of the blame on the software strikes me as an incomplete assessment. Technically the software isn't killing anyone, irresponsible users of it are.

◧◩◪◨
4. toofy+G72[view] [source] 2020-06-25 03:21:07
>>dtwest+c52
> Technically the software isn't killing anyone, irresponsible users of it are.

Sure, but at this point we know how irresponsible users often are; we know this to be an absolute fact. If users' irresponsibility isn't the centerpiece of our conversations, then we're being incredibly irresponsible ourselves.

The material manifestations of how these tools will be used have to remain at the center if researchers place any value whatsoever on our ethical responsibilities.

◧◩◪◨⬒
5. dzhiur+Fe2[view] [source] 2020-06-25 04:44:20
>>toofy+G72
Software flies rockets and planes, steers ships and cars, runs factories and just about everything else. Yet somehow LE shouldn't be using it because... they are dumb? Everyone else is smart tho.
◧◩◪◨⬒⬓
6. zumina+Jk2[view] [source] 2020-06-25 06:04:44
>>dzhiur+Fe2
If you fly a plane, drive a car or operate a factory, your livelihood and often your life depend on your constantly paying attention to the output of the software and making course-correcting adjustments when necessary. And the software itself often has safeguards against fatal errors built in. You rely on it in a narrow domain because it is highly reliable within that domain. For example, your vehicle's cruise control will generally not suddenly brake and swerve off the road, so you can relax your level of concentration to some extent. If it were only 52% likely to be maintaining your velocity and heading from moment to moment, you wouldn't trust it for a second.

Facial recognition software doesn't have the level of reliability that control software for mechanical systems has. And if a mistake is made, the consequences to the LEO have historically been minimal. Shoot first and ask questions later has been deemed acceptable conduct, so why not implicitly trust the software? If it's right and you kill a terrorist, you're a hero. If it's wrong and you kill a civilian, the US Supreme Court has stated, "Where the officer has probable cause to believe that the suspect poses a threat of serious physical harm, either to the officer or to others, it is not constitutionally unreasonable to prevent escape by using deadly force." The software provides probable cause, and the subject's life is thereby forfeit. From the perspective of the officer, it seems a no-brainer.

[go to top]