1. danans+(OP) 2020-06-25 04:29:28
> Technically the software isn't killing anyone, irresponsible users of it are.

It's beyond irresponsibility: it's actively malevolent. Unfortunately there are police officers, as recent high-profile killings by police have demonstrated, who will use the thinnest of pretexts, like suspicion of paying with a counterfeit bill, to justify brutal and lethal force.

If such people are empowered by a facial recognition match, what's to stop them from using that match as a similar pretext for disproportionate brutality?

Even worse, an arrest triggered by a false-positive match may be more likely to escalate to violence: the person being apprehended would be rightfully upset at being targeted, and could therefore appear to be resisting arrest.

replies(2): >>harlan+1K >>dtwest+161
2. harlan+1K 2020-06-25 12:18:34
>>danans+(OP)
A former employer recently got a fraudulent restraining order against me. I’m homeless and encounter the police all the time. I consider the order a probable contributing factor to my death, which they are almost certainly pleased about. Nobody has ever seen me as violent in any way, but now I am in a national “workplace violence” protection order database, i.e. flagged as violent and/or unstable. I would rather continue my career than fight it. It seems like the kind of thing that could make people with less to lose turn violent. I feel anger and disappointment like never before. (OpenTable is the company; their engineering leadership are the drivers of this.)
3. dtwest+161 2020-06-25 14:28:13
>>danans+(OP)
My point was that this technology should not be used as evidence, and should not be grounds to take any forceful action against someone. If a cop abuses this, it is the cop's fault and we should hold them accountable. If the cop acted ignorantly because they were lied to by marketers, their boss, or a software company, those parties should be held accountable as well.

If your strategy is to get rid of all pretexts for police action, I don't think that is the right one. Instead we need to set a high standard of conduct and make sure it is upheld. If you don't understand a tool, don't use it. If you do something horrible while using a tool you don't understand, it is negligent/irresponsible/maybe even malevolent, because it was your responsibility to understand it before using it.

A weatherman saying there is a 90% chance of rain is not evidence that it rained. I understand the fear that a prediction can be abused, and we need to make sure it isn't. But abolishing the weatherman isn't the way to do it.

replies(1): >>danans+je1
4. danans+je1 2020-06-25 15:13:47
>>dtwest+161
> If your strategy is to get rid of all pretexts for police action, I don't think that is the right one.

Not at all.

> Instead we need to set a high standard of conduct and make sure it is upheld

Yes, but we should be real about what this means. The institution of law enforcement is rotten, which is why it protects bad actors to such a degree. It needs to be cleaved from its racist history and be rebuilt nearly from the ground up. Better training in interpreting results from an ML model won't be enough by a long shot.
