zlacker

[parent] [thread] 29 comments
1. zamale+(OP)[view] [source] 2020-06-25 01:58:51
52% is little better than a coin flip. If you have a million individuals in your city, your confidence should be in the ballpark of 99.9999% (1 individual in 1 million). That has really been my concern with this: the software will report any facial match above 75% confidence. Apart from the fact that that is an appallingly low confidence, no cop will pay attention to the percentage; they'll immediately arrest or kill the individual.

Software can kill. This software can kill 50% of black people.
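The base-rate effect here can be sketched in a few lines of Python. All the rates below are hypothetical, purely to illustrate why per-comparison confidence collapses when one suspect is searched against a whole city:

```python
# Base-rate sketch with hypothetical rates: one real suspect is
# searched against a whole city's faces, so the prior that any
# given flagged person is the suspect is 1/population.
def posterior_match_prob(tpr, fpr, population):
    """P(the flagged person really is the suspect | an alert fired)."""
    prior = 1.0 / population
    p_alert = tpr * prior + fpr * (1.0 - prior)
    return tpr * prior / p_alert

# Even a system that is right 99% of the time per comparison:
p = posterior_match_prob(tpr=0.99, fpr=0.01, population=1_000_000)
print(f"{p:.4%}")  # ≈0.0099%: almost every alert is a false positive
```

Against a million-person population, even 99% per-comparison accuracy leaves essentially every alert a false positive, which is the point about needing ~99.9999%.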

replies(7): >>ssss11+n1 >>cortes+14 >>dtwest+o5 >>jml7c5+vo >>loup-v+xz >>threat+eJ >>sokolo+CP
2. ssss11+n1[view] [source] 2020-06-25 02:10:47
>>zamale+(OP)
What a great comment. This encapsulates my concerns about the topic eloquently. The technology is not ready for use.
3. cortes+14[view] [source] 2020-06-25 02:37:33
>>zamale+(OP)
It would have to be even higher than that level of accuracy, because every person is going to be 'tested' multiple times. If everyone's face is scanned 100 times a day, the number of false positives is going to be even higher.
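A rough sketch of the compounding (the per-scan false-positive rate is a made-up number, and scans are assumed independent, which they aren't quite in reality):

```python
# Repeated-testing sketch: even a tiny per-scan false-positive rate
# compounds when the same innocent person is scanned over and over.
def p_any_false_positive(fpr_per_scan, n_scans):
    """Chance of at least one false alert across n_scans scans."""
    return 1.0 - (1.0 - fpr_per_scan) ** n_scans

# 1-in-10,000 false-positive rate, 100 scans a day:
print(f"{p_any_false_positive(1e-4, 100):.1%}")       # ≈1.0% per day
print(f"{p_any_false_positive(1e-4, 100 * 30):.1%}")  # ≈25.9% per month
```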
replies(1): >>bigfud+P21
4. dtwest+o5[view] [source] 2020-06-25 02:54:23
>>zamale+(OP)
Software can kill if we put blind trust in it and give it full control over the situation. But we shouldn't do that.

Even if it was correct 99% of the time, we need to recognize that software can make mistakes. It is a tool, and people need to be responsible enough to use it correctly. I think I agree with your general idea here, but to put all of the blame on software strikes me as an incomplete assessment. Technically the software isn't killing anyone, irresponsible users of it are.

replies(9): >>ArtDev+R7 >>toofy+S7 >>czbond+Ba >>danans+Hd >>bryanr+le >>numpad+bj >>fivre+Lq >>tikima+Hb1 >>BiteCo+Pg1
5. ArtDev+R7[view] [source] [discussion] 2020-06-25 03:21:05
>>dtwest+o5
Any software developer will tell you how the marketing and sales departments will say or spin ANYTHING they can get away with to sell the product.
6. toofy+S7[view] [source] [discussion] 2020-06-25 03:21:07
>>dtwest+o5
> Technically the software isn't killing anyone, irresponsible users of it are.

Sure, but at this point we know how irresponsible users often are; we know this to be an absolute fact. If users' irresponsibility isn't the centerpiece of our conversations, then we're being incredibly irresponsible ourselves.

The material manifestations of how these tools will be used have to remain at the center if researchers place any value whatsoever on our ethical responsibilities.

replies(2): >>jessta+0d >>dzhiur+Re
7. czbond+Ba[view] [source] [discussion] 2020-06-25 03:51:53
>>dtwest+o5
You articulated very well what scares me about the next 15 years.

I have written great software, yet it sometimes had bugs or unintended consequences. I cannot imagine how I'd feel if it were to accidentally alter someone's life negatively like this.

8. jessta+0d[view] [source] [discussion] 2020-06-25 04:21:50
>>toofy+S7
Yep. There are so many psychology studies that show groupthink, people using statements of an authority as a way to remove individual responsibility, and people overriding their own perceptions to agree with an authority.

"I guess the computer got it wrong" is a terrifying thing for a police officer to say.

9. danans+Hd[view] [source] [discussion] 2020-06-25 04:29:28
>>dtwest+o5
> Technically the software isn't killing anyone, irresponsible users of it are.

It's beyond irresponsibility - it's actively malevolent. There unfortunately are police officers, as demonstrated by recent high-profile killings by police, who will use the thinnest of pretexts, like suspicion of paying with counterfeit bills, to justify the use of brutal and lethal force.

If such people are empowered by a facial recognition match, what's to stop them from similarly using that as a pretext for applying disproportionate brutality?

Even worse, an arrest triggered by a false positive match may be more likely to escalate to violence, because the person being apprehended would be rightfully upset at being targeted and would appear to be resisting arrest.

replies(2): >>harlan+IX >>dtwest+Ij1
10. bryanr+le[view] [source] [discussion] 2020-06-25 04:38:34
>>dtwest+o5
>Technically the software isn't killing anyone, irresponsible users of it are.

Irresponsible users, yes, but users who are using the software exactly as it was marketed to be used.

11. dzhiur+Re[view] [source] [discussion] 2020-06-25 04:44:20
>>toofy+S7
Software flies rockets, planes, ships, cars, factories and just about everything else. Yet somehow LE shouldn't be using it because... they are dumb? Everyone else is smart tho.
replies(3): >>toofy+Wg >>zumina+Vk >>fivre+lr
12. toofy+Wg[view] [source] [discussion] 2020-06-25 05:09:19
>>dzhiur+Re
Did you respond to the wrong comment? I don’t believe I implied anything close to what you just said.
13. numpad+bj[view] [source] [discussion] 2020-06-25 05:38:03
>>dtwest+o5
Maybe software is more like laws: judges generally aren't held guilty for issuing death sentences.
14. zumina+Vk[view] [source] [discussion] 2020-06-25 06:04:44
>>dzhiur+Re
If you fly a plane, drive a car or operate a factory, your livelihood and often your life depend on your constantly paying attention to the output of the software and making course-correcting adjustments when necessary. And the software itself often has the ability to avoid fatal errors built in. You rely on it in a narrow domain because it is highly reliable within that domain. For example, your vehicle's cruise control will generally not suddenly brake and swerve off the road, so you can relax your levels of concentration to some extent. If it were only 52% likely to be maintaining your velocity and heading from moment to moment, you wouldn't trust it for a second.

Facial recognition software doesn't have the level of reliability that control software for mechanical systems has. And if a mistake is made, the consequences to the LEO have historically been minimal. Shoot first and ask questions later has been deemed acceptable conduct, so why not implicitly trust the software? If it's right and you kill a terrorist, you're a hero. If it's wrong and you kill a civilian, the US Supreme Court has stated, "Where the officer has probable cause to believe that the suspect poses a threat of serious physical harm, either to the officer or to others, it is not constitutionally unreasonable to prevent escape by using deadly force." The software provides probable cause; the subject's life is thereby forfeit. From the perspective of the officer, it seems a no-brainer.

15. jml7c5+vo[view] [source] 2020-06-25 06:44:30
>>zamale+(OP)
Just to be clear, parent is describing fictional software, not the system in the article. You seem to be conflating the two.
replies(1): >>TheSpi+3v
16. fivre+Lq[view] [source] [discussion] 2020-06-25 07:10:59
>>dtwest+o5
> Software can kill if we put blind trust in it and give it full control over the situation. But we shouldn't do that.

Do you work in a commercial software firm? Have you ever seen your salespeople talk with their customer contacts?

The salespeople and marketing departments at the firms that make this technology and target law enforcement markets are, 100%, full stop, absolutely making claims that you can trust the software to have full control over the situation, and you, the customer, should not worry about whether the software should or should not have that control.

Being able to use something "irresponsibly" and disclaim responsibility because AI made the decision is. a. selling. point. Prospective customers want. to. give. up. that. authority. and. that. responsibility.

Making the sort of decisions we ask this shit to make is hard, if you're a human, because it's emotionally weighty and fraught with doubt, and it should be, because the consequences of making the wrong decision are horrific. But if you're a machine, it's not so hard, because we didn't teach the machines to care about anything other than succeeding at clearly-defined tasks.

It's very easy to make the argument that the machines can't do much more, because that argument is correct given what tech we have currently. But that's not how the tech is sold--it becomes a miracle worker, a magician, because that's what it looks like to laypeople who don't understand that it's just a bunch of linear algebra cobbled together into something that can decide a well-defined question. Nobody's buying a lump of linear algebra, but many people are quite willing to buy a magical, infallible oracle that removes stressful, difficult decisions from their work, especially in the name of doing good.

tl;dr capitalism is a fuck. we can pontificate about the ethical use of the Satan's toys as much as we like; all that banter doesn't matter much when they're successfully sold as God's righteous sword.

17. fivre+lr[view] [source] [discussion] 2020-06-25 07:18:58
>>dzhiur+Re
Were you asleep for all coverage of the 737 MAX MCAS, or the technical failures that contributed to multiple warships casually driving into other ships?

https://features.propublica.org/navy-accidents/uss-fitzgeral...

https://features.propublica.org/navy-uss-mccain-crash/navy-i...

Software allows us to work very efficiently because it can speed work up. It can speed us up just as well when we're fucking things up.

replies(1): >>dzhiur+uG
18. TheSpi+3v[view] [source] [discussion] 2020-06-25 07:54:55
>>jml7c5+vo
Amusing.

At this point facial recognition is fictional software.

19. loup-v+xz[view] [source] 2020-06-25 08:34:31
>>zamale+(OP)
I would hope that the software/ML engineers who wrote it know about probability theory, and why the prior probability should be set at 0.0001% or so.

So if we print 52% on the screen, that means we've already gathered around 20 bits of evidence (20 coin flips all coming up heads), at which point the suspicion would be real.
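A quick sketch of that arithmetic, taking the prior (0.0001% = 1e-6) and posterior (52%) from the numbers above:

```python
# "Bits of evidence" needed to move a 1-in-a-million prior to a
# 52% posterior, in the coin-flip framing: log2 of the likelihood
# ratio that takes prior odds to posterior odds.
import math

def bits_of_evidence(prior, posterior):
    """log2 of the likelihood ratio needed to move prior to posterior."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = posterior / (1.0 - posterior)
    return math.log2(posterior_odds / prior_odds)

print(round(bits_of_evidence(1e-6, 0.52)))  # 20 heads in a row, roughly
```

Moving 1-in-a-million odds to roughly even odds costs a factor of about a million in the likelihood ratio, and 2^20 ≈ 10^6.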

20. dzhiur+uG[view] [source] [discussion] 2020-06-25 09:40:29
>>fivre+lr
Airbus has been fly-by-wire for something like three decades. They did have some issues, but those were solved. So will the 737's be.
21. threat+eJ[view] [source] 2020-06-25 10:08:18
>>zamale+(OP)
> 99.9999%

Then the justice system would implode. Judicial policy is "software" too, and nobody holds the judiciary or police to that absurd level of excellence, even if we're talking about the death penalty.

replies(2): >>hrktb+s11 >>filoel+vO2
22. sokolo+CP[view] [source] 2020-06-25 11:12:42
>>zamale+(OP)
It’s not at all obvious to me that the accuracy threshold should scale with city size. Some small town shouldn’t use a system that is 1000x less accurate.
23. harlan+IX[view] [source] [discussion] 2020-06-25 12:18:34
>>danans+Hd
A former employer recently got a fraudulent restraining order against me. I’m homeless and encounter the police all the time. I consider it a probable contributing factor to my death, which they are almost certainly pleased about. Nobody in any way has ever seen me as violent, but now I am in a national “workplace violence” protection order database, aka. violent and/or unstable. I am homeless and would rather continue my career than fight it. It seems like it could make people with less to lose turn violent. I feel anger and disappointment like never before. (OpenTable is the company, their engineering leadership are the drivers of this).
24. hrktb+s11[view] [source] [discussion] 2020-06-25 12:44:23
>>threat+eJ
> even if we're talking about the death penalty.

And that's also the core argument for why some countries abolished the death penalty.

25. bigfud+P21[view] [source] [discussion] 2020-06-25 12:51:42
>>cortes+14
We shouldn’t assume those tests and errors are independent (they probably aren’t), but you are right that the overall error rate would be inflated.
26. tikima+Hb1[view] [source] [discussion] 2020-06-25 13:43:38
>>dtwest+o5
The major problem we have to contend with is that the ratio of appropriate to inappropriate police interactions is unlikely to change regardless of the system or official procedure, so any system that increases the number of police interactions must therefore increase the number of inappropriate police interactions.

Consider that not everyone understands how machine learning, and specifically classifier algorithms, work. When a police officer is told the confidence level is above 75%, he's going to think that's a low chance of being wrong. He does not have the background in math to realize that, given a large enough real population being classified via facial recognition, a 75% confidence level is utterly useless.

The reported 75% confidence level is only valid when scanning a population size that is at most as large as the training data set's. However, we have no way of decreasing that confidence level to be accurate when comparing against the real world population size of an area without simply making the entire real population the training set. And none of that takes circumstances like low light level or lens distortion into account. The real confidence of a match after accounting for those factors would put nearly all real world use cases below 10%.

Now imagine that the same cop you have to explain this to has already been sold this system by people who work in sales and marketing. Any expectation that ALL police officers will correctly assess the systems results and behave accordingly fails to recognize that cops are human, and above all, cops are not mathematicians or data scientists. Perhaps there are processes to give police officers actionable information and training that would normally avoid problems, but all it takes is one cop getting emotional about one possible match for any carefully designed system to fail.

Again, the frequency of cops getting emotional, or simply deciding that even a 10% possibility that someone they are about to question might be dangerous is too high a risk, is unlikely to change. So, providing them a system which increases their number of actionable leads, and therefore interactions with the public, can only increase the number of incidents where police end up brutalizing or even killing someone innocent.

27. BiteCo+Pg1[view] [source] [discussion] 2020-06-25 14:12:27
>>dtwest+o5
> But we shouldn't do that.

The average human sucks at understanding probabilities.

Until we can prove that most people handling this system are capable of smart decision making, which the latest police scandals give us no reason to believe right now, those systems should not be used.

28. dtwest+Ij1[view] [source] [discussion] 2020-06-25 14:28:13
>>danans+Hd
My point was that this technology should not be used as evidence, and should not be grounds to take any forceful action against someone. If a cop abuses this, it is the cop's fault and we should hold them accountable. If the cop acted ignorantly because they were lied to by marketers, their boss, or a software company, those parties should be held accountable as well.

If your strategy is to get rid of all pretexts for police action, I don't think that is the right one. Instead we need to set a high standard of conduct and make sure it is upheld. If you don't understand a tool, don't use it. If you do something horrible while using a tool you don't understand, it is negligent/irresponsible/maybe even malevolent, because it was your responsibility to understand it before using it.

A weatherman saying there is a 90% chance of rain is not evidence that it rained. And I understand the fear that a prediction can be abused, and we need to make sure it isn't abused. But abolishing the weatherman isn't the way to do it.

replies(1): >>danans+0s1
29. danans+0s1[view] [source] [discussion] 2020-06-25 15:13:47
>>dtwest+Ij1
> If your strategy is to get rid of all pretexts for police action, I don't think that is the right one.

Not at all.

> Instead we need to set a high standard of conduct and make sure it is upheld

Yes, but we should be real about what this means. The institution of law enforcement is rotten, which is why it protects bad actors to such a degree. It needs to be cleaved from its racist history and be rebuilt nearly from the ground up. Better training in interpreting results from an ML model won't be enough by a long shot.

30. filoel+vO2[view] [source] [discussion] 2020-06-25 23:19:25
>>threat+eJ
The justice system would implode if half the innocent people strong-armed into taking plea deals (with threats of much harsher sentences if they go to court) chose not to take them. That “software” is already buggy AF and needs some fundamental fixes. Setting a high standard for some crazy new AI stuff is a smaller change than fixing what’s already broken.