Even if it were correct 99% of the time, we need to recognize that software can make mistakes. It is a tool, and people need to be responsible enough to use it correctly. I think I agree with your general idea here, but putting all of the blame on the software strikes me as an incomplete assessment. Technically, the software isn't killing anyone; irresponsible users of it are.
Sure, but at this point we know how irresponsible users often are; we know this to be an absolute fact. If the fact of users' irresponsibility isn't the centerpiece of our conversations, then we're being incredibly irresponsible ourselves.
The material manifestations of how these tools will be used have to remain at the center if researchers place any value whatsoever on our ethical responsibilities.
I have written great software, and yet it sometimes had bugs or unintended consequences. I cannot imagine how I'd feel if it were to accidentally alter someone's life negatively like this.
"I guess the computer got it wrong" is a terrifying thing for a police officer to say.
It's beyond irresponsibility; it's actively malevolent. There unfortunately are police officers, as demonstrated by recent high-profile killings by police, who will use the thinnest of pretexts, like suspicion of paying with counterfeit bills, to justify the use of brutal and lethal force.
If such people are empowered by a facial recognition match, what's to stop them from similarly using that as a pretext for applying disproportionate brutality?
Even worse, an arrest triggered by a false positive match may be more likely to escalate to violence, because the person being apprehended would be rightfully upset at being targeted and could appear to be resisting arrest.
Irresponsible users, yes, but users who are using the software exactly as it was marketed to be used.
Facial recognition software doesn't have the level of reliability that control software for mechanical systems has. And if a mistake is made, the consequences to the LEO have historically been minimal. Shoot first and ask questions later has been deemed acceptable conduct, so why not implicitly trust the software? If it's right and you kill a terrorist, you're a hero. If it's wrong and you kill a civilian, the US Supreme Court has stated, "Where the officer has probable cause to believe that the suspect poses a threat of serious physical harm, either to the officer or to others, it is not constitutionally unreasonable to prevent escape by using deadly force." The software provides probable cause; the subject's life is thereby forfeit. From the perspective of the officer, it seems a no-brainer.
Do you work in a commercial software firm? Have you ever seen your salespeople talk with their customer contacts?
The salespeople and marketing departments at the firms that make this technology and target law enforcement markets are, 100%, full stop, absolutely making claims that you can trust the software to have full control over the situation, and you, the customer, should not worry about whether the software should or should not have that control.
Being able to use something "irresponsibly" and disclaim responsibility because AI made the decision is. a. selling. point. Prospective customers want. to. give. up. that. authority. and. that. responsibility.
Making the sort of decisions we ask this shit to make is hard, if you're a human, because it's emotionally weighty and fraught with doubt, and it should be, because the consequences of making the wrong decision are horrific. But if you're a machine, it's not so hard, because we didn't teach the machines to care about anything other than succeeding at clearly-defined tasks.
It's very easy to make the argument that the machines can't do much more, because that argument is correct given what tech we have currently. But that's not how the tech is sold--it becomes a miracle worker, a magician, because that's what it looks like to laypeople who don't understand that it's just a bunch of linear algebra cobbled together into something that can decide a well-defined question. Nobody's buying a lump of linear algebra, but many people are quite willing to buy a magical, infallible oracle that removes stressful, difficult decisions from their work, especially in the name of doing good.
tl;dr capitalism is a fuck. we can pontificate about the ethical use of Satan's toys as much as we like; all that banter doesn't matter much when they're successfully sold as God's righteous sword.
https://features.propublica.org/navy-accidents/uss-fitzgeral...
https://features.propublica.org/navy-uss-mccain-crash/navy-i...
Software allows us to work very efficiently because it speeds work up. It speeds us up just as well when we're fucking things up.
Consider that not everyone understands how machine learning, and specifically classifier algorithms, work. When a police officer is told the confidence level is above 75%, he's going to think that's a low chance of being wrong. He does not have the background in math to realize that, given a large enough real population being scanned via facial recognition, a 75% confidence level is utterly useless.
That reported 75% confidence is a score the model earned on its own benchmark data; it doesn't translate into the probability that any particular real-world match is correct, and there is no practical way to recalibrate it for the real population of an area short of testing against that entire population. When the system sweeps an area for a handful of suspects, the base rate of genuine matches is so low that even a small false positive rate produces far more false alerts than true ones. And none of that takes circumstances like low light or lens distortion into account. Accounting for those factors, the real confidence of a match would fall below 10% in nearly all real-world use cases.
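To make the base-rate problem concrete, here is a rough back-of-the-envelope sketch. Every number in it (population size, number of genuine suspects, false positive rate) is an assumption chosen for illustration, not a vendor figure:

    # Base-rate sanity check -- all numbers are illustrative assumptions.
    population     = 1_000_000  # faces scanned across a city
    real_suspects  = 1          # people in that crowd actually on the watchlist
    true_pos_rate  = 0.75       # the "75% confidence" read as a hit rate
    false_pos_rate = 0.01       # a generously low 1% false alarm rate

    true_alerts  = real_suspects * true_pos_rate                  # ~0.75
    false_alerts = (population - real_suspects) * false_pos_rate  # ~10,000

    precision = true_alerts / (true_alerts + false_alerts)
    print(f"Chance a given alert is the right person: {precision:.4%}")
    # prints roughly 0.0075% -- virtually every alert is a false match

The exact numbers don't matter; the point is that a per-match score cannot be read as the probability that the person in front of you is the suspect once the system is sweeping a huge population for a tiny number of targets.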
Now imagine that the same cop you have to explain this to has already been sold this system by people who work in sales and marketing. Any expectation that ALL police officers will correctly assess the system's results and behave accordingly fails to recognize that cops are human, and above all, that cops are not mathematicians or data scientists. Perhaps there are processes to give police officers actionable information and training that would normally avoid problems, but all it takes is one cop getting emotional about one possible match for any carefully designed system to fail.
Again, how often cops get emotional, or simply decide that even a 10% possibility that the person they are about to question might be dangerous is too high a risk, is unlikely to change. So providing them with a system that increases their number of actionable leads, and therefore their interactions with the public, can only increase the number of incidents where police end up brutalizing or even killing someone innocent.
The average human sucks at understanding probabilities.
Until we can prove that most people handling this system are capable of smart decision making, which the latest police scandals give us no reason to believe right now, those systems should not be used.
If your strategy is to get rid of all pretexts for police action, I don't think that is the right one. Instead we need to set a high standard of conduct and make sure it is upheld. If you don't understand a tool, don't use it. If you do something horrible while using a tool you don't understand, it is negligent/irresponsible/maybe even malevolent, because it was your responsibility to understand it before using it.
A weatherman saying there is a 90% chance of rain is not evidence that it rained. And I understand the fear that a prediction can be abused, and we need to make sure it isn't abused. But abolishing the weatherman isn't the way to do it.
Not at all.
> Instead we need to set a high standard of conduct and make sure it is upheld
Yes, but we should be real about what this means. The institution of law enforcement is rotten, which is why it protects bad actors to such a degree. It needs to be cleaved from its racist history and be rebuilt nearly from the ground up. Better training in interpreting results from an ML model won't be enough by a long shot.