> "They never even asked him any questions before arresting him. They never asked him if he had an alibi. They never asked if he had a red Cardinals hat. They never asked him where he was that day," said lawyer Phil Mayor with the ACLU of Michigan.
When I was fired by an automated system, no one asked if I had done something wrong; they just asked me to leave. It's the same pattern here: if they had just checked his alibi, he would have been cleared. But the machine said it was him, so case closed.
Not too long ago, I wrote a comment here about this [1]:
> The trouble is not that the AI can be wrong; it's that we will rely on its answers to make decisions.
> When the facial recognition software combines your facial expression and your name, while you are walking under the bridge late at night, in an unfamiliar neighborhood, and you are black; your terrorist score is at 52%. A police car is dispatched.
Most of us here can be excited about facial recognition technology and still know that it's not something to be deployed in the field. It's by no means ready. We might even weigh the ethics before building it as a toy.
But that's not how it is being sold to law enforcement and other entities. The pitch is _Reduce crime in your cities. Catch criminals in ways never thought possible. Catch terrorists before they blow anything up._ It is sold as the ultimate decision maker.
Exactly what I thought when I read about this. It's not like humans are great at matching faces either; in fact, machines have been better than humans at facial recognition for over a decade now. I bet there are hundreds of people (of all races) in prison right now simply because they were misidentified by a human. Human memory, even in the absence of bias and prejudice, is pretty fallible.
There is a notion of ensembling in machine learning (the "mixture of experts" idea is a close relative): you combine two or more models that are not, by themselves, good enough to make a robust prediction but that make different kinds of mistakes, and you take the consensus estimate. The consensus can be better than any single model in isolation. The same should be done here: AI should be merely one signal among many, not a replacement for detective work. What's described in the article is just bad policing; the AI has very little to do with it.
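To make the point concrete, here's a minimal, purely hypothetical sketch (numpy only; every number is made up for illustration): three simulated models that are each only 70% accurate, with independent errors, reach roughly 78% accuracy by majority vote.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three independent "experts", each only 70% accurate.
# All figures here are illustrative, not from any real system.
n_samples = 100_000
n_models = 3
p_correct = 0.70

truth = rng.integers(0, 2, size=n_samples)  # ground-truth binary labels

# Each model outputs the true label with probability p_correct, otherwise
# the flipped label; errors are independent across models.
errors = rng.random((n_models, n_samples)) >= p_correct
predictions = np.where(errors, 1 - truth, truth)

# Consensus estimate: simple majority vote across the three models.
votes = predictions.sum(axis=0)
consensus = (votes >= 2).astype(int)

individual_acc = (predictions == truth).mean(axis=1)
consensus_acc = (consensus == truth).mean()

print("individual accuracies:", individual_acc)   # ~0.70 each
print("majority-vote accuracy:", consensus_acc)   # ~0.78 = 3p^2(1-p) + p^3
```

The gain depends entirely on the errors being uncorrelated: three signals that fail in the same way gain nothing from voting. That's the whole argument against treating a single face-match hit as the whole case, rather than as one input alongside alibis, witnesses, and ordinary detective work.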