Of course, I really blame the AI/ML hucksters for part of this mess: they have sold us the idea of machines replacing, rather than augmenting, human decision making.
But no, it's AI, it's magical, and it must be right.
At what point can we decide that people in positions of power are not and will not ever be responsible enough to handle this technology?
Surely, as a society, we shouldn't keep naively assuming that police are "responsible," the way we have in the past?
Of course we shouldn't assume it, but we absolutely should require it.
Uncertainty is a core part of policing and can't be removed.
Having a picture, or even just a description of the face, is one of the most important pieces of information the police have for doing actual policing. You can be arrested for broadly matching a description if you happen to be in the vicinity.
Had the guy been convicted of anything just based on that evidence, this would be a scandal. As it is, a suspect is just a suspect and this kind of thing happens all the time, because humans are just as fallible. It's just not news when there's no AI involved.
The current best practice is to have a witness pick the suspect out of a six-photo lineup. Right off the bat, a witness who is effectively guessing (and feels obliged to pick someone) has a 1-in-6, roughly 17%, chance of landing on the "right" person. It's a terrible way to do things, and it's no surprise that people are wrongly convicted again and again on eyewitness testimony.
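As a quick sanity check on that 17% figure, here is a minimal simulation, assuming a six-photo lineup and a witness who picks uniformly at random and can't decline (both assumptions mine, for illustration only):

    # Monte Carlo check: a guessing witness who must pick one of six
    # lineup photos lands on the actual suspect about 1 time in 6.
    import random

    TRIALS = 100_000
    PHOTOS = 6
    hits = sum(random.randrange(PHOTOS) == 0 for _ in range(TRIALS))
    print(f"random-pick hit rate: {hits / TRIALS:.1%}")  # ~16.7%

In practice the rate depends on lineup procedure (whether the witness can say "none of these," whether the administrator is blinded, etc.), so 1-in-6 is just the baseline for a forced random guess.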
> Authorities said he was not carrying identification at the time of his arrest and was not cooperating. … an issue with the fingerprint machine ultimately made it difficult to identify the suspect, … A source said officials used facial recognition technology to confirm his identity.
https://en.wikipedia.org/wiki/Capital_Gazette_shooting#Suspe...
> Police, who arrived at the scene within a minute of the reported gunfire, apprehended a gunman found hiding under a desk in the newsroom, according to the top official in Anne Arundel County, where the attack occurred.
https://www.washingtonpost.com/local/public-safety/heavy-pol...
This doesn't really seem like an awesome use of facial recognition to me. He was already in custody after getting picked up at the crime scene. I doubt he would have been released if facial recognition didn't exist.
The technology is certainly not robust enough to be trusted to work correctly at that level yet. Even if it were improved, I think there is a huge moral issue with the police having the power to use it indiscriminately on the street.
Faces generated by AI should not count as 'probable cause' to go and arrest people. They should count as fantasy.
They don't:
https://wfdd-live.s3.amazonaws.com/styles/story-full/s3/imag...
There was further work involved: a witness identified the man in a photo lineup, and so on. The AI did not identify anyone; it gave a "best effort" match. All the actual mistakes were made by humans.