A computer can make this kind of mistake about literally any person who has a publicly available photo (which is almost everyone).
Also, facial recognition technologies are demonstrably and severely racially biased.
I still think it's insane. We have falling crime rates, and yet we keep arming ourselves as fast as we can. Humanity could live without face recognition and we wouldn't suffer any penalty for it. But no, people need to sell their evidently shitty ML work.
Yes, of course it is. Orders of magnitude more people could be negatively and undeservedly affected by this, for no other reason than that it's now cheap enough and easy enough for the authorities to use.
Just to give one example off the top of my head: in the future the police could stop you, take your picture, and automatically run it through their facial recognition database. Kind of like "stop and scan".
Or if street cameras get powerful enough (and they will), they could photograph you automatically while you're driving and have you pulled over.
Think of it as a "TSA system for the roads": a lot more people will be "randomly picked" off the road by these systems.
It's also poor practice to search a database using a photo or even DNA to go fishing for a suspect. A closest match will generally be found even if the actual perpetrator isn't in the database. I think on some level the authorities know this, which is why they don't seed the databases with their own photos and DNA.
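To make that concrete, here's a minimal sketch (Python, with made-up data and a hypothetical embedding setup) of why a nearest-match search always returns somebody, even when the real perpetrator isn't enrolled at all:

```python
import numpy as np

# Hypothetical illustration: a "database" of face embeddings for people
# who, by construction, have nothing to do with the crime in question.
rng = np.random.default_rng(0)
database = rng.normal(size=(100_000, 128))          # 100k enrolled faces
database /= np.linalg.norm(database, axis=1, keepdims=True)

# Probe embedding from the crime-scene photo; the true perpetrator
# is NOT enrolled, so any "match" below is by definition a false lead.
probe = rng.normal(size=128)
probe /= np.linalg.norm(probe)

similarities = database @ probe                      # cosine similarity
best = int(np.argmax(similarities))

# Without a strict similarity threshold, argmax always "finds" someone.
print(f"Closest match: person #{best}, similarity {similarities[best]:.3f}")
```

Unless the system enforces a strict similarity threshold and investigators actually treat everything below it as "no result", a search like this will always produce a lead.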
This is the story that gets attention, though, despite the technology representing an improvement in likely every metric you can measure.
The response is what is interesting to me. It triggers a 1984 reflex, with people attempting to reject a dramatic enhancement in law enforcement ostensibly because it is not perfect, or because they believe it is a threat to privacy. I think people who are rejecting it should dig deep into their assumptions and reasoning to examine why they are really opposed to technology like this.
I don't think anybody actually believes that.
I'm pretty sure the exact opposite is true: People expect AI to fail, because they see it fail all the time in their daily use of computers, for example in voice recognition.
> Worse, its reported confidence for an individual face may be grossly overstated, since that is based on all the data it was trained on, rather than the particular subset you may be dealing with.
At the end of the day, this is still human error. A human compared the faces and decided they looked alike enough to go ahead. The whole thing could've happened without AI; it's just that without AI, processing large volumes of data is infeasible.
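To illustrate the quoted point about overstated confidence, here's a tiny sketch with purely hypothetical numbers showing how an aggregate error rate can hide a much worse rate on a particular subset of faces:

```python
# Hypothetical numbers, purely to illustrate the quoted claim.
# Suppose the model was evaluated on a benchmark dominated by one demographic.
groups = {
    "group_A": {"share": 0.90, "false_match_rate": 0.001},   # well represented
    "group_B": {"share": 0.10, "false_match_rate": 0.020},   # poorly represented
}

# The headline number is a share-weighted average over the whole benchmark.
overall = sum(g["share"] * g["false_match_rate"] for g in groups.values())
print(f"Reported (aggregate) false match rate: {overall:.4f}")   # 0.0029
print(f"Rate for group_B specifically:        {groups['group_B']['false_match_rate']:.4f}")
# The headline figure looks roughly 7x better than what group_B actually experiences.
```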
Because a false positive ruins lives? Is that not sufficient? This man’s arrest record is public and won’t disappear. Many employers won’t hire you if you have an arrest record (regardless of conviction). His reputation is also permanently smeared. These records are permanently public; in fact, some counties publish weekly arrest records on their websites and in newspapers (not that newspapers matter much anymore).
Someday this technology may be better and work more reliably. We’re not there yet. Right now it’s like the early days of voice recognition from the ‘90s.
The probability of finding an innocent person with a face similar enough to fool a witness is much higher with AI.
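A rough back-of-the-envelope sketch shows why (the false match rate and database size below are assumptions for illustration, not figures from the article): even a tiny per-comparison error rate yields many innocent look-alikes once you search millions of faces.

```python
# Assumed per-comparison false match rate and database size (hypothetical).
false_match_rate = 1e-5          # 0.001% chance a random face "matches" the probe
database_size = 10_000_000       # e.g. a large mugshot or licence-photo database

expected_false_matches = false_match_rate * database_size
prob_at_least_one = 1 - (1 - false_match_rate) ** database_size

print(f"Expected innocent look-alikes per search: {expected_false_matches:.0f}")   # ~100
print(f"Probability of at least one false match:  {prob_at_least_one:.6f}")        # ~1.0
```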
I see this all the time when working with execs. I have to continually remind even very smart people with STEM undergrad and even graduate degrees that a computer vision system cannot magically see things that are invisible to the human eye.
"the computer said so" is way stronger than you would think.
(2) Your argument strikes me as somewhat similar to "I feel fine, why should I keep taking my medicine?". It's not exactly the same, since medicine is scientifically proven to cure disease while it's impossible to measure the impact of police on crime. But "things are getting better, so we should change what we're doing" is not a particularly sound logical argument.
Criminologists aren't certain whether surveillance has a positive or negative effect on crime; we have more than 40 studies with mixed results. What is certain is that this kind of surveillance isn't responsible for the falling crime rates described. Most of the data is from the UK. Currently, I don't think countries without surveillance fare worse on crime; maybe quite the contrary.
"what we're doing" is not equivalent to increasing video surveillance or generally increasing armament in civil spaces. It may be sound logic if you extend the benefit of the doubt but it may also just be a false statement.
Since surveillance is actually constitutionally forbidden in many countries, one could argue that deploying it would "increase crime".
By some other equally "sound" logic, it might just be a self-reinforcing private prison industry with an economic interest in keeping a steady supply of criminals. That would also be completely sound.
But all these discussions are quite dishonest, don't you think? I just don't want your fucking camera in my face.
When it comes to justice, human error is preferable, even if it is more frequent than the alternative. The more human, the better.
Humans can be held accountable.