zlacker

[parent] [thread] 15 comments
1. hpoe+(OP)[view] [source] 2020-06-24 14:55:21
I don't think using facial recognition to help identify probable suspects is necessarily wrong, but arresting someone based on a facial-match algorithm alone is definitely going too far.

Of course, I partly blame the AI/ML hucksters for this mess; they have sold us the idea of machines replacing rather than augmenting human decision making.

replies(4): >>jordan+B >>jacque+r2 >>dafoex+i3 >>dx87+j3
2. jordan+B[view] [source] 2020-06-24 14:57:28
>>hpoe+(OP)
Those hucksters should be worried about the Supreme Court swatting away their business model, because that's where I see this headed.
replies(1): >>hnlmor+B1
3. hnlmor+B1[view] [source] [discussion] 2020-06-24 15:02:21
>>jordan+B
I don't think they'll worry about that. Even if that did happen, there are foreign markets that would still invest in this.
4. jacque+r2[view] [source] 2020-06-24 15:06:17
>>hpoe+(OP)
I think it is very wrong. Faces are anything but unique. Having a particular face should not make you a suspect. Only once actual policing has made you a suspect might this serve as a low-quality extra signal.
replies(1): >>gridlo+Xd
5. dafoex+i3[view] [source] 2020-06-24 15:09:22
>>hpoe+(OP)
In a world where some police forces don't use polygraph lie detectors because they are deemed too inaccurate, it baffles me that anyone would make an arrest based on a facial recognition hit from poor-quality data.

But no, it's AI, it's magical, and it must be right.

replies(1): >>treis+9v
6. dx87+j3[view] [source] 2020-06-24 15:09:23
>>hpoe+(OP)
Yeah, facial recognition can be useful in law enforcement, as long as it's used responsibly. There was a man who shot people at a newspaper where I lived; when apprehended, he refused to identify himself, and apparently the fingerprint machine wasn't working, so police used facial recognition to identify him.

https://en.wikipedia.org/wiki/Capital_Gazette_shooting

replies(3): >>YPCrum+Q5 >>rovolo+tX >>glenda+k81
7. YPCrum+Q5[view] [source] [discussion] 2020-06-24 15:19:43
>>dx87+j3
> as long as it’s used responsibly

At what point can we decide that people in positions of power are not and will not ever be responsible enough to handle this technology?

Surely as a society we shouldn’t continue to naively assume that police are “responsible” like we’ve assumed in the past?

replies(2): >>anthon+3d >>dx87+qL
8. anthon+3d[view] [source] [discussion] 2020-06-24 15:47:55
>>YPCrum+Q5
> Surely as a society we shouldn’t continue to naively assume that police are “responsible” like we’ve assumed in the past?

Of course we shouldn't assume it, but we absolutely should require it.

Uncertainty is a core part of policing that can't be removed.

9. gridlo+Xd[view] [source] [discussion] 2020-06-24 15:51:25
>>jacque+r2
> Having a particular face should not result in you being a suspect. Only once actual policing results in you becoming a suspect then this might be a low quality extra signal.

Having a picture, or even just a description, of a face is one of the most important pieces of information the police have for doing actual policing. You can be arrested for just broadly matching a description if you happen to be in the vicinity.

Had the guy been convicted of anything just based on that evidence, this would be a scandal. As it is, a suspect is just a suspect and this kind of thing happens all the time, because humans are just as fallible. It's just not news when there's no AI involved.

replies(1): >>jacque+Mo1
10. treis+9v[view] [source] [discussion] 2020-06-24 16:49:03
>>dafoex+i3
This seems similar to self-driving cars, where people hold the computer to much higher standards than they hold humans to. I don't have solid proof, but I suspect that facial recognition with a reasonable confidence threshold and reasonable source images is more accurate than eyewitness identification, if for no other reason than that the bar for a positive eyewitness ID is laughably low.

The current best practice is to have a witness pick out the suspect from 6 photos. It should be immediately obvious that right off the bat there's a 1-in-6 (about 17%) chance of the witness randomly picking the "right" person. It's a terrible way to do things, and it's no surprise that people are wrongly convicted again and again on eyewitness testimony.

replies(1): >>TheCoe+Fn3
11. dx87+qL[view] [source] [discussion] 2020-06-24 17:56:55
>>YPCrum+Q5
Agreed, I'm not saying we can currently assume they are responsible, but in some hypothetical future where reforms have been made and they can be trusted, I think it would be fine to use. I don't think we should use current bad actors to decide that a technology is completely off limits in the future.
12. rovolo+tX[view] [source] [discussion] 2020-06-24 18:49:24
>>dx87+j3
From the wiki article and the linked news articles, the police picked him up at the scene of the crime. He also had smoke grenades (used in the attack) when they found him.

> Authorities said he was not carrying identification at the time of his arrest and was not cooperating. … an issue with the fingerprint machine ultimately made it difficult to identify the suspect, … A source said officials used facial recognition technology to confirm his identity.

https://en.wikipedia.org/wiki/Capital_Gazette_shooting#Suspe...

> Police, who arrived at the scene within a minute of the reported gunfire, apprehended a gunman found hiding under a desk in the newsroom, according to the top official in Anne Arundel County, where the attack occurred.

https://www.washingtonpost.com/local/public-safety/heavy-pol...

This doesn't really seem like an awesome use of facial recognition to me. He was already in custody after getting picked up at the crime scene. I doubt he would have been released if facial recognition didn't exist.

13. glenda+k81[view] [source] [discussion] 2020-06-24 19:38:29
>>dx87+j3
I don't think there is such a thing as responsible use of facial recognition technology by law enforcement.

The technology is certainly not robust enough yet to be trusted to work correctly at that level. Even if it were improved, I think there is a huge moral issue with police having the power to use it indiscriminately on the street.

14. jacque+Mo1[view] [source] [discussion] 2020-06-24 21:07:16
>>gridlo+Xd
A face for which there is only a description is not going to help unless there are special identifying marks, or unless you get an artist or one of those identikit sets involved to reconstruct the face. An AI is just going to spit out some generic representation of what it was trained on rather than the specifics of an actual suspect's face.

Faces generated by AI means should not count as 'probable cause' to go and arrest people. They should count as fantasy.

replies(1): >>gridlo+lt1
15. gridlo+lt1[view] [source] [discussion] 2020-06-24 21:38:40
>>jacque+Mo1
> Faces generated by AI means should not count as 'probable cause' to go and arrest people.

They don't:

https://wfdd-live.s3.amazonaws.com/styles/story-full/s3/imag...

There was further work involved, there was a witness who identified the man on a photo lineup, and so on. The AI did not identify anyone, it gave a "best effort" match. All the actual mistakes were made by humans.

16. TheCoe+Fn3[view] [source] [discussion] 2020-06-25 15:00:20
>>treis+9v
You need a much higher standard of accuracy for facial recognition because it is applied indiscriminately to a large population. If it has 99.9% accuracy and you apply it to a population of 10,000 people, you will get on average 10 false positives.
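The base-rate arithmetic in that comment can be sketched as follows (a minimal illustration; the function name and numbers are only the comment's example, not anything from a real system):

```python
def expected_false_positives(false_positive_rate: float, population: int) -> float:
    """Expected number of innocent people flagged when a matcher is run
    indiscriminately over an entire population."""
    return false_positive_rate * population

# 99.9% accuracy => a 0.1% false-positive rate, scanned over 10,000 people.
fp = expected_false_positives(false_positive_rate=0.001, population=10_000)
print(round(fp))  # on average ~10 innocent people flagged
```

Note the asymmetry: the same 0.1% error rate that sounds negligible for a single one-off check scales linearly with the number of people scanned, which is why indiscriminate deployment needs a far stricter standard than case-by-case use.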
[go to top]