zlacker

[return to "Facial Recognition Leads To False Arrest Of Black Man In Detroit"]
1. danso+02[view] [source] 2020-06-24 14:55:32
>>vermon+(OP)
This story is really alarming because as described, the police ran a face recognition tool based on a frame of grainy security footage and got a positive hit. Does this tool give any indication of a confidence value? Does it return a list (sorted by confidence) of possible suspects, or any other kind of feedback that would indicate even to a layperson how much uncertainty there is?

The issue of face recognition algorithms performing worse on dark faces is a major problem. But the other side of it is: would police be more hesitant to act on such fuzzy evidence if the top match appeared to be a middle-class Caucasian (i.e. someone who is more likely to take legal recourse)?
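The kind of feedback I'm asking about could be as simple as a ranked candidate list with similarity scores. Here's a toy sketch (all names, embedding sizes, and scores invented) of how a recognition system could surface its uncertainty instead of reporting a bare "positive hit":

```python
import numpy as np

# Illustrative sketch only: candidates ranked by embedding similarity.
# The gallery, 128-dim embeddings, and scores are all made up.

rng = np.random.default_rng(0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend gallery: name -> precomputed face embedding.
gallery = {f"person_{i}": rng.normal(size=128) for i in range(5)}

# Probe embedding computed from the (hypothetical) grainy frame.
probe = rng.normal(size=128)

# Rank every gallery entry by similarity to the probe.
ranked = sorted(
    ((name, cosine(probe, emb)) for name, emb in gallery.items()),
    key=lambda t: t[1],
    reverse=True,
)

for name, score in ranked:
    print(f"{name}: {score:+.3f}")

# A sane UI would show the whole ranked list and flag a top match whose
# score falls below some validated threshold, rather than presenting
# rank #1 as a definitive identification.
```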

◧◩
2. Pxtl+23[view] [source] 2020-06-24 15:00:13
>>danso+02
Interesting and related: a team made a neat "face depixelizer" that takes a pixelated image and uses machine learning to generate a face that should match it.

What's hilarious is that it makes faces that look nothing like the original high-resolution images.

https://twitter.com/Chicken3gg/status/1274314622447820801

◧◩◪
3. mywitt+s3[view] [source] 2020-06-24 15:02:17
>>Pxtl+23
I wonder if this is trained on the same, or similar, datasets.
◧◩◪◨
4. jcims+sA1[view] [source] 2020-06-24 22:18:01
>>mywitt+s3
One of the underlying models, PULSE, was trained on CelebAHQ, which is likely why the results look mostly white. StyleGAN, which was trained on the much more diverse (but sparser) FFHQ dataset, does come up with a much more diverse set of faces [1]... but the authors couldn't get PULSE to converge very closely on the pixelated subjects, so they went with CelebA [2].

[1] https://github.com/NVlabs/stylegan
[2] https://arxiv.org/pdf/2003.03808.pdf (ctrl+f ffhq)
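Whatever the training set, the deeper reason the depixelizer can't recover the original is that downscaling is many-to-one. A small sketch (toy images, 4x4 average pooling standing in for pixelation) shows that very different "high-res" inputs can produce the exact same pixelated image, so any upscaler must hallucinate detail:

```python
import numpy as np

# Why depixelization is ill-posed: average pooling is many-to-one,
# so distinct high-res images map to the identical low-res image.
# A PULSE-style method can only invent *a* plausible face, not *the* face.

def pixelate(img, block=4):
    h, w = img.shape
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

rng = np.random.default_rng(42)
a = rng.random((16, 16))

# Build a different image with the same block means: permuting values
# within each 4x4 block leaves the block average unchanged.
b = a.copy()
for i in range(0, 16, 4):
    for j in range(0, 16, 4):
        blk = b[i:i + 4, j:j + 4].ravel()  # copy of the block's values
        rng.shuffle(blk)
        b[i:i + 4, j:j + 4] = blk.reshape(4, 4)

print(np.allclose(pixelate(a), pixelate(b)))  # same pixelated image
print(np.array_equal(a, b))                   # different originals
```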

[go to top]