zlacker

[parent] [thread] 61 comments
1. Ansil8+(OP)[view] [source] 2020-05-31 18:06:15
Some tips to maximise user privacy while deploying this tool:

1) The code, for now, runs locally. This is good. To avoid the possibility of the code being tampered with at a later date (for example, it could be modified to send copies of the image to a server), download the webpage and use the saved copy, not the live one.

2) Do not use the blur functionality. For maximum privacy, this should be removed from the app entirely. There are _a lot_ of forensic methods to reverse blur techniques.

3) Be wary of other things in the photograph that might identify someone: reflections, shadows, and so on.

4) Really a subset of 2 and 3, but be aware that blocking out faces is often not sufficient to anonymise the subject of the photo. Identifying marks like tattoos, or even something as basic as the shoes they are wearing, can be used to identify the target.

replies(4): >>Nightl+l >>samsta+ef >>_bxg1+Em >>derhue+Pz
2. Nightl+l[view] [source] 2020-05-31 18:09:34
>>Ansil8+(OP)
"There are _a lot_ of forensic methods to reverse blur techniques"

Any examples? You can't reverse it if the data is gone.

replies(8): >>forgot+Q >>chriss+x1 >>coopsm+V4 >>adrian+c6 >>fragme+x6 >>norriu+Va >>dahart+qe >>sly010+Sf
◧◩
3. forgot+Q[view] [source] [discussion] 2020-05-31 18:14:05
>>Nightl+l
The data may still be there, it just looks like it's gone.
replies(1): >>okamiu+J1
◧◩
4. chriss+x1[view] [source] [discussion] 2020-05-31 18:19:20
>>Nightl+l
> You can't reverse it if the data is gone.

That's the problem: the data you think is gone isn't gone. The high frequencies are gone, but you left all the low frequencies, didn't you? You can read a face from the low frequencies.
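You can see this with a few lines of numpy (a toy 1-D "row of pixels", with a box blur standing in for whatever blur the tool actually applies):

```python
import numpy as np

rng = np.random.default_rng(0)
row = rng.random(256)              # stand-in for one row of image pixels

kernel = np.ones(9) / 9            # 9-sample box blur
blurred = np.convolve(row, kernel, mode="same")

orig_f = np.abs(np.fft.rfft(row))
blur_f = np.abs(np.fft.rfft(blurred))

# low frequencies survive nearly untouched; high frequencies are crushed
low_ratio = blur_f[1:5].mean() / orig_f[1:5].mean()
high_ratio = blur_f[-20:].mean() / orig_f[-20:].mean()
print(low_ratio, high_ratio)       # low_ratio stays near 1, high_ratio is much smaller
```

The surviving low-frequency band is exactly the part a coarse face template matches against.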

replies(2): >>pbhjpb+oe >>steera+bg1
◧◩◪
5. okamiu+J1[view] [source] [discussion] 2020-05-31 18:20:49
>>forgot+Q
Blur is in effect a lowpass filter on the image: the high-frequency information is gone. Reconstruction based on domain knowledge (AI methods and the like) is unlikely to recover the distinguishing features between people well enough to avoid false positives when used to search for similar people.

Then again, maybe groups of people can be associated together, and a poor match is good enough given other clues.

So, much better to be safe than sorry.

I'm not sure I had a particularly good point to make, other than that blurring removes information in a way that cannot easily be reversed. You can probably make very convincing reconstructions, but they might not look like the original person.

replies(3): >>radars+u5 >>thr0wa+N9 >>pizza+Zx
◧◩
6. coopsm+V4[view] [source] [discussion] 2020-05-31 18:49:31
>>Nightl+l
The simplest method is just to scale the photo down: the facial details come back, especially if the photo was high resolution to begin with.
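Rough numpy sketch of the effect (random data standing in for a photo; the 5x5 box blur is an assumption, not what any particular tool uses):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((128, 128))       # stand-in for a high-resolution photo

# crude 5x5 box blur, the kind a redaction tool might apply
k = 5
pad = np.pad(img, k // 2, mode="edge")
blurred = sum(pad[dy:dy + 128, dx:dx + 128]
              for dy in range(k) for dx in range(k)) / (k * k)

def down(a, f=8):
    """Downscale by averaging f x f blocks."""
    return a.reshape(a.shape[0] // f, f, a.shape[1] // f, f).mean(axis=(1, 3))

# at full resolution the blur hides a lot; at 1/8 scale it hides almost nothing
full_err = np.abs(img - blurred).mean()
small_err = np.abs(down(img) - down(blurred)).mean()
print(full_err, small_err)         # small_err is far below full_err
```

In other words, the blurred photo downscaled looks almost exactly like an ordinary small photo of the same scene.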
◧◩◪◨
7. radars+u5[view] [source] [discussion] 2020-05-31 18:53:04
>>okamiu+J1
> The high frequency information is gone

diminished in power.

It's only gone if it goes below the quantization threshold. Depends on the filter.

replies(1): >>okamiu+2e2
◧◩
8. adrian+c6[view] [source] [discussion] 2020-05-31 18:57:16
>>Nightl+l
I think people are stumbling over the word "reverse" here. A common use of "reverse" is to undo. And you're 100% right that you cannot undo the destruction of information.

But instead, "reverse" is being used here to mean something like analyze or to apply countermeasures to defeat the obfuscation.

◧◩
9. fragme+x6[view] [source] [discussion] 2020-05-31 18:59:52
>>Nightl+l
Here's a very specific example, having to do with a much smaller data-set, the OCR font used for the routing and account number on cheques.

https://lifehacker.com/how-to-uncover-blurred-information-in...

replies(1): >>jcrawf+Nm
◧◩◪◨
10. thr0wa+N9[view] [source] [discussion] 2020-05-31 19:25:39
>>okamiu+J1
Blur deconvolution is not exactly a new method. Easy to find examples of reconstruction from blurred images. Eg, https://www.instantfundas.com/2012/10/how-to-unblur-out-of-f...
replies(1): >>okamiu+Zf2
◧◩
11. norriu+Va[view] [source] [discussion] 2020-05-31 19:35:07
>>Nightl+l
If you do something really simple like a Gaussian blur (which is a type of convolution), it might be possible to find the inverse convolution (de-convolution) and restore the original image with some accuracy.

One method is the Lucy-Richardson deconvolution [1], which is an iterative algorithm, and here [2] is the best practical example I could find right away. Unfortunately the text is not in English, but the illustrations and formulae might be enough to give some intuition of the process.

[1] https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconv...

[2] https://habr.com/en/post/136853/
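The core update rule is simple enough to sketch in a few lines of numpy (toy 1-D signal and box PSF; a sketch of the idea in [1], not production code):

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=50):
    """Minimal 1-D Richardson-Lucy deconvolution, numpy only."""
    psf_flip = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(iters):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

# toy example: two sharp spikes smeared by a 7-sample box blur
truth = np.zeros(64)
truth[20], truth[27] = 1.0, 0.5
psf = np.ones(7) / 7
blurred = np.convolve(truth, psf, mode="same")

restored = richardson_lucy(blurred, psf)
# the iteration pulls the smeared energy back into sharp peaks
print(np.abs(blurred - truth).sum(), np.abs(restored - truth).sum())
```

Real photos add noise and an unknown PSF, which is where the blind-deconvolution variants in the linked article come in.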

replies(2): >>buzzie+Jd >>barbeg+Jm1
◧◩◪
12. buzzie+Jd[view] [source] [discussion] 2020-05-31 19:56:24
>>norriu+Va
https://github.com/Y-Vladimir/SmartDeblur

http://smartdeblur.net/

replies(1): >>norriu+xP
◧◩◪
13. pbhjpb+oe[view] [source] [discussion] 2020-05-31 20:01:55
>>chriss+x1
If you blur then mosaic, or vice-versa, then presumably you get rid of the low and high frequencies? Depending on the detail shown in the original image either, or both, might remove enough information to render the image anonymised.

How about replace each face with a "this is not a person" AI generated face, then blur+mosaic. Or just a non-person face using a deepfake system that matches the facial expression?

replies(2): >>chriss+vj >>wool_g+jf2
◧◩
14. dahart+qe[view] [source] [discussion] 2020-05-31 20:02:15
>>Nightl+l
The data isn’t usually gone, just spread out!

Deconvolution was used to fix the Hubble Space Telescope.

https://en.wikipedia.org/wiki/Hubble_Space_Telescope#Flawed_...

Even more impressive, you can see around corners with similar reconstruction techniques:

https://graphics.stanford.edu/papers/dual_photography/

https://www.quantamagazine.org/the-new-science-of-seeing-aro...

15. samsta+ef[view] [source] 2020-05-31 20:08:15
>>Ansil8+(OP)
So based on all the replies, can someone design the most resilient facemask, one that can never be understood by facial AIs?
replies(3): >>thephy+ag >>litera+ix1 >>herewu+zk2
◧◩
16. sly010+Sf[view] [source] [discussion] 2020-05-31 20:14:01
>>Nightl+l
https://www.youtube.com/watch?v=Vxq9yj2pVWk

Sorry.

◧◩
17. thephy+ag[view] [source] [discussion] 2020-05-31 20:16:31
>>samsta+ef
https://cvdazzle.com/

But remember that facial recognition is far from the only way to identify protesters. Assume that the full power of the DHS is there (drones, Stingrays / IMSI catchers, license plate readers)

◧◩◪◨
18. chriss+vj[view] [source] [discussion] 2020-05-31 20:42:08
>>pbhjpb+oe
Why do all these complicated things?

Just draw a black box over faces.

replies(1): >>tgsovl+1P
19. _bxg1+Em[view] [source] 2020-05-31 21:07:36
>>Ansil8+(OP)
A replacement for blur could just be black boxes. Seems easy and safe enough.
replies(3): >>Polyla+ZE >>jazzyj+ya1 >>tornat+YL6
◧◩◪
20. jcrawf+Nm[view] [source] [discussion] 2020-05-31 21:08:31
>>fragme+x6
Very minor but interesting nitpick: the font used on checks is not OCR (optical) but MICR (magnetic ink). The design objectives are different and different font families exist for the two purposes. MICR as used on checks (more properly called E-13B) bears unusual, distinctive character shapes emphasizing abnormally wide horizontal components due to the need for each character to have a distinctive waveform when read as density from left to right, essentially by a tape recorder read head. Fonts optimized for OCR are usually more normal looking to humans because they emphasize clear detection of lines instead.

E-13B is a bit of an ideal use case for this method because of the highly constrained character set used on checks and the unusually nonuniform density of E-13B. The same thing can be done on text more generally but gets significantly more difficult.

◧◩◪◨
21. pizza+Zx[view] [source] [discussion] 2020-05-31 22:27:24
>>okamiu+J1
I mean, if you have a prior probabilistic model for what a face looks like, you could combine that with standard deconvolution and get a scary good reconstruction I imagine
replies(1): >>okamiu+wt2
22. derhue+Pz[view] [source] 2020-05-31 22:38:32
>>Ansil8+(OP)
I opened https://github.com/everestpipkin/image-scrubber/issues/5 to discuss how to encourage people to use this tool responsibly. Please contribute your knowledge!
replies(1): >>nkbrd+BF
◧◩
23. Polyla+ZE[view] [source] [discussion] 2020-05-31 23:14:45
>>_bxg1+Em
Make sure they are at 100% opacity. A lot of people mess this up and use 90% opacity or similar, and then the original image can be recovered by messing with the colour levels.
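A toy illustration of why 90% opacity is worthless (single pixel, float math; real 8-bit images add rounding, but the recovery is still recognisable):

```python
import numpy as np

secret = np.array([183.0, 52.0, 99.0])   # RGB of the pixel you meant to hide

# a black box at 90% opacity: result = 0.9 * black + 0.1 * original
covered = 0.9 * 0.0 + 0.1 * secret       # looks essentially black on screen

# "messing with the levels" is just multiplying the values back up
recovered = covered / 0.1
print(recovered)                          # the hidden pixel comes right back
```

At 100% opacity the blend coefficient on the original is exactly zero, so there is nothing left to amplify.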
replies(1): >>t-writ+fM
◧◩
24. nkbrd+BF[view] [source] [discussion] 2020-05-31 23:18:53
>>derhue+Pz
Do you really think that images altered in any way, with all the EXIF stripped, can hold someone accountable? Even videos can't be trusted nowadays.
replies(4): >>Camero+6H >>heavys+9U >>bigiai+x11 >>bigiai+021
◧◩◪
25. Camero+6H[view] [source] [discussion] 2020-05-31 23:29:54
>>nkbrd+BF
AIUI this is mainly to be used by news publications, which need to continuously build their credibility regardless, rather than as evidence in a courtroom.
◧◩◪
26. t-writ+fM[view] [source] [discussion] 2020-06-01 00:13:25
>>Polyla+ZE
I've doxxed my own Reddit username on my Apple phone doing that exact thing. The black marker is not opaque, even after a few strokes over the username; you have to go over it many more times.
replies(1): >>aspenm+zP
◧◩◪◨⬒
27. tgsovl+1P[view] [source] [discussion] 2020-06-01 00:41:18
>>chriss+vj
Because the result is a lot more ugly.

Don't tell people what not to do. Figure out why they're doing it, and provide what they actually want while still achieving the goals (here: security).

Very coarse mosaic, add noise, then blur seems reasonably safe, and doesn't have to look like crap.

replies(1): >>Hello7+XS
◧◩◪◨
28. norriu+xP[view] [source] [discussion] 2020-06-01 00:47:13
>>buzzie+Jd
Yes, that's it, thank you! And here's the English version of the article I linked above: https://yuzhikov.com/articles/BlurredImagesRestoration1.htm
◧◩◪◨
29. aspenm+zP[view] [source] [discussion] 2020-06-01 00:47:30
>>t-writ+fM
Easier to select an area and delete it from the layer entirely so that a transparent hole is left. Then make sure you clean up EXIF and other metadata, or you may still have the original image in a thumbnail field at reduced fidelity.

Free online metadata viewer http://exif.regex.info

Powered by FOSS (Perl-based) https://exiftool.org
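For the curious, here is a rough pure-Python sketch of what stripping metadata means at the byte level; `strip_jpeg_metadata` is a hypothetical helper for baseline JPEGs only, and a maintained tool like the exiftool linked above is what you should actually use:

```python
def strip_jpeg_metadata(data: bytes) -> bytes:
    """Drop APP1 (EXIF/XMP) and COM segments from a baseline JPEG."""
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(data[:2])                 # keep the SOI marker
    i = 2
    while i < len(data) - 1:
        if data[i] != 0xFF:
            break                             # malformed input; stop early
        marker = data[i + 1]
        if marker == 0xDA:                    # SOS: copy the scan data verbatim
            out += data[i:]
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i:i + 2 + length]
        if marker not in (0xE1, 0xFE):        # drop APP1 (Exif/XMP) and COM
            out += segment
        i += 2 + length
    return bytes(out)
```

Note this does nothing about the thumbnail case if the thumbnail lives somewhere other than APP1, which is exactly why re-encoding from raw pixels is the safer habit.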

replies(1): >>girst+UW
◧◩◪◨⬒⬓
30. Hello7+XS[view] [source] [discussion] 2020-06-01 01:28:09
>>tgsovl+1P
"seems reasonably safe" seems like a terrible cryptographic analysis. in fact, given that we already know that both blurring and mosaicing are individually reversible, and noise is easily removable from a sufficiently wide mosaic, this seems like a particularly terrible algorithm. that's not the point though: any man can create an encryption algorithm that he himself cannot break. maybe you can come up with an obfuscation algorithm that cannot be trivially broken, but that doesn't mean it's even remotely a good idea.
replies(1): >>tgsovl+iL3
◧◩◪
31. heavys+9U[view] [source] [discussion] 2020-06-01 01:44:40
>>nkbrd+BF
Yes. There is a very low bar to pass to convince a jury that a defendant is guilty. 99% conviction rates should scare you.
◧◩◪◨⬒
32. girst+UW[view] [source] [discussion] 2020-06-01 02:25:40
>>aspenm+zP
Do not keep it transparent! The Gimp for example keeps the underlying colour data, and just sets the opacity to 0.

this bug (closed as Expected Behavior) has a demonstration: https://gitlab.gnome.org/GNOME/gimp/-/issues/4487
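The gotcha in one toy example (a numpy array standing in for an RGBA buffer):

```python
import numpy as np

# an RGBA pixel "erased" by setting alpha to 0, the way some editors do it
pixel = np.array([183, 52, 99, 255], dtype=np.uint8)
erased = pixel.copy()
erased[3] = 0                      # fully transparent: invisible in any viewer

print(erased[:3])                  # the RGB data is still sitting in the file
```

Anything that reads the raw channels, ignoring alpha, gets the "deleted" colour back, so flatten onto a solid background before exporting.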

replies(3): >>_bxg1+K21 >>Persei+zd1 >>aspenm+TS1
◧◩◪
33. bigiai+x11[view] [source] [discussion] 2020-06-01 03:33:20
>>nkbrd+BF
We know law enforcement uses parallel construction.

Even images without chain of evidence reliability might get you on "a list"...

Do you reckon cops invited to "run wild" with clearview ai _AREN'T_ gonna be running protester photos through it to see who to "profile"?

https://www.buzzfeednews.com/article/ryanmac/clearview-ai-co...

◧◩◪
34. bigiai+021[view] [source] [discussion] 2020-06-01 03:39:21
>>nkbrd+BF
Even being fairly familiar with deepfakes, I would very much prefer to avoid having to defend myself in court against prosecutors claiming to have "video evidence" of me murdering someone...
◧◩◪◨⬒⬓
35. _bxg1+K21[view] [source] [discussion] 2020-06-01 03:55:19
>>girst+UW
It blows my mind that there are so many ways to screw up something this simple.

Edit: to be clear I meant this as a commentary on the technology, not the people making the mistakes

replies(1): >>Polyla+Fb1
◧◩
36. jazzyj+ya1[view] [source] [discussion] 2020-06-01 06:11:41
>>_bxg1+Em
As soon as deepfakes and "thispersondoesnotexist" started happening, I wanted a tool that would replace everyone's face with an auto-generated one, just so I could do street photography without feeling like I was invading people's right to obscurity.
replies(4): >>kicksc+bb1 >>shavin+pw1 >>livq+zZ1 >>ikeyan+QS2
◧◩◪
37. kicksc+bb1[view] [source] [discussion] 2020-06-01 06:25:18
>>jazzyj+ya1
This is a cool idea. Oh to be a streamer with an auto generated face. Reminds me of A Scanner Darkly.
replies(1): >>m_eima+sr1
◧◩◪◨⬒⬓⬔
38. Polyla+Fb1[view] [source] [discussion] 2020-06-01 06:35:24
>>_bxg1+K21
I think there is a need for a dedicated offline image-privacy program. On a technical level it's very easy to preserve privacy; it's just that the tools people are using were built for other purposes (non-destructive editing is highly desirable in normal cases).

All the program has to do is scrub all EXIF data, provide a censor box/brush that is 100% black, and re-encode the image so there is no remaining unneeded data.

replies(1): >>wool_g+Wd2
◧◩◪◨⬒⬓
39. Persei+zd1[view] [source] [discussion] 2020-06-01 07:09:05
>>girst+UW
Is PPM a safe round-trip format to remove all metadata and transparency? I'd like to recommend it to a friend and as far as I know it really only contains RGB as text and has no extensions for exif or similar. But after so many gotchas, as listed here in the thread, I'm somewhat paranoid...
replies(1): >>jstanl+ew1
◧◩◪
40. steera+bg1[view] [source] [discussion] 2020-06-01 07:46:34
>>chriss+x1
Nit: when you blur, the high frequencies are not gone. They are damped.
◧◩◪
41. barbeg+Jm1[view] [source] [discussion] 2020-06-01 09:06:27
>>norriu+Va
Yes, this is possible before JPEG compression, because convolution removes fairly little information; but once you compress with JPEG, you remove the frequency components that make it reversible.
◧◩◪◨
42. m_eima+sr1[view] [source] [discussion] 2020-06-01 10:01:41
>>kicksc+bb1
Should be slowly morphing between different faces, maybe 10 minutes or so per transition. Should be fairly unsettling to watch! :D
◧◩◪◨⬒⬓⬔
43. jstanl+ew1[view] [source] [discussion] 2020-06-01 10:49:43
>>Persei+zd1
ASCII PPM supports comments, so it is possible that EXIF or other identifying information would get written into the comments by some tool.

I have only ever observed PPM comments right at the start of the file, so you could open it in a text editor and remove the comments from the start. Maybe check the very end of the file as well.

Binary PPM does not support comments, so that would be a better solution. PPM documentation here, you want possibly P3 or more likely P6 https://en.wikipedia.org/wiki/Netpbm#File_formats
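A comment-free P6 file is simple enough to write by hand; here's a minimal sketch (`write_p6` is a hypothetical helper, not an existing tool):

```python
import os
import tempfile

def write_p6(path, width, height, rgb_bytes):
    """Write a comment-free binary PPM (P6): a short ASCII header, then raw RGB."""
    assert len(rgb_bytes) == width * height * 3
    with open(path, "wb") as f:
        f.write(f"P6\n{width} {height}\n255\n".encode("ascii"))
        f.write(bytes(rgb_bytes))

# demo: a 2x1 image (one red pixel, one green pixel)
path = os.path.join(tempfile.gettempdir(), "demo.ppm")
write_p6(path, 2, 1, b"\xff\x00\x00\x00\xff\x00")
raw = open(path, "rb").read()
print(raw)    # header plus raw pixels, nowhere for metadata to hide
```

Since the format is just a header and raw samples, there is simply no field for EXIF, thumbnails, or editor state to survive in.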

◧◩◪
44. shavin+pw1[view] [source] [discussion] 2020-06-01 10:52:22
>>jazzyj+ya1
That's a really interesting idea. I'm not sure what the commercial value would be, but the artistic value (and gain in privacy) would be huge. I'm not sure what you'd do about identifying marks like tattoos, but perhaps that isn't the biggest concern when compared to faces.

Could you train a model with your own face as a start, and then run your photos through an existing consumer face-swap app? Or perhaps use a celebrity's likeness? I wonder how much the visual 'likeness' of a stranger is worth.

replies(1): >>jb1533+7e2
◧◩
45. litera+ix1[view] [source] [discussion] 2020-06-01 11:02:18
>>samsta+ef
Best bet is probably just using a normal looking facemask that is highly reflective in the spectrum the camera sees.
◧◩◪◨⬒⬓
46. aspenm+TS1[view] [source] [discussion] 2020-06-01 14:07:03
>>girst+UW
I didn't specify a program to use, but I did not know this. A step in my personal workflow I neglected to state is to flatten all layers, but I'm not sure what the best way to do that is, so I'm open to ideas for better approaches.

There should be a test suite for image editing applications which will validate the different ways of editing a file to see which ones work as expected and which do not. I’m thinking something similar to web standards test for browsers. Does something like this already exist?

◧◩◪
47. livq+zZ1[view] [source] [discussion] 2020-06-01 14:46:06
>>jazzyj+ya1
I've always been partial to the laughing man from Ghost in the Shell. This would be a perfect use case :P
◧◩◪◨⬒⬓⬔⧯
48. wool_g+Wd2[view] [source] [discussion] 2020-06-01 15:55:11
>>Polyla+Fb1
Good thought; it seems like something the EFF (or maybe the ACLU) would be interested in producing or publishing.
◧◩◪◨⬒
49. okamiu+2e2[view] [source] [discussion] 2020-06-01 15:56:15
>>radars+u5
True. I think the reasonable assumption would be a low-pass filter that removes high frequencies altogether. A gaussian filter wouldn't be a particularly good idea.
◧◩◪◨
50. jb1533+7e2[view] [source] [discussion] 2020-06-01 15:56:36
>>shavin+pw1
Commercial value may be for filmmakers who would no longer have to worry about getting waivers from people in the background of live shots. (Not a lawyer.)
replies(1): >>thr0w_+OK2
◧◩◪◨
51. wool_g+jf2[view] [source] [discussion] 2020-06-01 16:03:18
>>pbhjpb+oe
I would be worried that a generated fake face would be similar enough to the face of a real someone, somewhere, to get that person in trouble. This isn't a crisp portrait photo; a blurry cell phone video with a lot of activity and noise already kind of leaves an opening for mis-identification.
◧◩◪◨⬒
52. okamiu+Zf2[view] [source] [discussion] 2020-06-01 16:06:58
>>thr0wa+N9
I don't think de-blurring is a novel idea. I think newer methods that use machine learning can produce very good results, but the math of it is much older than any computer implementation.

If you remove high-frequency details, you in effect remove distinguishing features. That it is possible to create an absolutely convincing high-detail image that, if blurred, gives the same "original" blurred image doesn't mean you have the correct deblurred image.

With not-too-fancy methods, I'm pretty sure you can make a blurred image match any of multiple people.

I don't think this is a controversial statement either. In any case, this is a tangential discussion, since blurring to hide identities is a flawed method to begin with. With video recording, tracking, grouped individuals, etc., I'm sure reconstruction with good databases of likely subjects can have surprising accuracy. So, better to avoid it altogether.

That said, one image, sufficiently blurred with a proper low-pass filter (i.e. not a softer Gaussian type, but one that removes frequency ranges altogether), will absolutely not contain the information to identify someone. The information literally isn't there; a large number of people are an equally good match, and then no one is. But since, combined with the other methods I mentioned, it's a bad idea, then, yes, it's a bad idea.
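The difference between the two filters in a few lines of numpy (toy 1-D signal; the cutoff at bin 16 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
signal = rng.random(128)
spectrum = np.fft.rfft(signal)     # 65 frequency bins

# a "proper" low-pass: zero everything above the cutoff -- the data is gone
ideal = spectrum.copy()
ideal[16:] = 0.0

# a Gaussian-style roll-off only damps high frequencies -- tiny, not gone
gauss = spectrum * np.exp(-(np.arange(65) / 12.0) ** 2)

print(np.abs(ideal[16:]).max())    # exactly 0.0: nothing left to amplify
print(np.abs(gauss[16:]).max())    # small but nonzero: amplifiable in principle
```

Deconvolution can rescale those damped-but-nonzero coefficients back up; it can do nothing with coefficients that are identically zero.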

◧◩
53. herewu+zk2[view] [source] [discussion] 2020-06-01 16:26:29
>>samsta+ef
Once a face is captured that image data can be retained forever while the image recognition continually improves and then repeatedly reprocessed for matches. Best not to show your face at all if you don't want to be held accountable.
◧◩◪◨⬒
54. okamiu+wt2[view] [source] [discussion] 2020-06-01 17:07:07
>>pizza+Zx
You can get a scarily realistic-looking, highly detailed image that blurs to something really close to the original blurred image. Yet it won't look like the original image, and won't identify the person.
◧◩◪◨⬒
55. thr0w_+OK2[view] [source] [discussion] 2020-06-01 18:27:30
>>jb1533+7e2
Also not a lawyer, and US-based in case it varies by country.

Do you know if a waiver is needed in this case? My understanding is that I can walk down a sidewalk, around Disneyland, around a resort, and film anyone / anything in plain sight. (I don't do that, by the way...) In other words, assuming you're not climbing over railings etc., if you can see it with your eyes, you can film it or photograph it.

Wonder if anyone here (plenty of legal eagles I'm sure) can confirm this or correct this. We don't need to get bogged down in corner cases & rare exceptions... for example, I think I heard that in some states, if the police ask (demand?) that you stop recording, you have to, otherwise you're in violation of the law... but even as I type that, as an American, it just sounds wrong... but I don't know.

replies(1): >>mikepu+PX2
◧◩◪
56. ikeyan+QS2[view] [source] [discussion] 2020-06-01 19:09:23
>>jazzyj+ya1
A tool that automatically turns background faces into the faces of random animals would actually do quite well.
◧◩◪◨⬒⬓
57. mikepu+PX2[view] [source] [discussion] 2020-06-01 19:34:15
>>thr0w_+OK2
Also not a lawyer, but I think it mostly has to do with commercial use. Filming people at Disney for your Instagram followers is different from making a feature film and turning everyone standing around on a busy street into uncredited extras.

This particular site is with respect to Canada, but I'm pretty sure the same basic idea applies everywhere:

"When publishing photos for commercial purposes: You need the permission of every identifiable model in the photo, even if the photo was taken in a public space. For example, if a photo has 10 identifiable models in the photo, you would require a model release for each of them."

https://www.lawdepot.ca/law-library/faq/model-and-entertainm...

◧◩◪◨⬒⬓⬔
58. tgsovl+iL3[view] [source] [discussion] 2020-06-02 00:31:27
>>Hello7+XS
I was writing a HN comment, not a scientific paper, which is why I wrote "seems safe" instead of making stronger claims.

I'd also like to know how mosaicing is reversible, since it demonstrably reduces the total available amount of information from e.g. 20x20 = 400 RGB values to a single RGB value. That reduction is not sufficient protection for text, where you can brute-force individual options because the search space is small and inputs can be reconstructed precisely; but I'd like to see an explanation of why you think this is reversible for photos (even without noise added). I'd also like to know how you would remove random noise applied to each mosaic block.

The mosaicing is supposed to be the security step here. The blur is optional eye candy not expected to remove further information.

In particular, if you claim that a face mosaiced with a large "pixel" size (e.g. so that the typical face is 5x5 "mosaic blocks" big) can be reversed, you're effectively claiming that you can perform facial recognition on noisy 5x5 pixel images.
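Putting numbers on that reduction (numpy toy, random data in place of a face crop):

```python
import numpy as np

rng = np.random.default_rng(3)
face = rng.random((100, 100))      # random data standing in for a face crop

def mosaic(img, block=20):
    """Replace each block x block tile with its mean value."""
    h, w = img.shape
    small = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, block, axis=0), block, axis=1)

m = mosaic(face)                   # the whole "face" becomes 5x5 blocks
print(face.size, np.unique(m).size)   # 10,000 values in, 25 distinct values out
```

Whatever an attacker does downstream, those 25 averages are the entire input they have to work with.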

replies(1): >>Hello7+627
◧◩
59. tornat+YL6[view] [source] [discussion] 2020-06-02 22:43:08
>>_bxg1+Em
Even better, replace the face with Kim Jong-un and then blur it. When they apply those forensic techniques they'll discover that it was him all along!
◧◩◪◨⬒⬓⬔⧯
60. Hello7+627[view] [source] [discussion] 2020-06-03 00:42:33
>>tgsovl+iL3
according to https://www.bitsoffreedom.nl/2019/12/12/amazons-rekognition-..., current facial recognition software is able to distinguish faces in very blurry mosaics using statistics.

it doesn't matter though. as I've explained, it's far easier to come up with flawed schemes than prove them insecure. just because I can't explain why your specific scheme is insecure doesn't mean it stands a chance against real cryptographers.

replies(1): >>tgsovl+JE7
◧◩◪◨⬒⬓⬔⧯▣
61. tgsovl+JE7[view] [source] [discussion] 2020-06-03 07:08:21
>>Hello7+627
The 20x26 example is indeed scary, but in line with what was known about facial recognition. (It also becomes a bit less scary when you don't look at a zoomed-in version of the image.)

Hence my suggestion to reduce a face to something like 5x5 blocks.

While I'm familiar with the crypto design problem, this is not a crypto algorithm. Sure, it can't be ruled out that someone in the future will find a way to do it, but the state of the art says that 5x5 pixels are not anywhere near enough to run face recognition.

And a solution that may be broken in the future is often much better than a solution that people don't use because it doesn't meet their needs, which in this case is not having fugly black boxes in their picture.

replies(1): >>Hello7+Jja
◧◩◪◨⬒⬓⬔⧯▣▦
62. Hello7+Jja[view] [source] [discussion] 2020-06-04 00:44:48
>>tgsovl+JE7
this seems like a false dichotomy. there's nothing stopping you from using a pink oval, as long as it covers the face.