1) The code, for now, runs locally. This is good. To avoid the possibility of the code being tampered with at a later date (for example, it could be modified to send copies of the image to a server), download the webpage and use the saved copy, not the live one.
2) Do not use the blur functionality. For maximum privacy, it should be removed from the app entirely. There are _a lot_ of forensic methods for reversing blur techniques.
3) Be wary of other things in the photograph that might identify someone: reflections, shadows, and so on.
4) Really a subset of 2 and 3, but be aware that blocking out faces is often not sufficient to anonymise the subject of the photo. Identifying marks like tattoos, or even something as basic as the shoes they are wearing, can be used to identify the target.
Any examples? You can't reverse it if the data is gone.
That's the problem - the data you think is gone isn't gone. The high frequencies are gone... but you left all the low frequencies, didn't you? You can read a face from the low frequencies.
Then again, maybe groups of people can be associated together, and a poor match is good enough given other clues.
So, much better to be safe than sorry.
I'm not sure if I had a particularly good point to make, other than that blurring does remove information in a way that cannot easily be reversed. You can probably make very convincing reconstructions, but they might not look like the original person.
They're not gone, just diminished in power.
It's only gone if it goes below the quantization threshold. Depends on the filter.
But instead, "reverse" is being used here to mean something like "analyze" or "apply countermeasures to defeat the obfuscation".
https://lifehacker.com/how-to-uncover-blurred-information-in...
One method is Richardson-Lucy deconvolution [1], an iterative algorithm; [2] is the best practical example I could find right away. Unfortunately the text is not in English, but the illustrations and formulae might be enough to give some intuition of the process. A rough sketch of the update step follows below.
[1] https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconv...
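A minimal Python sketch of that update, assuming a known blur kernel (PSF), a grayscale float image, and a fixed iteration count; real tools add regularization and better edge handling:

```python
import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(observed, psf, iterations=30):
    """Iterative Richardson-Lucy deconvolution of a grayscale float image."""
    psf_mirror = psf[::-1, ::-1]                 # flipped PSF for the correction step
    estimate = np.full(observed.shape, 0.5)      # flat initial guess
    for _ in range(iterations):
        # Re-blur the current estimate with the assumed point spread function
        reblurred = convolve2d(estimate, psf, mode="same", boundary="symm")
        # The ratio of observed to re-blurred drives the multiplicative update
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate *= convolve2d(ratio, psf_mirror, mode="same", boundary="symm")
    return estimate
```

The catch is that you need a decent guess of the PSF (the blur kernel), which is exactly why standard, well-known blur filters are the easiest to attack.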
How about replacing each face with a "this is not a person" AI-generated face, then blur+mosaic? Or just a non-person face using a deepfake system that matches the facial expression?
Deconvolution was used to fix the Hubble Space Telescope.
https://en.wikipedia.org/wiki/Hubble_Space_Telescope#Flawed_...
Even more impressive, you can see around corners with similar reconstruction techniques:
https://graphics.stanford.edu/papers/dual_photography/
https://www.quantamagazine.org/the-new-science-of-seeing-aro...
But remember that facial recognition is far from the only way to identify protesters. Assume that the full power of the DHS is in play (drones, Stingrays / IMSI catchers, license plate readers).
E-13B is a bit of an ideal use case for this method because of the highly constrained character set used on checks and E-13B's unusually nonuniform density. The same thing can be done on text more generally, but it gets significantly more difficult.
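For the curious, the brute-force idea looks roughly like this. A sketch only: the `pixelate` and `best_match` helpers, the font, the block size, and the candidate list are all assumptions, and a real attack has to match the exact renderer, spacing, and alignment of the original:

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def pixelate(img, block=8):
    """Downsample then upsample with nearest neighbour, mimicking the mosaic."""
    w, h = img.size
    small = img.resize((max(1, w // block), max(1, h // block)), Image.BILINEAR)
    return small.resize((w, h), Image.NEAREST)

def best_match(target, candidates, font):
    """Render each candidate string, pixelate it, and return the closest match."""
    scores = {}
    for text in candidates:
        canvas = Image.new("L", target.size, 255)
        ImageDraw.Draw(canvas).text((0, 0), text, font=font, fill=0)
        diff = np.asarray(pixelate(canvas), float) - np.asarray(target, float)
        scores[text] = np.mean(diff ** 2)   # mean squared error against the mosaic
    return min(scores, key=scores.get)

# Hypothetical usage: guess a 4-digit number hidden behind a mosaic
# guess = best_match(mosaiced_crop, [f"{n:04d}" for n in range(10000)],
#                    ImageFont.load_default())
```

With E-13B's tiny character set and distinctive densities the search space is small; with arbitrary fonts and free-form text it explodes.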
Don't tell people what not to do. Figure out why they're doing it, and provide what they actually want while still achieving the goals (here: security).
A very coarse mosaic, added noise, then a blur seems reasonably safe, and it doesn't have to look like crap.
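Something like this, perhaps. A sketch with Pillow/NumPy; the region box, block size, and noise level are made-up parameters:

```python
import numpy as np
from PIL import Image, ImageFilter

def anonymize_region(img, box, block=24, noise_sigma=20):
    """Coarse mosaic + noise + cosmetic blur over one region of an image."""
    region = img.crop(box)
    w, h = region.size
    # 1) Coarse mosaic: downsample hard, then upsample with nearest neighbour
    small = region.resize((max(1, w // block), max(1, h // block)), Image.BILINEAR)
    mosaic = small.resize((w, h), Image.NEAREST)
    # 2) Add noise so each block no longer holds an exact average colour
    arr = np.asarray(mosaic).astype(np.float64)
    arr += np.random.normal(0, noise_sigma, arr.shape)
    noisy = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    # 3) Blur purely as eye candy; the mosaic and noise did the destruction
    img.paste(noisy.filter(ImageFilter.GaussianBlur(radius=4)), box)
    return img
```

The coarse mosaic and the noise are the actual information-destroying steps; the final blur only softens the look.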
Free online metadata viewer: http://exif.regex.info
Powered by FOSS (Perl-based): https://exiftool.org
This bug (closed as Expected Behavior) has a demonstration: https://gitlab.gnome.org/GNOME/gimp/-/issues/4487
Even images without chain of evidence reliability might get you on "a list"...
Do you reckon cops invited to "run wild" with Clearview AI _aren't_ gonna be running protester photos through it to see who to "profile"?
https://www.buzzfeednews.com/article/ryanmac/clearview-ai-co...
Edit: to be clear I meant this as a commentary on the technology, not the people making the mistakes
All the program has to do is scrub all EXIF data, provide a censor box/brush that is 100% black, and re-encode the image so there is no remaining unneeded data.
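As a sketch of those three steps with Pillow (the file names and box coordinates are placeholders):

```python
from PIL import Image, ImageDraw

def censor_and_scrub(src, dst, boxes):
    """Black out regions, then re-encode with no carried-over metadata."""
    img = Image.open(src)
    draw = ImageDraw.Draw(img)
    for box in boxes:                       # each box is (left, top, right, bottom)
        draw.rectangle(box, fill="black")   # 100% black, no residual detail
    # Copy only the pixels into a fresh image so EXIF, embedded thumbnails,
    # and other ancillary chunks from the source cannot tag along
    clean = Image.new(img.mode, img.size)
    clean.paste(img)
    clean.save(dst)

# Hypothetical usage: one face region in an example photo
censor_and_scrub("protest.jpg", "protest_clean.png", [(120, 80, 260, 240)])
```

Re-encoding into a fresh file also discards any embedded JPEG preview, which has leaked redacted content before.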
I have only ever observed PPM comments right at the start of the file, so you could open it in a text editor and remove the comments from the start. Maybe check the very end of the file as well.
Binary PPM does not support comments, so that would be a better solution. PPM documentation is here; you want possibly P3, or more likely P6: https://en.wikipedia.org/wiki/Netpbm#File_formats
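For the ASCII variant (P3) the whole file is text, so stripping comments can be as simple as the sketch below (file names are placeholders; don't run this on binary P6 files, where a '#' byte can legitimately occur in the raster):

```python
import re

def strip_ppm_comments(path_in, path_out):
    """Remove '#' comments from an ASCII PPM (P3) file."""
    with open(path_in, "rb") as f:
        data = f.read()
    # A comment runs from '#' to the end of the line; dropping it
    # leaves the newline, so the remaining tokens stay separated
    with open(path_out, "wb") as f:
        f.write(re.sub(rb"#[^\n]*", b"", data))
```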
Could you train a model with your own face as a starting point, and then run your photos through an existing consumer face-swap app? Or perhaps use a celebrity's likeness? I wonder how much the visual 'likeness' of a stranger is worth.
There should be a test suite for image-editing applications that validates the different ways of editing a file, to see which ones work as expected and which do not. I’m thinking of something similar to the web standards tests for browsers. Does something like this already exist?
If you remove high-frequency details, you in effect remove distinguishing features. That it is possible to create an absolutely convincing high-detail image which, when blurred, gives the same "original" blurred image doesn't mean you have the correct deblurred image.
With not-too-fancy methods, I'm pretty sure you can make a blurred image match any of several different people.
I don't think this is a controversial statement either. In any case, this is a tangential discussion, since blurring to hide identities is a flawed method to begin with. With video recording, tracking, grouped individuals, etc., I'm sure reconstruction with good databases of likely subjects can reach surprising accuracy. So, better to avoid it altogether.
That said, one image, sufficiently blurred with a proper low-pass filter (i.e. not a softer Gaussian type, but one that removes frequency ranges altogether), will absolutely not contain the information needed to identify someone. The information literally isn't there: a large number of people are an equally good match, and so no one is. But since, combined with the other methods I mentioned, it's a bad idea, then yes, it's a bad idea.
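To illustrate the difference, a hard low-pass via the FFT. A sketch assuming a grayscale float array; the cutoff fraction is an arbitrary choice:

```python
import numpy as np

def ideal_lowpass(img, cutoff_frac=0.05):
    """Zero out all spatial frequencies above a cutoff radius."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    # Unlike a Gaussian, which only attenuates high frequencies,
    # this sets them to exactly zero: nothing is left to recover
    f[radius > cutoff_frac * min(h, w)] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```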
Do you know if a waiver is needed in this case? My understanding is that I can walk down a sidewalk, around Disneyland, around a resort, and film anyone / anything in plain sight. (I don't do that, by the way...) In other words, assuming you're not climbing over railings etc., if you can see it with your eyes, you can film it or photograph it.
Wonder if anyone here (plenty of legal eagles, I'm sure) can confirm or correct this. We don't need to get bogged down in corner cases & rare exceptions... for example, I think I heard that in some states, if the police ask (demand?) that you stop recording, you have to, otherwise you're in violation of the law... but even as I type that, as an American, it just sounds wrong... but I don't know.
This particular site is with respect to Canada, but I'm pretty sure the same basic idea applies everywhere:
"When publishing photos for commercial purposes: You need the permission of every identifiable model in the photo, even if the photo was taken in a public space. For example, if a photo has 10 identifiable models in the photo, you would require a model release for each of them."
https://www.lawdepot.ca/law-library/faq/model-and-entertainm...
I'd also like to know how mosaicing is reversible, since it demonstrably reduces the total available amount of information from, e.g., 20x20 = 400 RGB values to a single RGB value. This isn't sufficient protection for text, where you can brute-force the individual options because the search space is small and inputs can be reconstructed precisely, but I'd like to see an explanation of why you think this is reversible for photos (even without noise added). I'd also like to know how you would remove random noise applied to each mosaic block.
The mosaicing is supposed to be the security step here. The blur is optional eye candy not expected to remove further information.
In particular, if you claim that a face mosaiced with a large "pixel" size (e.g. so that the typical face is 5x5 "mosaic blocks" big) is recoverable, you're effectively claiming that you can perform facial recognition on noisy 5x5-pixel images.
It doesn't matter, though. As I've explained, it's far easier to come up with flawed schemes than to prove them insecure. Just because I can't explain why your specific scheme is insecure doesn't mean it stands a chance against real cryptographers.
Hence my suggestion to reduce a face to something like 5x5 blocks.
While I'm familiar with the crypto design problem, this is not a crypto algorithm. Sure, it can't be ruled out that someone will find a way to do it in the future, but the state of the art says that 5x5 pixels are not anywhere near enough to run face recognition.
And a solution that may be broken in the future is often much better than a solution that people don't use because it doesn't meet their needs, which in this case means not having fugly black boxes in their picture.