Any examples? You can't reverse it if the data is gone.
That's the problem: the data you think is gone isn't gone. The high frequencies are gone, but you left all the low frequencies, didn't you? You can read a face from the low frequencies.
Then again, maybe groups of people can be associated together, and a poor match is good enough given other clues.
So, much better to be safe than sorry.
I'm not sure if I had a particularly good point to make, other than that blurring does remove information that cannot easily be reversed. You can probably make very convincing reconstructions, but they might not look like the original person.
diminished in power.
It's only gone if it goes below the quantization threshold. Depends on the filter.
But instead, "reverse" is being used here to mean something like analyze or to apply countermeasures to defeat the obfuscation.
https://lifehacker.com/how-to-uncover-blurred-information-in...
One method is the Lucy-Richardson deconvolution [1], which is an iterative algorithm, and here [2] is the best practical example I could find right away. Unfortunately the text is not in English, but the illustrations and formulae might be enough to give some intuition of the process.
[1] https://en.wikipedia.org/wiki/Richardson%E2%80%93Lucy_deconv...
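In case a concrete sketch helps: here is a minimal 1D Richardson-Lucy loop in plain NumPy. This is a toy illustration; the spike signal, box PSF, and iteration count are made up, and real implementations (e.g. scikit-image's `restoration.richardson_lucy`) work on 2D images and handle edge effects and noise far more carefully.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Toy 1D Richardson-Lucy deconvolution (noiseless, no regularization)."""
    estimate = np.full_like(observed, 0.5)  # flat starting guess
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        # re-blur the current estimate and compare against the observation
        conv = np.convolve(estimate, psf, mode="same")
        relative_blur = observed / np.maximum(conv, 1e-12)
        # multiplicative update pushes the estimate toward agreement
        estimate = estimate * np.convolve(relative_blur, psf_mirror, mode="same")
    return estimate

# a single spike blurred by a 5-tap box filter...
signal = np.zeros(21)
signal[10] = 1.0
psf = np.ones(5) / 5
blurred = np.convolve(signal, psf, mode="same")

# ...gets sharpened back toward the spike over the iterations
restored = richardson_lucy(blurred, psf)
```

The point being that when the blur kernel is known (or can be guessed), iterating like this concentrates the smeared-out energy back where it came from.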
How about replacing each face with a "this is not a person" AI-generated face, then blur+mosaic? Or just a non-person face from a deepfake system that matches the facial expression?
Deconvolution was used to fix the Hubble Space Telescope.
https://en.wikipedia.org/wiki/Hubble_Space_Telescope#Flawed_...
Even more impressive: you can see around corners with similar reconstruction techniques.
https://graphics.stanford.edu/papers/dual_photography/
https://www.quantamagazine.org/the-new-science-of-seeing-aro...
E-13B is a bit of an ideal use case for this method because of the highly constrained character set used on checks and the unusually nonuniform density of E-13B. The same thing can be done on text more generally but gets significantly more difficult.
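To make the brute-force idea concrete, here's a toy sketch (NumPy assumed; the 5x5 "glyphs" are made-up stand-ins, not real E-13B shapes): when the character set is small and the blur is reproducible, you don't invert the blur at all, you blur every candidate and pick the closest match.

```python
import numpy as np

# Toy "glyphs": hypothetical 5x5 bitmaps standing in for a constrained
# character set (real E-13B glyphs would be rendered from the actual font).
GLYPHS = {
    "0": np.array([[0,1,1,1,0],
                   [1,0,0,0,1],
                   [1,0,0,0,1],
                   [1,0,0,0,1],
                   [0,1,1,1,0]], float),
    "1": np.array([[0,0,1,0,0],
                   [0,1,1,0,0],
                   [0,0,1,0,0],
                   [0,0,1,0,0],
                   [0,1,1,1,0]], float),
    "7": np.array([[1,1,1,1,1],
                   [0,0,0,1,0],
                   [0,0,1,0,0],
                   [0,1,0,0,0],
                   [0,1,0,0,0]], float),
}

def box_blur(img):
    # crude 3x3 box blur with zero padding -- the "obfuscation" step
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i+3, j:j+3].mean()
    return out

def identify(blurred):
    # brute force: blur every candidate the same way, pick the closest
    return min(GLYPHS, key=lambda c: ((box_blur(GLYPHS[c]) - blurred) ** 2).sum())
```

With arbitrary photos there is no such finite candidate list to enumerate, which is why the same attack gets much harder for faces.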
Don't tell people what not to do. Figure out why they're doing it, and provide what they actually want while still achieving the goals (here: security).
Very coarse mosaic, add noise, then blur seems reasonably safe, and doesn't have to look like crap.
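A minimal sketch of that mosaic-noise-blur pipeline, assuming NumPy and a grayscale float image; the block size, noise level, and 3x3 blur are illustrative choices, not vetted security parameters:

```python
import numpy as np

def redact(img, block=20, noise=0.1, rng=None):
    """Coarse mosaic, then noise, then a light blur (eye candy only)."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape  # grayscale assumed for simplicity
    # 1. mosaic: each block x block tile collapses to its mean --
    #    this is the step that actually destroys information
    mosaic = img.copy()
    for i in range(0, h, block):
        for j in range(0, w, block):
            mosaic[i:i+block, j:j+block] = img[i:i+block, j:j+block].mean()
    # 2. noise: randomize what little per-tile signal remains
    mosaic += rng.normal(0, noise, mosaic.shape)
    # 3. blur: cosmetic 3x3 smoothing of the hard tile edges
    padded = np.pad(mosaic, 1, mode="edge")
    out = np.zeros_like(mosaic)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i+3, j:j+3].mean()
    return out
```

The ordering matters: noise goes on after the mosaic so an attacker can't average it away within a tile, and the blur at the end adds nothing security-wise, it just looks nicer than raw blocks.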
If you remove high-frequency details, you in effect remove distinguishing features. The fact that it's possible to create an absolutely convincing high-detail image that, when blurred, gives the same "original" blurred image doesn't mean you have the correct deblurred image.
With not too fancy methods, I'm pretty sure you can make a blurred image that matches any number of different people.
I don't think this is a controversial statement either. In any case, this is a tangential discussion, since blurring to hide identities is a flawed method to begin with. With video recording, tracking, grouped individuals, etc, I'm sure reconstruction with good databases of likely subjects can have some surprising accuracy. So, better to avoid it altogether.
That said, one image, sufficiently blurred with a proper low-pass filter (i.e. not a softer Gaussian type, but one that removes frequency ranges altogether), will absolutely not contain the information needed to identify someone. The information literally isn't there. A large number of people are an equally good match, which means no one in particular is. But since, combined with the other clues I mentioned, it's a bad idea, then yes: it's a bad idea.
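This is easy to demonstrate with an ideal low-pass filter on a toy 1D signal (NumPy assumed, arbitrary cutoff of 8 bins): two signals that differ only above the cutoff produce numerically identical filtered output, so no deblurring method, however clever, can tell them apart afterwards.

```python
import numpy as np

def lowpass(x, keep):
    # ideal low-pass: zero out everything above the lowest `keep` bins
    spec = np.fft.rfft(x)
    spec[keep:] = 0
    return np.fft.irfft(spec, n=len(x))

rng = np.random.default_rng(1)
base = rng.random(64)
# two "different people": same low frequencies, different high frequencies
spec_a = np.fft.rfft(base)
spec_b = spec_a.copy()
spec_b[8:] = np.fft.rfft(rng.random(64))[8:]  # swap in unrelated detail
a = np.fft.irfft(spec_a, n=64)
b = np.fft.irfft(spec_b, n=64)

assert not np.allclose(a, b)                      # the originals differ
assert np.allclose(lowpass(a, 8), lowpass(b, 8))  # filtered versions identical
```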
I'd also like to know how mosaicing is reversible, since it demonstrably reduces the total available amount of information from e.g. 20x20 = 400 RGB values to a single RGB value. This is not sufficient for text where you can start brute-forcing individual options because the search space is small and inputs can be reconstructed precisely, but I'd like to see an explanation why you think this is reversible for photos (even without noise added). I'd also like to know how you want to remove random noise applied to each mosaic block.
The mosaicing is supposed to be the security step here. The blur is optional eye candy not expected to remove further information.
In particular, if you claim that a face mosaiced with a large "pixel" size (e.g. so that the typical face is 5x5 "mosaic blocks" big) is reversible, you're effectively claiming that you can perform facial recognition on noisy 5x5-pixel images.
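The information loss in a single mosaic block can be shown directly (toy NumPy sketch with made-up 20x20 tiles): any two tiles that happen to share the same mean collapse to the same mosaic value, so the 400-to-1 reduction really does erase their differences.

```python
import numpy as np

rng = np.random.default_rng(2)
# two completely different 20x20 tiles engineered to share the same mean
tile_a = rng.random((20, 20))
tile_b = rng.random((20, 20))
tile_b += tile_a.mean() - tile_b.mean()  # equalize the means

assert not np.allclose(tile_a, tile_b)           # the tiles differ everywhere
assert np.isclose(tile_a.mean(), tile_b.mean())  # yet mosaic to one value
```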
It doesn't matter though. As I've explained, it's far easier to come up with flawed schemes than to prove them insecure. Just because I can't explain why your specific scheme is insecure doesn't mean it stands a chance against real cryptographers.
Hence my suggestion to reduce a face to something like 5x5 blocks.
While I'm familiar with the crypto design problem, this is not a crypto algorithm. Sure, it can't be ruled out that someone in the future will find a way to do it, but the state of the art says that 5x5 pixels are not anywhere near enough to run face recognition.
And a solution that may be broken in the future is often much better than a solution that people don't use because it doesn't meet their needs, which in this case is not having fugly black boxes in their picture.