therea (OP) | 2020-05-31 16:20:03
I guess that is more about hiding the identity of the photographer instead of the people pictured, but since browser fingerprinting works by combining seemingly innocuous data to identify individuals, that suggests that removing everything is probably safest. The camera model, software versions and other explicit labels are one obvious thing. The ISO, focal length, and shutter speed could narrow down what sensor was used and maybe what software, especially if you had multiple images from the same camera. Manual settings changes that the photographer made could show up in the same data. Possibly small amounts of clock drift, too? If you saw the news story that combined multiple security cameras' footage with phone video around the time of George Floyd's arrest, clock disagreement was something they saw, and the events in the images themselves allowed them to figure out the clock difference (which was huge there, like 20 minutes, but you could still do it for a smaller drift).
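
For the explicit labels, one blunt approach is to rebuild the image from its raw pixel data, so no EXIF block, maker notes, or ICC profile survives into the output. A minimal sketch, assuming Pillow is installed and with made-up filenames:

    # Rebuild the image from pixel data only; nothing from the original
    # file's metadata (EXIF, maker notes, ICC profile) is carried over.
    from PIL import Image

    src = Image.open("original.jpg")        # hypothetical input path
    clean = Image.new(src.mode, src.size)
    clean.putdata(list(src.getdata()))
    clean.save("stripped.jpg", quality=90)  # hypothetical output path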

Other things are external to the EXIF but could be combined with it. The sequence numbers in the filenames are the most obvious signal. The exact pixel dimensions (and therefore the precise megapixel count) might also tell you what sensor was used -- so maybe an anonymizer should resample the image to a new size.
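
Resampling and throwing away the original filename could be folded into the same pass. A sketch continuing from the stripping step above, again assuming Pillow, with an arbitrary scale factor chosen just for illustration:

    # Resample so the pixel dimensions no longer match a known sensor,
    # and write to a random filename so camera sequence numbers
    # (IMG_1234.jpg and so on) don't leak ordering across uploads.
    import secrets
    from PIL import Image

    img = Image.open("stripped.jpg")
    new_size = (int(img.width * 0.9), int(img.height * 0.9))  # arbitrary factor
    resized = img.resize(new_size, Image.LANCZOS)
    resized.save(f"{secrets.token_hex(8)}.jpg", quality=90)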

I guess these seem unlikely to be investigated, but then again nobody initially thought that telling every web server what fonts you have installed on your machine would be used against you, or that the existence of "Do Not Track" would make browsers easier to track. It just depends on how much it's worth to someone to write this stuff once -- then it's free for all future uses.

I just looked at https://en.wikipedia.org/wiki/Exif and there are lots of interesting possibilities.
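
If you want to see what is actually sitting in a particular file before deciding what to strip, dumping the tags is a quick audit. A rough sketch using Pillow's getexif() (sub-IFDs like GPS need an extra lookup, so this only shows the top-level tags):

    # Print every top-level EXIF tag Pillow can decode, by name where known.
    from PIL import Image, ExifTags

    exif = Image.open("original.jpg").getexif()
    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, tag_id)
        print(name, value)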
