zlacker

1. synu+(OP) 2023-01-14 10:15:50
What is a 1080p MP4 video of a film if not simply a highly detailed, irreversible but guaranteed unique checksum of that original content?
2. cf141q+pi 2023-01-14 13:24:16
>>synu+(OP)
I think this is overstretching it. That would be a checksum that can be parsed by humans and carries the artistic value that serves as the basis for copyright claims. An actual checksum has no artistic value in itself and can't reproduce the original work.

Which is why this is framed as compression: it implies that SD fundamentally makes copies instead of (re)creating art. Leaving aside the issue of recreating forgeries of existing works, using the training data for the creation of new pieces should be well covered within the bounds of appropriation. Demanding anything more than filtering the output of SD for 1:1 reproductions of the training data is really pushing it.

edit: Checksums aren't necessarily unique, btw. See "Hash collisions".
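
A toy Python sketch of that point (the truncation to 32 bits is artificial, chosen so the birthday collision turns up in seconds; full SHA-256 collisions are believed infeasible):

    import hashlib

    # Keep only 32 bits of SHA-256: the pigeonhole principle guarantees
    # collisions exist, and the birthday bound finds one in ~2^16 tries.
    seen = {}
    i = 0
    while True:
        digest = hashlib.sha256(str(i).encode()).digest()[:4]
        if digest in seen:
            print(f"inputs {seen[digest]} and {i} share digest {digest.hex()}")
            break
        seen[digest] = i
        i += 1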

3. synu+lp 2023-01-14 14:32:54
>>cf141q+pi
Overfitting seems like a fuzzy area here. I could train a model on one image that could consistently produce an output no human could tell apart from the original. And of course, shades of gray from there.
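
A minimal sketch of the degenerate end of that spectrum (plain PyTorch, not Stable Diffusion; the "model" is just one free parameter per pixel, which is what one training image plus enough capacity collapses to):

    import torch

    target = torch.rand(64, 64, 3)  # stand-in for the single training image
    model = torch.zeros(64, 64, 3, requires_grad=True)  # degenerate "model"
    opt = torch.optim.SGD([model], lr=0.4)

    # Gradient descent on a single sample has a single global optimum:
    # an exact copy of that sample.
    for step in range(100):
        opt.zero_grad()
        loss = ((model - target) ** 2).sum()
        loss.backward()
        opt.step()

    # The error contracts by 0.2x per step, so after 100 steps the
    # "output" is pixel-for-pixel indistinguishable from the original.
    print((model - target).abs().max().item())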

Regarding your edit, what are the chances of a "hash collision" where two different movies end up with the same MP4 file as their hash? Seems wildly astronomical... impossible, even? That's why this hash method is so special, plus there's the built-in preview feature you can use to validate your hash against the source material, even without access to the original.
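
For scale, a back-of-envelope birthday bound, p ≈ 1 − exp(−k²/2^(n+1)) for k inputs and an n-bit digest (Python; the numbers are illustrative):

    import math

    def birthday_collision_prob(k, bits):
        # Probability that any two of k inputs share the same n-bit digest.
        return 1 - math.exp(-k * k / 2 ** (bits + 1))

    # One file per human who ever lived (~1e11) vs. a 256-bit digest:
    print(birthday_collision_prob(1e11, 256))  # prints 0.0 (true value ~4e-56)
    # A 1 GB MP4 treated as a "digest" is ~8e9 bits long: collisions are
    # impossible in practice exactly because the "hash" is big enough to
    # carry the whole movie.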

4. cf141q+9E 2023-01-14 16:29:52
>>synu+lp
Once you are down to one picture, collisions become feasible given the right environment and resolution of the image.
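
A quick pigeonhole check of the resolution point (Python; 8-bit RGB assumed, resolutions arbitrary):

    # Number of distinct images at a given resolution with 8-bit RGB.
    def image_space(w, h):
        return 2 ** (w * h * 24)

    print(image_space(1, 1))                 # 16777216 states: collisions easy
    print(image_space(64, 64).bit_length())  # 98305 bits, ~1e29592 states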

Pretty sure this is nitpicking about an overused analogy though.
