zlacker

[parent] [thread] 1 comment
1. Fillig+(OP)[view] [source] 2023-01-14 14:30:12
Compression down to two bytes per image?

You run into the pigeonhole argument: that level of compression can only work if there are at most 65,536 (2^16) distinct images in existence, total.
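
Quick sanity check of the arithmetic (just the counting argument, nothing model-specific):

    # Two bytes can take on at most 2**16 distinct values, so a fixed
    # 2-byte code can only distinguish that many different images.
    codes = 256 ** 2
    print(codes)  # 65536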

Certainly there’s a deep theoretical equivalence between intelligence and compression, but this scenario isn’t what anyone normally means by “compression”.

replies(1): >>Xelyne+Ui
2. Xelyne+Ui[view] [source] 2023-01-14 16:58:18
>>Fillig+(OP)
When gzip turns my 10k-character ASCII text file into a 2 KB archive, has it "compressed each character down to a fifth of a byte"? No, that's a misunderstanding of compression.
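
Rough sketch of what I mean (made-up repetitive text, so the exact ratio is just whatever it happens to be for this particular input):

    import gzip

    # Roughly 10,000 characters of highly repetitive ASCII text (illustrative input).
    text = ("the quick brown fox jumps over the lazy dog. " * 250)[:10_000]
    compressed = gzip.compress(text.encode("ascii"))

    # The "bytes per character" figure is just total size divided by character
    # count. No individual character is stored in a fifth of a byte; you need
    # the whole archive to recover any single character.
    print(len(compressed), "bytes total")
    print(len(compressed) / len(text), "bytes per character, amortized")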

Just like gzip, training Stable Diffusion certainly discards a lot of data, but without understanding what that transformation does to the entropy of the data, it's meaningless to say things like "two bytes per image", because (like gzip) you need the whole encoded dataset to recover any one image.

It's compressing many images into 10 GB of data, not a single image into two bytes. That is directly analogous to what people usually mean by "compression".
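
For what it's worth, the "two bytes per image" number is the same kind of amortized division (the image count here is my assumption, not something stated upthread):

    # Hypothetical figures: a 10 GB model divided over an assumed ~5 billion
    # training images works out to roughly 2 bytes per image, amortized.
    model_bytes = 10 * 1024**3
    images = 5_000_000_000
    print(model_bytes / images)  # ~2.15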
