If someone finds a way to reverse a hash, I'd also argue that hashing has now become a form of compression.
I think in 5 billion images there's more than enough shared visual structure for the average cost per image to fall below a single byte. This is a lossy process: it doesn't need a complete copy of the source data, much as an MP3 doesn't contain most of the audio data fed into it.
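For what it's worth, the arithmetic checks out. A rough sketch, assuming (hypothetically) weights on the order of 4 GB and the ~5 billion images in a LAION-5B-scale training set:

```python
# Back-of-the-envelope check of the "under a byte per image" claim.
# Both figures are assumed round numbers, not exact measurements.
weights_bytes = 4e9   # ~4 GB of model weights (assumption)
num_images = 5e9      # ~5 billion training images (assumption)

bytes_per_image = weights_bytes / num_images
print(bytes_per_image)  # 0.8 bytes per image on average
```

So even if the model memorised everything, it would have well under one byte of capacity per training image, which is why anything it "stores" has to be extremely lossy and shared across images.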
I think the argument that SD revolves around lossless compression is quite an interesting one, even if the original code authors didn't realise that's what they were doing. It's the first good technical argument I've heard, at least.
All of those could've been prevented if the model had been trained on public domain images instead of random people's copyrighted work. Even if this lawsuit succeeds, I don't think image generation algorithms will be banned. Some AI companies will just have spent a shitton of cash failing to get away with copyright violation, but the technology can still work for art that's either in the public domain or licensed in such a way that AI models can be trained on it.
Many state-of-the-art compression algorithms are in fact based on generative models. But the thing is, the model weights themselves are not the compressed representation.
The trained model is the compression algorithm (or, more precisely, a component of one: it needs to be combined with some kind of entropy coding).
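To make that concrete: an ideal entropy coder spends about -log2 p(x) bits on a symbol the model assigns probability p(x), so the model's job is prediction, and the code lengths fall out of it. A toy sketch with a made-up symbol model standing in for the generative model:

```python
import math

# Hypothetical toy model over three symbols, standing in for the
# generative model's predictive distribution. The weights define the
# code; they are not themselves the compressed data.
model = {"sky": 0.5, "grass": 0.3, "cat": 0.2}

def ideal_code_length_bits(symbol, p=model):
    # An ideal entropy coder (e.g. arithmetic coding) spends
    # roughly -log2 p(symbol) bits on this symbol.
    return -math.log2(p[symbol])

for s in model:
    print(s, round(ideal_code_length_bits(s), 3))
# "sky" (p=0.5) costs exactly 1 bit; rarer symbols cost more.
```

The better the model predicts the data, the shorter the codes get, which is why state-of-the-art compressors are built on strong generative models.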
You could use Stable Diffusion to compress and store the training data if you wanted, but nobody is doing that.