It doesn't mean that. You could "find" Mickey in the latent space of any model using textual inversion and an hour of GPU time. He's just a few shapes.
(Main example: the artist Stable Diffusion 1 users most like to imitate isn't actually in the Stable Diffusion training images. His name just happens to work in prompts by coincidence.)
A latent space that contains every image contains every copyrighted image. But the concept of sRGB is not copyrighted by Disney just yet.
"Mickey" does work as a prompt, but if they took that word out of the text encoder he'd still be there in the latent space, and it wouldn't be hard to construct him out of a few circles and a pair of red shorts.
In any case, in the example images here, the AI clearly knows who Mickey is and used that knowledge to generate Mickey Mouse images. Mickey has got to be in the training data.
Of course that probably means those copyrighted images exist in some encoded form in the AI's training data or network weights, just as they do in our brains. Is that legal? With humans it's unavoidable, but that doesn't have to mean it's also legal for AI. And even though copyrighted images exist in some form in our brains, we know not to reproduce them and pass them off as original; the AI does exactly that. Maybe it needs a feedback mechanism to ensure its generated images don't look too much like copyrighted images from its training set. Maybe art-AI necessarily also has to become a bit of a legal-AI.
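That feedback mechanism could be sketched roughly like this. To be clear, nothing below comes from any real pipeline: the toy average-hash, the bit threshold, and the tiny "protected image" list are all stand-ins. A realistic filter would compare learned perceptual embeddings (CLIP-style) against a large reference database, but the structure of the check would be similar.

```python
# Toy sketch of a "does this look too much like a protected image?" filter.
# Images are represented as 2D lists of grayscale values (0-255) to keep
# the example self-contained; real systems would use perceptual embeddings.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length hash strings."""
    return sum(x != y for x, y in zip(a, b))

def too_similar(generated, protected_hashes, threshold=2):
    """Flag a generated image whose hash is within `threshold` bits of any protected hash."""
    h = average_hash(generated)
    return any(hamming(h, p) <= threshold for p in protected_hashes)

# A 4x4 "protected" image, a near-duplicate, and an unrelated gradient.
protected = [[200, 200, 10, 10],
             [200, 200, 10, 10],
             [10, 10, 200, 200],
             [10, 10, 200, 200]]
near_copy = [[190, 210, 20, 5],
             [205, 195, 15, 10],
             [5, 10, 210, 190],
             [15, 5, 195, 205]]
unrelated = [[i * 16 for i in range(4)] for _ in range(4)]

refs = [average_hash(protected)]
print(too_similar(near_copy, refs))   # True: same checkerboard structure
print(too_similar(unrelated, refs))   # False: hash differs by many bits
```

The point of the sketch is the shape of the check, not the hash: the generator produces an image, the filter compares it against protected references, and anything within a similarity threshold is rejected or regenerated. Where to set that threshold is exactly the legal question the comment raises.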
The Mickey Mouse case though is obviously bs, the training data definitely does just have tons of infringing examples of Mickey Mouse, it didn't somehow reinvent the exact image of him from first principles.