My hunch is that the examples aren't chosen for absurdity per se. Rather, if they demonstrated "a woman sitting in a chair reading", it would be really hard to tell whether the result was just a small modification of an image in the training data. If they demonstrated "a snake made out of corn", I'd have less concern about the model having a very close training example.