>>dymk+(OP)
You're oversimplifying imagination. What is imagined could be related to something the agent has seen before, or it could not be. It could be entirely invented and novel, with no antecedent in the senses. Nor is it mere randomness added in. Imagining is something an agent does and is capable of. The fly in the ointment is still that ML models simply do not have agency in any fundamental way: they are programmed, and they are limited by that programming. That is what makes them, and computers generally, so effective as tools: they do exactly as they are programmed, which can't be said for humans.
We, as humans, might find the output imaginative, novel, or even surprising, but the ML model hasn't done anything more than follow through on its programming. The ML programmer simply didn't expect the output (or can't explain the programming that produced it) and is anthropomorphizing their own creation as a means of explanation.