Yes.
This is exactly how LLMs work. For a given input, an LLM produces a probability distribution over possible next tokens, learned from its training data; sampling from that distribution is what makes the responses non-deterministic.
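As a toy sketch of where that non-determinism comes from: the model itself is a fixed function from input to a distribution over next tokens, and the randomness lives entirely in the sampling step. The vocabulary and scores below are made up for illustration.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution.
    Higher temperature flattens it; lower temperature sharpens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(tokens, logits, temperature=1.0):
    """Sample one token from the distribution. This step, not the
    frozen weights, is the source of run-to-run variation."""
    probs = softmax(logits, temperature)
    return random.choices(tokens, weights=probs, k=1)[0]

tokens = ["cat", "dog", "fish"]   # hypothetical toy vocabulary
logits = [2.0, 1.0, 0.1]          # fixed scores from a "frozen" model
print(sample_next_token(tokens, logits))
```

Run the last line twice and you can get different tokens even though the logits never change, which is the sense in which the same prompt yields different responses.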
LLMs aren’t intelligent. And it isn’t just that they don’t learn: they literally cannot learn from their experience in real time, because their weights are frozen once training ends.