No LLM has seen anywhere near as much of this language as it has Python, and the available context is going to be mostly wordy rather than code-like (e.g. docs, specs, etc.).
Yes.
This is exactly how LLMs work: for a given input, an LLM produces a non-deterministic response by sampling from a probability distribution it learned from its training data.
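A minimal sketch of where that non-determinism comes from, in Python: at each step the model emits a probability distribution over tokens and the decoder samples from it. The logits below are made up for illustration; a real model would compute them from the input.

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Sample one token id from a model's output logits.

    Higher temperature flattens the distribution (more randomness);
    temperature near 0 approaches greedy, near-deterministic decoding.
    """
    scaled = logits / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(np.random.choice(len(probs), p=probs))

# Toy logits standing in for a real model's output over a 5-token vocabulary.
logits = np.array([2.0, 1.0, 0.5, 0.1, -1.0])
print([sample_next_token(logits, temperature=0.8) for _ in range(10)])
# Same input, different outputs across runs: that's the non-determinism.
```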
LLMs aren’t intelligent. And it isn’t just that they don’t learn: their weights are frozen at inference time, so they literally cannot learn from their experience in real time.