Not what I would have expected from a 'one-shot'. Maybe self-supervised would be a more suitable term?
See also "zero-shot", "few-shot", etc.
It doesn't do it in one shot on the GPU either. It feeds outputs back into inputs over and over; by the time you see tokens as an end user, the clanker has already run through a bunch of iterations.
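To make the feedback loop concrete, here's a minimal sketch of autoregressive decoding. `next_token` is a hypothetical stand-in for a real forward pass; the point is only that each generated token is appended to the input and fed back in.

```python
def next_token(tokens: list[str]) -> str:
    # Hypothetical stub: a real model would run a forward pass over
    # `tokens` and sample the next token from the predicted distribution.
    canned = ["Hello", ",", " world", "<eos>"]
    return canned[min(len(tokens), len(canned) - 1)]

def generate(prompt_tokens: list[str], max_new_tokens: int = 16) -> list[str]:
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        tok = next_token(tokens)   # one iteration per generated token
        if tok == "<eos>":
            break
        tokens.append(tok)         # the output becomes part of the next input
    return tokens

print(generate([]))  # ['Hello', ',', ' world']
```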
There are two ways to read "one-shot" here:

1. Getting an LLM to do something based on a single example
2. Getting an LLM to achieve a goal from a single prompt with no follow-ups
I think both readings are equally valid (see the sketch below).
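Illustrated as plain prompt strings (no particular API assumed; the translation example is the usual one-shot demo format):

```python
# Reading 1: one-shot in the classic ML/prompting sense --
# exactly one worked example is shown before the real task.
one_shot_example_prompt = (
    "Translate English to French.\n"
    "Example: sea otter -> loutre de mer\n"   # the single "shot"
    "Now translate: cheese ->"
)

# Reading 2: one-shot in the conversational sense --
# a single prompt with no follow-ups, even though the model
# internally loops over many tokens (or agent steps) to get there.
one_shot_conversation_prompt = (
    "Build a small CLI todo app in Python and show me the full code."
)
```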
We're essentially trying to map 'traditional' ML terminology onto LLMs, so it's natural that it'll take some time to settle. I just thought that 'one-shot' isn't an ideal name for something that might go off into an arbitrarily long loop.