Not interpolation, no. It's more like the N-gram autocomplete your phone used to use for typing and autocorrect suggestions. Attention is not N-gram, but you can kinda think of it as a sparsely compressed N-gram where N = 256k or whatever the context window size is. It's not technically accurate, but it will get your intuition closer than thinking of it as interpolation.
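To make the phone-keyboard analogy concrete, here's a toy trigram autocomplete in Python. It's just a sketch for intuition: the tiny corpus is made up, and real keyboards use far more data plus smoothing and backoff.

```python
from collections import Counter, defaultdict
import random

# Toy trigram "autocomplete": predict the next word from the previous two.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    counts[(a, b)][c] += 1

def suggest(prev_two):
    options = counts.get(tuple(prev_two))
    if not options:
        return None
    # Pick proportionally to how often each word followed this context.
    words, freqs = zip(*options.items())
    return random.choices(words, weights=freqs)[0]

print(suggest(["the", "cat"]))  # "sat" or "ate"
```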
The LLM uses attention and some other tricks (attention, it turns out, is not all you need) to build a probabilistic model of what the next token will be, which it then samples from. This is much more powerful than interpolation.
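The "build a distribution, then sample" step looks roughly like this, assuming you already have the model's raw scores (logits) over the vocabulary for the next token; the numbers here are made up for illustration.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0):
    # Turn raw scores into a probability distribution (softmax), then sample.
    logits = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Hypothetical logits for a 5-token vocabulary.
print(sample_next_token([2.0, 1.0, 0.1, -1.0, 0.5]))
```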