As for the fact that it sometimes gets things wrong - sure, that doesn't mean it has actually learned every algorithm (in whatever model you have in mind). But the nice thing is that we now have this proof via category theory, which gives us a framework both to understand what has occurred and to consider how to align these systems to learn algorithms better.
What's a token?
Tokens exist because transformers don't work directly on bytes or on words: bytes would make sequences too long and processing too slow, while a word vocabulary would be huge, and many words would appear too rarely (or never) in training data. Subword tokens are a middle ground: a fixed, modest-sized set of symbols that can encode any input. As a rule of thumb for English text, 1 token is roughly 4 characters, or about three-quarters of a word.
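To make the idea concrete, here is a minimal sketch of subword tokenization using greedy longest-match lookup over a tiny hand-made vocabulary. The vocabulary and IDs are invented for illustration; real tokenizers (e.g. BPE) learn vocabularies of tens of thousands of entries from data.

```python
# Toy greedy longest-match tokenizer. The vocabulary below is made up;
# real LLM vocabularies are learned and contain ~50k-100k entries.
VOCAB = {"trans": 0, "form": 1, "er": 2, "s": 3, " ": 4,
         "t": 5, "r": 6, "a": 7, "n": 8, "o": 9, "f": 10, "m": 11, "e": 12}

def tokenize(text: str) -> list[int]:
    ids = []
    i = 0
    while i < len(text):
        # Greedily take the longest vocabulary entry matching at position i.
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in VOCAB:
                ids.append(VOCAB[piece])
                i += length
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")
    return ids

print(tokenize("transformers"))  # -> [0, 1, 2, 3]: "trans|form|er|s"
```

Note how a frequent word compresses into a few multi-character tokens, while rare strings fall back to single characters - that fallback is what lets a small symbol set encode any input.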
So tokens are the data type of input and output, and the unit of measure for billing and context size for LLMs.
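Since billing and context limits are denominated in tokens, the ~4 characters/token heuristic gives a quick back-of-envelope estimate. A sketch (the price per token below is a hypothetical example rate, not any provider's actual pricing):

```python
# Rough estimate: ~4 characters per token for typical English text.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

prompt = "Summarize the following article in three bullet points."
n = estimate_tokens(prompt)
cost = n * 2e-6  # hypothetical rate: $2 per million input tokens
print(f"~{n} tokens, ~${cost:.6f}")
```

The same arithmetic tells you whether an input will fit in a model's context window, since that limit is also counted in tokens.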
Your argument is the equivalent of saying humans can't do math because they rely on calculators.
In the end what matters is whether the problem is solved, not how it is solved.
(assuming, of course, that the "how" comes at a reasonable cost)