I don't really see how this is different from "LLMs can't multiply 20-digit numbers" -- which, by the way, most humans can't do either. I tried it once with pen and paper and consistently made an error somewhere.
Doesn't that come down to allowing it to regurgitate training data directly? Surely it has seen dozens of such solutions.