It is possible the model computes an approximate board state: one that differs from the true board state but is equivalent for most games, though not all. It would be interesting to train an adversarial policy to check this. From the KataGo attack we know this does happen for Go AIs: Go rules have the concept of a liberty, but the so-called pseudo-liberty is easier to calculate and is equivalent in most cases (though not all). In fact, human programmers also used pseudo-liberties to optimize their engines, and the adversarial attack found that Go AIs also use pseudo-liberties.
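To make the liberty vs. pseudo-liberty distinction concrete, here is a minimal illustrative sketch (mine, not from the thread): the two counts hit zero at exactly the same time, which is all a capture check needs, but they disagree on how many liberties a group actually has, which is where the "equivalent for most cases but not all" gap comes from.

```python
# Illustrative sketch: true liberties vs. pseudo-liberties for one Go chain.
# `board` maps (row, col) -> 'B' or 'W' for occupied points (empty points absent);
# `chain` is a set of coordinates forming one connected group.

def neighbors(p, size=19):
    r, c = p
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < size and 0 <= nc < size:
            yield (nr, nc)

def liberties(board, chain):
    """True liberty count: number of DISTINCT empty points adjacent to the chain."""
    libs = set()
    for stone in chain:
        for n in neighbors(stone):
            if n not in board:
                libs.add(n)
    return len(libs)

def pseudo_liberties(board, chain):
    """Pseudo-liberty count: every (stone, empty neighbor) pair counts separately,
    so a liberty shared by two stones of the chain is counted twice. Cheap to
    maintain incrementally, and zero exactly when the true count is zero, but
    otherwise the two numbers can diverge."""
    count = 0
    for stone in chain:
        for n in neighbors(stone):
            if n not in board:
                count += 1
    return count
```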
But the model could in principle just have learned a long list of rote heuristics that happen to predict notation strings well, without having made the inferential leap to a much simpler set of rules; a learner weaker than an LLM could well have got stuck at that stage.
I wonder how well a human (or a group of humans) would fare at the same task, and whether they could also successfully reconstruct chess with no prior knowledge of its rules or notation.
(OTOH, a GPT-3+-level LLM certainly does know that chess notation is related to something called "chess", which is a "game" and has certain "rules", but to what extent is it able to actually utilize that information?)
Yes - this is exactly what the probes show.
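For anyone unfamiliar with the method: the "probes" here are linear classifiers trained on the model's internal activations to read off the contents of each board square, so high held-out probe accuracy means the board state is linearly decodable from the activations rather than just implicit in move statistics. A minimal sketch of the idea, with layer choice, shapes, and label scheme as my own assumptions:

```python
# Minimal linear-probe sketch. Assumptions (not from the thread): `activations` is
# (n_positions, d_model) residual-stream activations cached at some layer, and
# `square_labels` is (n_positions, 64) with a class per square
# (e.g. 0 = empty, 1 = own piece, 2 = opponent piece).

import torch
import torch.nn as nn

d_model, n_squares, n_classes = 512, 64, 3

probe = nn.Linear(d_model, n_squares * n_classes)   # one linear classifier per square
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(activations, square_labels):
    logits = probe(activations).view(-1, n_squares, n_classes)
    loss = loss_fn(logits.reshape(-1, n_classes), square_labels.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```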
One interesting aspect is that it still learns to play when trained on blocks of move sequences starting from the MIDDLE of the game, so it seems it must be incrementally inferring the board state from whatever moves it sees, rather than simply replaying the game from the known starting position.
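As a concrete picture of that training setup (function and parameter names are mine, purely illustrative), the blocks would be cut from games something like this, so the model never sees the opening moves that produced the position:

```python
# Sketch: slice full games (lists of move tokens) into fixed-length blocks whose
# first move is well past the start of the game, with no preceding context.

import random

def midgame_blocks(games, block_len=40, min_start=10):
    """games: iterable of move-token lists. Yields blocks starting at least
    `min_start` plies into each sufficiently long game."""
    for moves in games:
        if len(moves) < min_start + block_len:
            continue
        start = random.randint(min_start, len(moves) - block_len)
        yield moves[start:start + block_len]
```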