zlacker

[return to "Chess-GPT's Internal World Model"]
1. sjducb+Z52[view] [source] 2024-01-07 14:35:45
>>homarp+(OP)
I’m curious how human-like this LLM feels when you play against it.

One of the challenges in making fun chess bots is getting them to play like a low- or mid-ranked human. A Stockfish-based bot knows some very strong moves but deliberately plays bad ones so it lands at about the right skill level, and those bad moves are often very obvious. For example, I’ll threaten a queen capture. Any human would see it and move their queen; the bot “blunders” and loses the queen to the obvious attack. It feels like the bot is letting you win, which kills the enjoyment of playing it.
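A minimal sketch of why that kind of weakening feels wrong. The move names and evaluation scores below are entirely made up (not real engine output): a naive "aim for a worse score" picker will happily hang the queen, while even a crude "never play outright losing moves" filter behaves more like a weak human.

```python
# Hypothetical candidate moves with invented evaluations in pawns,
# positive meaning good for the bot. "Qd1" retreats the attacked
# queen; the other two ignore the threat and lose her.
candidates = {
    "Qd1": +0.3,
    "Nf3": -9.0,
    "a3":  -9.0,
}

def weakened_pick(moves, target=-5.0):
    """Naive weakening: play the move whose eval is closest to some
    below-optimal target. Nothing stops it choosing a move that
    drops the queen to a one-move threat."""
    return min(moves, key=lambda m: abs(moves[m] - target))

def plausible_pick(moves, blunder_cutoff=-3.0):
    """Slightly more human: discard moves that lose material
    outright, then play the weakest remaining move."""
    safe = {m: s for m, s in moves.items() if s > blunder_cutoff}
    return min(safe, key=safe.get) if safe else max(moves, key=moves.get)

print(weakened_pick(candidates))   # hangs the queen
print(plausible_pick(candidates))  # saves the queen, plays modestly
```

Real Stockfish weakening (e.g. its Skill Level option) is more sophisticated than this, but the failure mode the parent describes is the same shape: the weakening criterion has no notion of which bad moves a human would actually consider.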

I think this approach would create very human-like games.

2. Solven+gf2[view] [source] 2024-01-07 15:39:53
>>sjducb+Z52
Isn't that more of a design issue than a bot AI issue?
3. HarHar+ki2[view] [source] 2024-01-07 16:01:46
>>Solven+gf2
I'd call it an approach issue: LLM vs brute-force lookahead.

An LLM predicts what comes next according to its training set. If it's trained on human games, it should play like a human; if it's trained on Stockfish games, it should play more like Stockfish.

Stockfish, or any chess engine using brute-force lookahead, is just trying to find the optimal move - not copying any style of play - and its moves are therefore sometimes going to look very un-human. Imagine the human player is looking 10-15 moves ahead, but Stockfish 40-50 moves ahead... what looks good 40-50 moves out can be quite different from what looks good to the human.
