zlacker

[return to "Chess-GPT's Internal World Model"]
1. sinuhe+4Z 2024-01-07 00:58:10
>>homarp+(OP)
"World model" might be too big a word here. When we talk of a world model (in the context of AI models), we refer to the system's understanding of the world, at least within the context it was trained on. But what I see is just a visualization of the output in a fashion similar to a chess board. Stronger evidence would be, for example, a map of next-move probabilities, which would show whether it truly understood the game's rules. If it assigns nonzero probability to illegal moves, that would show us why it sometimes makes illegal moves, and that it obviously didn't fully understand the rules of the game.
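To make the proposed test concrete, here is a minimal sketch of measuring how much next-move probability mass falls on illegal moves. It is not Chess-GPT's actual interface: `score_move` is a random stub standing in for the model's scoring of a candidate move, and the python-chess library supplies the rules.

```python
import math
import random

import chess  # pip install python-chess


def score_move(pgn_prefix: str, san: str) -> float:
    # Random stub standing in for the model's unnormalized log-score
    # of the candidate move `san` following `pgn_prefix`.
    return random.gauss(0.0, 1.0)


def illegal_mass(board: chess.Board, pgn_prefix: str, candidates: list[str]) -> float:
    # Softmax over the candidate moves, then sum the mass on illegal ones.
    legal = {board.san(m) for m in board.legal_moves}
    logits = [score_move(pgn_prefix, san) for san in candidates]
    peak = max(logits)
    probs = [math.exp(l - peak) for l in logits]
    total = sum(probs)
    return sum(p for p, san in zip(probs, candidates) if san not in legal) / total


board = chess.Board()
# Mix every legal opening move with two moves that are illegal here.
candidates = [board.san(m) for m in board.legal_moves] + ["Qh5", "Ke2"]
print(f"probability mass on illegal moves: {illegal_mass(board, '1.', candidates):.3f}")
```

A model that had internalized the rules should drive this number toward zero; persistent mass on illegal moves would support the comment's objection.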
2. canjob+i11 2024-01-07 01:18:48
>>sinuhe+4Z
No, it is not a visualization of the output; it is a visualization of the information about pawn positions contained in the model's internal state.
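For readers unfamiliar with the technique: the standard approach is to train a linear probe on the model's hidden activations to predict what sits on a given square. Below is a minimal sketch with random placeholder data; the shapes and label scheme are illustrative assumptions, not taken from the actual Chess-GPT code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data: in the real setup, X holds hidden activations captured
# while the model reads PGN strings, and y labels the contents of one square
# (e.g. 0 = empty, 1 = white pawn, 2 = black pawn) at each recorded position.
n_samples, d_model = 2000, 512
X = np.random.randn(n_samples, d_model)
y = np.random.randint(0, 3, size=n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Held-out accuracy far above chance would mean the square's contents are
# linearly decodable from the internal state, i.e. not just read off the output.
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")
```

Because the probe only sees activations, not the model's output distribution, above-chance accuracy is evidence about the internal state itself, which is exactly the distinction being made here.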