zlacker

1. hhjink+(OP) 2023-02-09 14:47:20
I don't think that follows, necessarily. Chess has an unfathomable number of states. While the LLM might be able to play chess competently, I would not say it has learned chess unless it can judge the relative strength of various moves. From my understanding, an LLM will not evaluate future states of a chess game when responding to such a prompt. Without that ability, it's no different from someone receiving anal bead communications from Magnus Carlsen.
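For illustration, here is a rough sketch of what "judging future states" would look like as an explicit 2-ply material search (python-chess, the piece values, and the depth are all assumptions for the example, not anything an LLM does when it emits a move):

    import chess

    # Crude material values; the king counts as 0 since it can't be captured.
    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def material(board, color):
        # Material balance from `color`'s point of view.
        score = 0
        for piece in board.piece_map().values():
            value = PIECE_VALUES[piece.piece_type]
            score += value if piece.color == color else -value
        return score

    def judge_move(board, move, depth=2):
        # Play the move, look ahead, and score the resulting position.
        # This explicit look-ahead is the part that a single forward pass
        # of next-token prediction does not perform.
        board.push(move)
        if depth == 1 or board.is_game_over():
            score = material(board, not board.turn)
        else:
            # Assume the opponent answers with their own best-scoring reply.
            score = -max(judge_move(board, reply, depth - 1)
                         for reply in list(board.legal_moves))
        board.pop()
        return score

    board = chess.Board()
    best = max(list(board.legal_moves), key=lambda m: judge_move(board, m))
    print(best)
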
replies(1): >>bioeme+O11
2. bioeme+O11 2023-02-09 18:19:12
>>hhjink+(OP)
An LLM could theoretically create a model with which to understand chess and predict the next move; you just need to adjust the training data and train the model until that behavior appears.

The expressiveness of language lets this be true of almost everything.
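As a rough sketch of that idea (PyTorch, the toy games, and the tiny model below are placeholders I'm assuming, not any real training setup), you would feed the model move sequences and train it to predict the next one:

    import torch
    import torch.nn as nn

    # Hypothetical toy corpus: games as sequences of move tokens.
    games = [
        "e4 e5 Nf3 Nc6 Bb5 a6".split(),
        "d4 d5 c4 e6 Nc3 Nf6".split(),
    ]
    vocab = sorted({move for game in games for move in game})
    stoi = {move: i for i, move in enumerate(vocab)}

    class NextMoveModel(nn.Module):
        def __init__(self, vocab_size, dim=32):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.rnn = nn.GRU(dim, dim, batch_first=True)
            self.head = nn.Linear(dim, vocab_size)

        def forward(self, tokens):
            # Each position predicts the following move from the prefix so far.
            hidden, _ = self.rnn(self.embed(tokens))
            return self.head(hidden)

    model = NextMoveModel(len(vocab))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(200):
        for game in games:
            ids = torch.tensor([[stoi[m] for m in game]])
            logits = model(ids[:, :-1])   # prefixes
            targets = ids[:, 1:]          # the moves that actually followed
            loss = loss_fn(logits.reshape(-1, len(vocab)), targets.reshape(-1))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    # After training, the model assigns probabilities to next moves the same
    # way it assigns them to next words: pattern completion, not search.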
