From the Wikipedia page on one of the strongest ever[1]: "Like Leela Zero and AlphaGo Zero, Leela Chess Zero starts with no intrinsic chess-specific knowledge other than the basic rules of the game. Leela Chess Zero then learns how to play chess by reinforcement learning from repeated self-play."
Leela, by contrast, requires a specialized structure of iterative tree searching to generate move recommendations: https://lczero.org/dev/wiki/technical-explanation-of-leela-c...
Which is not to diminish the work of the Leela team at all! But I find it fascinating that an unmodified GPT architecture can build up internal neural representations that correspond closely to board states, despite not having been designed for that task. As they say, attention may indeed be all you need.
>> As they say, attention may indeed be all you need.
I don't think drawing general conclusions about intelligence from a board game is warranted. We didn't evolve to play chess or Go.
Sometimes it is not a matter of "is it better? is it larger? is it more efficient?", but just a question.
Mountains are mountains, men are men.
I am pretty sure a bunch of matrix multiplications can't intuit anything.
Naively, it doesn't seem very surprising that enormous amounts of self-play cause the internal structure to reflect the inputs and outputs?
Pretty shit for a computer. He says his 50M model reached 1800 Elo (by the way, it's Elo and not ELO as the article incorrectly has it; it's named after a Hungarian guy called Elo). From the bar graph it seems to be a bit better than Stockfish level 1 and a bit worse than Stockfish level 2.
Based on what we know, I think it's not surprising these models can learn to play chess, but they get absolutely smoked by a "real" chess bot like Stockfish or Leela.
Right. Wait, are you talking about AI or humans?
What's kind of amazing is that, in doing so, it actually learns to play chess! That is, the model weights naturally organize into something resembling an understanding of chess, just by trying to minimize error on next-token prediction.
It makes sense, but it's still kind of astonishing that it actually works.
I don't understand how people can say things like this when universal approximation is an easy thing to prove. You could reproduce Magnus Carlsen's exact chess-playing stochastic process with a bunch of matrix multiplications and nonlinear activations, up to arbitrarily small error.
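For the record, the classical statement (Cybenko/Hornik) really is short: for any continuous f on a compact set K and any ε > 0, a single hidden layer with enough units already gets within ε everywhere on K. In symbols:

    \exists N, a_i, w_i, b_i : \quad \sup_{x \in K} \Bigl| f(x) - \sum_{i=1}^{N} a_i \, \sigma(w_i^{\top} x + b_i) \Bigr| < \varepsilon

Existence says nothing about how to find those weights, of course; that separate question comes up further down the thread.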
According to figure 6b [0], removing MCTS reduces Elo by about 40%; scaling 1800 Elo by 5/3 gives us 3000 Elo, which would be superhuman but not as good as e.g. LeelaZero.
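Spelling out the arithmetic (assuming the 40% reduction applies multiplicatively, i.e. search-free play keeps 3/5 of the searchful strength):

    1800 / (1 - 0.40) = 1800 \times 5/3 = 3000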
[0]: https://gwern.net/doc/reinforcement-learning/model/alphago/2...
This goes both ways, by the way. I could be convinced that LLMs can achieve something like intuition, but I strongly believe that it is a very different kind of intuition than the one we normally associate with humans/animals. Using the same label is thus potentially confusing, and (human pride aside) might even prevent us from appreciating the full scope of what LLMs are capable of.
It's still too strong a claim, given that matrix multiplication also describes quantum mechanics, and by extension chemistry, and by extension biology, and by extension our own brains… But I frequently encounter examples of two related concepts being mistaken for synonyms, and I assume in this case it is meant as a weaker claim about LLMs not being conscious.
Me, I think the word "intuition" is fine, just like I'd say that a tree falling in a forest with no one to hear it does produce a sound because sound is the vibration of the air instead of the qualia.
It's the active, iterative thinking and planning that is more critical for AGI and, while obviously theoretically possible, much harder to imagine a neural network performing.
If someone came to the table with "intuition is the process of a system inferring a likely outcome from given inputs by the process X - not to be confused with matmultuition which is process Y", that might be a reasonable proposal.
That's not a problem. You can show that neural-network-induced functions are dense in a bunch of function spaces, just like continuous functions. Regularity is not a critical concern anyway.
> functions vs algorithms
Repeatedly applying arbitrary functions to a memory (like in a transformer) yields arbitrary dynamical systems, so we can do algorithms too.
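A toy version of that point in Python (my own sketch, nothing transformer-specific): iterate one fixed update function over a state and you get an algorithm, here Euclid's GCD:

    def step(state):
        """One application of the fixed update rule (Euclid's algorithm)."""
        a, b = state
        return (b, a % b) if b else (a, b)

    def run(state, max_iters=100):
        """Iterate the same function, like stacked layers sharing weights."""
        for _ in range(max_iters):
            new_state = step(state)
            if new_state == state:  # fixed point reached
                return state
            state = new_state
        return state

    print(run((252, 105)))  # -> (21, 0): gcd(252, 105) = 21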
> an approximator being possible and us knowing how to construct it are very different things,
This is of course the critical point, but not so relevant when asking whether something is theoretically possible. The way I see it, this was the big question for deep learning, and over the last decade the evidence has continually grown that SGD is VERY good at finding weights that do in fact generalize quite well. The resulting networks don't just approximate a function from step functions the way you imagine an approximation theorem constructing it; they efficiently find features in the intermediate layers and reuse them for multiple purposes, etc. My intuition is that the gradient in high dimensions doesn't just decrease the loss a bit, the way we imagine it in a low-dimensional plot, but really finds directions that are immensely efficient at decreasing loss. This is how transformers can become so extremely good at memorization.
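For anyone who hasn't seen the loop in question stripped down, this is all SGD is, here fitting sin(x) with one hidden layer in plain numpy (a toy of my own; real training differs in scale, not in kind):

    import numpy as np

    # Toy SGD: fit y = sin(x) with one hidden layer of tanh units.
    rng = np.random.default_rng(0)
    X = rng.uniform(-np.pi, np.pi, (256, 1))
    Y = np.sin(X)

    H = 32                                   # hidden width
    W1, b1 = rng.normal(0, 1, (1, H)), np.zeros(H)
    W2, b2 = rng.normal(0, 0.1, (H, 1)), np.zeros(1)
    lr = 0.05

    for _ in range(5000):
        h = np.tanh(X @ W1 + b1)             # forward pass
        pred = h @ W2 + b2
        err = pred - Y                       # gradient of MSE/2 wrt pred
        # backward pass (chain rule by hand)
        gW2 = h.T @ err / len(X)
        gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h**2)       # tanh' = 1 - tanh^2
        gW1 = X.T @ dh / len(X)
        gb1 = dh.mean(0)
        # gradient step
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    print(float((err**2).mean()))            # loss shrinks toward ~0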
More likely, the 16 million games just contain most of the piece-move combinations. It does not know a knight moves in an L. It knows from each square where a knight can move, based on 16 million games.
The representation of the ruleset may not be optimal in Kolmogorov complexity - but for an experienced human player who can glance at a board and know what is and isn’t legal, who is to say that their mental representation of the rules is optimizing for Kolmogorov complexity either?
1. https://en.m.wikipedia.org/wiki/Shannon_number#:~:text=After....
We also don't know what internal representations of the state of play it's using other than what the author has discovered via probes... Maybe it has other representations effectively representing where pieces are (or what they may do next) other than just the board position.
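(For anyone unfamiliar: the probes in question are, roughly, linear classifiers trained on frozen hidden activations. Something like the sketch below, where the .npy files and the per-square labeling scheme are placeholders of mine, not the author's actual artifacts:)

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical data: one activation vector per position, plus, for one
    # fixed square (say e4), a label for which piece occupies it.
    acts = np.load("activations.npy")   # shape (n_positions, d_model) -- placeholder
    labels = np.load("e4_labels.npy")   # shape (n_positions,)         -- placeholder

    probe = LogisticRegression(max_iter=1000).fit(acts[:8000], labels[:8000])
    # High held-out accuracy => the square's contents are linearly decodable.
    print(probe.score(acts[8000:], labels[8000:]))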
I'm guessing that it's just using all of its learned representations to recognize patterns where, for example, Nf3 and Nh3 are both statistically likely, and has no spatial understanding of the relationship between these moves.
I guess one way to explore this would be to generate a controlled training set where each knight only ever makes a different subset of its (up to 8) legal moves depending on which square it is on. Will the model learn a generalization that all L-shaped moves are possible from any square, or will it memorize the different subset of moves that "are possible" from each individual square?
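Sketching that setup with the python-chess package (my own stab at it; the two-held-out-moves-per-square choice is the part I'm inventing):

    import random
    import chess

    # For each square, fix a random proper subset of the knight's destinations.
    # Training games get filtered so knights only ever make subset moves; at
    # test time, check whether the model puts any mass on the held-out moves.
    random.seed(0)
    allowed = {}
    for sq in chess.SQUARES:
        board = chess.Board(None)  # empty board
        board.set_piece_at(sq, chess.Piece(chess.KNIGHT, chess.WHITE))
        dests = [m.to_square for m in board.legal_moves]
        k = max(1, len(dests) - 2)  # hold out ~2 destinations per square
        allowed[sq] = set(random.sample(dests, k))

    print(sorted(chess.square_name(d) for d in allowed[chess.E4]))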
Maybe I misread something as I only skimmed, but the pretty weak Elo would most definitely suggest a failure to intuit the rules.
A different way to test the internal state of the model would be to score all possible valid and invalid moves at every position and see how the probabilities of these moves change as a function of the player's Elo rating. One would expect that invalid moves would always score poorly independent of Elo, whereas valid moves would score monotonically with how good they are (as assessed by Stockfish), and that the player's Elo would stretch that monotonic function to separate the best moves from the weakest moves for a strong player.
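Concretely, something like this sketch - where `move_probs` stands in for whatever scoring interface the model exposes (not specified here), and Stockfish supplies the "how good is this move" axis via python-chess:

    import chess
    import chess.engine

    def split_move_scores(board, move_probs):
        """move_probs: hypothetical dict of UCI string -> model probability,
        covering all candidate moves, legal or not."""
        legal = {m.uci() for m in board.legal_moves}
        valid = {u: p for u, p in move_probs.items() if u in legal}
        invalid = {u: p for u, p in move_probs.items() if u not in legal}
        return valid, invalid

    def stockfish_scores(board, engine, moves, depth=12):
        """Score each legal move with Stockfish for the quality axis."""
        out = {}
        for u in moves:
            info = engine.analyse(board, chess.engine.Limit(depth=depth),
                                  root_moves=[chess.Move.from_uci(u)])
            out[u] = info["score"].relative.score(mate_score=100000)
        return out

    # Usage sketch (needs a local Stockfish binary):
    # engine = chess.engine.SimpleEngine.popen_uci("stockfish")
    # valid, invalid = split_move_scores(board, move_probs)
    # quality = stockfish_scores(board, engine, valid)

The prediction above then reads: sum(invalid.values()) stays near zero at every Elo conditioning, while the probability-vs-quality curve over valid moves gets steeper as the conditioned Elo rises.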
You’re really wishing a lot more in to AI than is actually there.
Certainly a lot of folks wish more into kids than is really there.
That is literally, literally, what it does.
One may argue that it does so wrongly, but that's a different claim entirely.
> there’s no reason to imply they do
The predictions matching reality to the best of our collective abilities to test them is such a reason.
The saying that "all models are wrong but some are useful" is a reason against that.
I don't think it makes sense to talk of the model (potentially) knowing that knights make L-shaped moves (i.e. 2 squares left or right, plus 1 square up or down, or vice versa) unless it is able to add/subtract row/column numbers to be able to determine the squares it can move to on the basis of this (hypothetical) L-shaped move knowledge.
Being able to do row/column math is essentially what I mean by spatial representation - that it knows the spatial relationships between rows ("1"-"8") and columns ("a"-"h"), such that if it had a knight on e1 it could then use this L-shaped move knowledge to do coordinate math like e1 + (1,2) = f3.
I rather doubt this is the case. I expect the board representation is just a map from square name (not coordinates) to the piece on that square, and that generated moves are likely limited to those it saw the piece make when it had been on the same square during training - i.e. it's not calculating possible, say, knight destinations based on an L-shaped move generalization, but rather "recalling" a move it had seen during training when (among other things) it had a knight on a given square.
Somewhat useless speculation perhaps, but would seem simple and sufficient, and an easy hypothesis to test.
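It is indeed easy to state the two mechanisms precisely. In Python (my sketch, deliberately dependency-free so the coordinate arithmetic is explicit):

    # Mechanism A: generalization -- coordinate math with the eight L-offsets.
    FILES = "abcdefgh"
    OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
               (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

    def knight_dests_by_math(square):
        f, r = FILES.index(square[0]), int(square[1])
        out = []
        for df, dr in OFFSETS:
            nf, nr = f + df, r + dr
            if 0 <= nf < 8 and 1 <= nr <= 8:
                out.append(FILES[nf] + str(nr))
        return sorted(out)

    # Mechanism B: recall -- a per-square table built purely from moves
    # observed in training data (a hypothetical stand-in dict here).
    seen_moves = {"e1": ["f3", "d3", "c2"]}  # imagined partial coverage

    print(knight_dests_by_math("e1"))  # ['c2', 'd3', 'f3', 'g2'] -- e1 + (1,2) = f3, etc.
    print(seen_moves.get("e1"))        # only what training happened to contain

With complete training coverage the two are behaviorally identical; distinguishing them is exactly what the held-out-move experiment proposed upthread would do.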
Positions analyzed per $, per W, and per watt-dollar are surely much, much higher, though ;)