zlacker

1. joe_th+(OP)[view] [source] 2025-04-09 19:15:10
LLMs did scale with huge data, symbolic AI did not. So why? [1]

Neural networks, not LLMs in particular, were just about the simplest thing that could scale - they scaled and everything else has been fine-tuning. Symbolic AI basically begins with existing mathematical models of reality and of human reason and indeed didn't scale.

The problem imo is: the standard way mathematical modeling works[2] is that you have a triple of <data, model-of-data, math-formalism>. The math formalism characterizes what the data could be, how the data diverges from reality, etc. The trouble is that the math formalism really doesn't scale even if a given model scales[3]. So even if you plugged numbers into some other math model and got a reality-approximation like an LLM, it would still be a black box like an LLM, because the meta-information would be just as opaque.

Consider the way Judea Pearl rejected certainty factors and claimed probabilities were needed as the building blocks for approximate reasoning systems. But a look at human beings, animals, or LLMs shows that things that "deal with reality" don't have, and couldn't get access to, "real" probabilities.

I'd just offer that I believe that for a model to scale, the vast majority of its parameters would have to be mathematically meaningless to us. And that's for the reasons above.

[1]. Really key point, imo.
[2]. That includes symbolic and probabilistic models, "at the end of the day".
[3]. Contrast the simplicity of plugging data into a regression model versus the multitude of approaches explaining regression, loss/error functions, etc.
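To make footnote [3]'s contrast concrete, here's a minimal sketch (synthetic data, numpy only): the *fit* is one call, while the meta-information — what the estimates mean, under what assumptions — is where the formalism lives, and it doesn't come along for free.

```python
# The "plugging data into a regression model" half: trivially easy.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, 50)   # made-up "reality": linear + noise

X = np.column_stack([x, np.ones_like(x)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # beta = [slope, intercept]

# The "multitude of approaches" half: interpreting the fit needs the
# formalism -- an error model, residual variance, parameter covariance --
# all resting on assumptions (independent Gaussian noise, correct
# functional form, ...) that the one-line fit above never checked.
resid = y - X @ beta
sigma2 = resid @ resid / (len(y) - 2)     # noise variance estimate
cov = sigma2 * np.linalg.inv(X.T @ X)     # parameter covariance
se = np.sqrt(np.diag(cov))                # standard errors

print("fit:", beta)
print("standard errors:", se)
```

The fit itself scales to any dataset you pour in; the interpretive scaffolding is what stays hand-built.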
