I suspect that McCarthy was on to something with the context thing. Organic intelligence certainly fails in creative ways without context; it would not be disqualifying to have AI fail in similarly spectacular ways.
[1] I made a bit of progress on this by thinking of it as a kind of permeability, such that the higher the weight, the easier it was to 'pass through' this particular neuron, but the cyclic nature of the graph makes a purely topological explanation pretty obtuse :-).
https://distill.pub/2020/circuits/ https://transformer-circuits.pub/2025/attribution-graphs/bio...
Like the rock salad, you're mixing up two disparate contexts here. Symbolic AI, like SAT solvers and planners, is not trying to learn from data, and there's no context in which it has to "scale with huge data".
Instead, what modern SAT solvers and planners do is even harder than "scaling with data" - which, after all, today means having imba hardware and using it well. SAT solving and planning can't do that: SAT is NP-complete and planning is PSPACE-complete, so it really doesn't matter how much you "scale" your hardware; those are not problems you can solve by scaling, ever.
And yet, today both SAT and planning are solved problems. NP-complete? Nowadays, that's a piece of cake. There are dedicated solvers for all the classical sub-categories of SAT, and modern planners can solve planning problems that require sequences of thousands of actions. Hell, modern planners can even play Atari games from pixels alone, and do very well indeed [1].
So how did symbolic AI manage those feats? Not with bigger computers, but precisely with the approach that the article above seems to think has failed to produce any results: heuristic search. In SAT solving, the dominant approach is an algorithm called Conflict-Driven Clause Learning (CDCL), which is designed to exploit the special structure of SAT problems. In planning and scheduling, heuristic search was always used, but work really took off in the '90s when people realised that they could automatically estimate a heuristic cost function from the structure of a planning problem.
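To make the flavour of that concrete, here is a toy sketch of my own (not taken from any particular solver): plain DPLL-style backtracking with unit propagation, which is the skeleton that CDCL extends with conflict-driven clause learning, watched literals, restarts and so on.

    # Toy DPLL-style SAT search: clauses are lists of non-zero ints,
    # a negative int is a negated variable (DIMACS-style literals).
    def unit_propagate(clauses, assignment):
        # Repeatedly assign literals forced by unit clauses; None means conflict.
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                if any(lit in assignment for lit in clause):
                    continue                      # clause already satisfied
                free = [lit for lit in clause if -lit not in assignment]
                if not free:
                    return None                   # clause falsified: conflict
                if len(free) == 1:
                    assignment.add(free[0])       # forced (unit) literal
                    changed = True
        return assignment

    def dpll(clauses, assignment=frozenset()):
        assignment = unit_propagate(clauses, set(assignment))
        if assignment is None:
            return None                           # dead end, backtrack
        unassigned = {abs(l) for c in clauses for l in c} - {abs(l) for l in assignment}
        if not unassigned:
            return assignment                     # every clause satisfied
        v = min(unassigned)                       # naive branching; real solvers use VSIDS etc.
        return dpll(clauses, assignment | {v}) or dpll(clauses, assignment | {-v})

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(dpll([[1, 2], [-1, 3], [-2, -3]]))      # e.g. {1, 3, -2}

Where real CDCL solvers go further is that on every conflict they derive a new clause explaining it and add it to the formula, so the same dead end is never explored twice.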
There are parallel and similar approaches everywhere you look in classical AI problems - verification, theorem proving, etc. - and that work has even produced a few Turing Awards [2]. But do you hear about that work at all when you hear about AI research? No, because it works, and so it's not AI.
But it works, it runs on normal hardware, it doesn't need "scale" and it doesn't need data. You're measuring the wrong thing with the wrong stick.
____________
[1] Planning with Pixels in (Almost) Real Time: https://arxiv.org/pdf/1801.03354 Competitive results with humans and RL. Bet you didn't know that.
[2] E.g. Pnueli for temporal logic in verification, or Clarke, Emerson and Sifakis, for model checking.
Symbolic AI has not had the privilege of being applied to or "trained" on huge data. 30 million assertions is not a big number.
Actually, it does. Conflict-Driven Clause Learning (CDCL) learns from conflicts encountered while working on the data. The space of inputs these solvers deal with is often on the order of the number of atoms in the universe, and that is huge.
CYC was an interesting experiment, though. Even though it might have been expected to be brittle due to the inevitable knowledge gaps etc., it seems there was something more fundamentally wrong with the approach for it not to have been more capable. An LLM could also be regarded as an expert system of sorts (learning its own rules from the training data), but some critical differences are perhaps that the LLM's rules are as much about recognizing the context in which to apply a rule as about what the rule itself does, and that the rules are generative rather than declarative - directly driving behavior rather than just defining a deductive closure.
This was the quote I resonated with :-)
"... the discoveries we highlight here only capture a small fraction of the mechanisms of the model."
It sometimes feels a bit like papers on cellular biology and DNA, where the descriptions of the enzymes and proteins involved are insightful but the mechanism that drives the reaction remains opaque.
Neural networks, not LLMs in particular, were just about the simplest thing that could scale - they scaled, and everything else has been fine-tuning. Symbolic AI basically begins with existing mathematical models of reality and of human reason, and indeed didn't scale.
The problem, imo, is this: the standard way mathematical modeling works[2] is that you have a triple of <data, model-of-data, math-formalism>. The math formalism characterizes what the data could be, how the data diverges from reality, etc. The trouble is that the math formalism really doesn't scale even if a given model scales[3]. So even if you were to start plugging numbers into some other math model and get a reality-approximation like an LLM, it would be a black box like an LLM, because the meta-information would be just as opaque.
Consider the way Judea Pearl rejected confidence intervals and claimed probabilities were needed as the building blocks for approximate reasoning systems. But a look at human beings, animals or LLMs shows that things that "deal with reality" don't have, and couldn't have access to, "real" probabilities.
I'd just offer that I believe that for a model to scale, the vast majority of its parameters would have to be mathematically meaningless to us. And that's for the above reasons.
[1] Really key point, imo.
[2] That includes symbolic and probabilistic models "at the end of the day".
[3] Contrast the simplicity of plugging data into a regression model versus the multitudes of approaches explaining regression and loss/error functions etc.
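As a throwaway illustration of [3] (my own toy example, assuming numpy is available): actually using a regression model is a one-liner, while the interesting meta-questions - why least squares, which loss, what the residuals mean - all live outside the call.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=100)
    y = 3.0 * x + 2.0 + rng.normal(0.0, 1.0, size=100)   # noisy line y = 3x + 2

    # "Plugging data into the model" is one least-squares call...
    slope, intercept = np.polyfit(x, y, deg=1)
    print(slope, intercept)                               # roughly 3 and 2

    # ...while justifying the loss, the noise model and the error bars is the
    # part of the formalism that doesn't scale the same way.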
It seems like you are not framing NP-completeness properly. An NP-complete problem is simply worst-case hard. Such a problem can have many easily solvable instances. For some distributions of randomly generated SAT problems, most instances can be solved quickly. SAT-solving contests often involve hand-constructed instances translated from other domains, and the entrants similarly add methods for these "special cases". So NP-completeness by itself isn't a barrier to SAT solvers scaling.
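A quick way to see this empirically (a sketch of mine, assuming the python-sat package, installed with `pip install python-sat`): random 3-CNF instances below the ~4.26 clause-to-variable phase-transition ratio are almost always satisfiable and get solved essentially instantly, even though 3-SAT is NP-complete in the worst case.

    import random
    import time
    from pysat.solvers import Minisat22   # assumes the python-sat package

    def random_3cnf(n_vars, n_clauses, rng):
        # Each clause picks 3 distinct variables and negates each with prob. 1/2.
        return [[v if rng.random() < 0.5 else -v
                 for v in rng.sample(range(1, n_vars + 1), 3)]
                for _ in range(n_clauses)]

    rng = random.Random(0)
    n = 2000
    for ratio in (2.0, 3.0, 4.0):          # all below the hard region near 4.26
        clauses = random_3cnf(n, int(ratio * n), rng)
        with Minisat22(bootstrap_with=clauses) as solver:
            start = time.time()
            sat = solver.solve()
            print(f"ratio {ratio}: {'SAT' if sat else 'UNSAT'} "
                  f"in {time.time() - start:.3f}s")

Push the ratio up to the threshold and beyond and the same generator starts producing genuinely hard instances; that's where the worst-case behaviour (and the crafted contest instances) lives.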
This is an important point. Hard "AI" problems are no longer "AI" once we have good algorithms and/or heuristics to solve them.
Resolution can be used inductively, and also for abduction, but that's going into the weeds a bit - it's the subject of my PhD thesis. Let me know if you're in the mood for a proper diatribe :)
[1] https://www.britannica.com/dictionary/learning
[2] https://en.wikipedia.org/wiki/Learning
"Learning" in CDCL is perfectly in line of "gaining knowledge."[1] https://www.cs.cmu.edu/~mheule/publications/prencode.pdf
You know, this seems like yet another reason to allow HN users to direct message each other, or at least receive reply notifications. Dang, why can't we have nice things?
Oh gosh I gotta do some work today, so no time to write what I wanted. Maybe watch this space? I'll try to make some time later today.