I'm old enough to have lived through the hope and ultimate failure of Lenat's baby, CYC. The CYC project was initiated in 1984, in the heyday of expert systems, which had been successful in many domains. The idea of an expert system was to capture the knowledge and reasoning power of a subject matter expert in a system of declarative logic and rules.
CYC was going to be the ultimate expert system, capturing human common sense knowledge about the world via a MASSIVE knowledge/rule set (initially estimated as a 1000 man-year project) describing how everyday objects behaved. The hope was that, through sheer scale and completeness, it would be able to reason about the world in the same way as a human who had gained the same knowledge through embodiment and interaction.
The CYC project continued for decades with a massive team of people encoding rules according to its own complex ontology, but ultimately never met its goals. In retrospect it seems the idea was doomed to failure from the beginning, but nonetheless it was an important project that needed to be tried. The problem with any expert system reasoning over a fixed knowledge set is that it's always going to be "brittle" - it may perform well for cases wholly within what it knows about, but then fail when asked to reason about things where common sense knowledge and associated extrapolation of behavior is required; CYC hoped to avoid this via scale, by being so complete that there were no important knowledge gaps.
I have to wonder if LLM-based "AIs" like GPT-4 aren't in some ways very similar to CYC, in that they are ultimately also giant expert systems, but with the twist that they learnt their knowledge, rules and representations/reasoning mechanisms from a training set rather than having them laboriously hand-entered. The end result is much the same though - an ultimately brittle system whose Achilles' heel is that it is based on a fixed set of knowledge rather than being able to learn from its own mistakes and interact with the domain it is attempting to gain knowledge over. There's a similar hope to CYC's of scaling these LLMs up to the point that they know everything and the brittleness disappears, but I suspect that will ultimately prove a false hope, and real AIs will need to learn through experimentation just as we do.
RIP Doug Lenat. A pioneer of the computer age and of artificial intelligence.
Lenat deserves credit for two early insights:
1. Recognizing that AI was a scale problem.
2. Understanding that common sense was the core problem to solve.
Although you say Cyc couldn't do common sense reasoning, wasn't that actually a major feature they liked to advertise? IIRC a lot of Cyc demos were various forms of common sense reasoning.
I once played around with OpenCyc back when that was a thing. It was interesting because they'd had to solve a lot of problems that smaller, more theoretical systems never did. One of their core features is called microtheories. The idea of a knowledge base is that it's internally consistent and can thus have formal logic performed on it, but real-world knowledge isn't like that. Microtheories let you encode contradictory knowledge about the world in such a way that it layers on top of a more consistent foundation; a rough sketch of the idea is below.
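Here's a minimal Python sketch of that layering, under my own toy representation (the class and method names are illustrative, not OpenCyc's actual API, and it ignores Cyc's real inheritance and overriding semantics): each assertion lives in a context, and a query only sees the contexts it inherits from, so contradictory facts can coexist without making the whole knowledge base inconsistent.

```python
from dataclasses import dataclass, field

@dataclass
class Microtheory:
    """A named context holding facts, inheriting from parent contexts."""
    name: str
    parents: list = field(default_factory=list)
    facts: set = field(default_factory=set)

    def assert_fact(self, fact):
        self.facts.add(fact)

    def visible_facts(self):
        # A query in this context sees its own facts plus everything
        # inherited from its parents (transitively).
        seen = set(self.facts)
        for parent in self.parents:
            seen |= parent.visible_facts()
        return seen

    def holds(self, fact):
        return fact in self.visible_facts()

# Shared, consistent common-sense foundation.
base = Microtheory("BaseKB")
base.assert_fact(("isa", "Dracula", "Vampire"))

# Two mutually contradictory layers on top of the same foundation:
# contradictions are fine because they live in separate contexts.
fiction = Microtheory("FictionMt", parents=[base])
fiction.assert_fact(("exists", "Dracula"))

reality = Microtheory("RealWorldMt", parents=[base])
reality.assert_fact(("not-exists", "Dracula"))

print(fiction.holds(("exists", "Dracula")))      # True
print(reality.holds(("exists", "Dracula")))      # False in this context
print(reality.holds(("isa", "Dracula", "Vampire")))  # True, inherited
```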
A more fundamental problem with the Cyc approach was that the core inference algorithms don't scale well to large knowledge bases. Microtheories were also a way to constrain that computational complexity, by limiting how many facts any one inference has to consider. LLMs work partly because people found ways to make them scale on GPUs; there's no equivalent for Cyc's predicate logic algorithms.
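To make the scaling point concrete, here's a toy forward-chainer (my own illustration, not Cyc's actual engine): each pass tries every rule against every tuple of known facts, so a single binary rule already costs O(n^2) per pass over n facts, and a realistic rule set over millions of assertions is hopeless. Restricting inference to one microtheory's visible facts is one way to keep n small.

```python
from itertools import product

def forward_chain(facts, rules):
    """Naive forward chaining: apply rules until no new facts appear.

    Each rule is (arity, fn), where fn takes `arity` facts and returns
    a derived fact or None. The inner loop is O(n^arity) per pass.
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for arity, rule in rules:
            # Snapshot the fact set so we can safely add to it mid-pass.
            for combo in product(list(facts), repeat=arity):
                derived = rule(*combo)
                if derived is not None and derived not in facts:
                    facts.add(derived)
                    changed = True
    return facts

# One toy rule: transitivity of "part-of".
def transitive(f1, f2):
    if f1[0] == "part-of" and f2[0] == "part-of" and f1[2] == f2[1]:
        return ("part-of", f1[1], f2[2])
    return None

facts = {("part-of", "piston", "engine"), ("part-of", "engine", "car")}
print(forward_chain(facts, [(2, transitive)]))
# Derives ("part-of", "piston", "car"). With millions of facts and
# hundreds of rules this quadratic-or-worse search explodes, which is
# part of why Cyc needed microtheories to bound the search space.
```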