CYC was an interesting experiment, though. One would have expected it to be brittle due to inevitable knowledge gaps and the like, but its failure to be more capable than it was suggests something more fundamentally wrong with the approach. An LLM could also be regarded as an expert system of sorts, one that learns its own rules from the training data, but there seem to be a couple of critical differences. First, an LLM's rules are as much about recognizing the context in which a rule applies as about what the rule itself does. Second, the rules are generative rather than declarative: they directly drive behavior rather than just extending a deductive closure.
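
To make that contrast concrete, here's a toy sketch in Python (everything in it is made up for illustration; it's not CYC's actual rule format or anything an LLM literally does). A declarative rule base just sits there until an inference engine forward-chains it into a deductive closure, whereas a generative "rule" is a learned conditional distribution in which recognizing the context and producing the behavior are the same step:

    # Minimal sketch (all names/rules hypothetical). Declarative, CYC-style:
    # rules assert facts, and an inference engine forward-chains them into a
    # deductive closure.
    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)  # rule fires: a new derived fact
                    changed = True
        return facts

    rules = [
        (frozenset({"bird(tweety)"}), "can_fly(tweety)"),
        (frozenset({"can_fly(tweety)"}), "has_wings(tweety)"),
    ]
    print(forward_chain({"bird(tweety)"}, rules))
    # {'bird(tweety)', 'can_fly(tweety)', 'has_wings(tweety)'}

    # Generative, LLM-style: the "rule" is a learned conditional distribution
    # over next tokens; matching the context and producing the behavior are
    # the same operation.
    import random

    def next_token(context):
        table = {("bird", "tweety"): [("flies", 0.8), ("sings", 0.2)]}
        options = table.get(tuple(context[-2:]), [("<unk>", 1.0)])
        tokens, weights = zip(*options)
        return random.choices(tokens, weights=weights)[0]

    print(next_token(["the", "bird", "tweety"]))  # usually 'flies'

The first style only ever grows the set of statements held true; the second never asserts anything at all, it just acts, which is roughly the distinction being drawn above.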