Wikipedia's overview: <https://en.wikipedia.org/wiki/Cyc>
Project / company homepage: <https://cyc.com/>
Its failure is no shade against Doug. Somebody had to try it, and I'm glad it was one of the brightest guys around. I think he clung on to it long after it was clear that it wasn't going to work out, but breakthroughs do happen. (The current round of machine learning itself is a revival of a technique that had been abandoned, but the people who stuck with it anyway discovered the tricks that made it go.)
Cyc is sort of like that, but for everything. Not just a small limited world. I believe it didn’t work out because it’s really hard.
So you'd use the NN to recognize that the thing in front of the camera is a cat, and that would be fed into the symbolic knowledge base for further reasoning.
The knowledge base would contain facts such as: the cat is likely to "meow" at some point, especially if it wants attention. Given the relevant context, the knowledge base would also know that the cat is very unlikely to be able to talk, unless, for example, it is a cat in a work of fiction.
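To make the hand-off concrete, here is a toy sketch of that NN-to-symbolic pipeline. It is not Cyc's actual representation or API, just a stand-in classifier feeding a tiny hand-written knowledge base with a context-dependent exception, in the spirit of the cat example above.

```python
# Toy sketch of the hybrid idea above -- not Cyc's actual representation or API.
# A neural classifier (stubbed out here) names the object; a tiny symbolic
# knowledge base then supplies common-sense expectations about it.

def neural_classifier(image) -> str:
    """Stand-in for a real NN; pretend it recognized a cat."""
    return "cat"

# Hand-written facts, loosely in the spirit of a common-sense knowledge base.
KNOWLEDGE_BASE = {
    "cat": {
        "likely_sounds": ["meow"],
        "meows_when": "it wants attention",
        "can_talk": False,
        "exceptions": {"work_of_fiction": {"can_talk": True}},
    },
}

def reason_about(label, context=None):
    """Look up the symbol produced by the NN and apply context-dependent exceptions."""
    facts = dict(KNOWLEDGE_BASE.get(label, {}))
    exceptions = facts.pop("exceptions", {})
    if context in exceptions:
        facts.update(exceptions[context])
    return facts

print(reason_about(neural_classifier(None)))
# {'likely_sounds': ['meow'], 'meows_when': 'it wants attention', 'can_talk': False}
print(reason_about("cat", context="work_of_fiction")["can_talk"])  # True
```

The point of the sketch is only the division of labor: the NN turns pixels into a symbol, and everything after that is ordinary symbolic lookup and reasoning that a human can inspect.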
Leela AI was founded by Henry Minsky and Cyrus Shaoul, and is inspired by ideas about child development from Jean Piaget, Seymour Papert, Marvin Minsky, and Gary Drescher (described in Drescher's book "Made-Up Minds").
https://mitpress.mit.edu/9780262517089/made-up-minds/
>Leela Platform is powered by Leela Core, an innovative AI engine based on research at the MIT Artificial Intelligence Lab. With its dynamic combination of traditional neural networks for pattern recognition and causal-symbolic networks for self-discovery, Leela Core goes beyond accurately recognizing objects to comprehend processes, concepts, and causal connections.
>Leela Core is much faster to train than conventional NNs, using 100x less data and enabling 10x less time-to-value. This highly resilient AI can quickly adjust to changes and explain what it is sensing and doing via the Leela Viewer dashboard. [...]
The key to regulating AI is explainability. The key to explainability may be causal AI.
https://leela.ai/post/the-key-to-regulating-ai-is-explainabi...
>[...] For example, the Leela Core engine that drives the Leela Platform for visual intelligence in manufacturing adds a symbolic causal agent that can reason about the world in a way that is more familiar to the human mind than neural networks. The causal layer can cross-check Leela Core's traditional NN components in a hybrid causal/neural architecture. Leela Core is already better at explaining its decisions than NN-only platforms, making it easier to troubleshoot and customize. Much greater transparency is expected in future versions. [...]