[0] https://writings.stephenwolfram.com/2023/09/remembering-doug...
https://github.com/orgs/stardog-union/
Looks like "knowledge graph" and "semantic reasoner" are the search terms du jour; I haven't tracked these things since OpenCyc stopped being active.
Humans may not be able to effectively trudge through the creation of trillions of little rules and facts needed for an explicit and coherent expert world model, but LLMs definitely can be used for this.
> Perhaps their time will come again.
That seems likely, once the hype around LLMs has calmed down. I hope that Cyc's data will still be available then, ideally open-source.
> https://muse.jhu.edu/pub/87/article/853382/pdf
Unfortunately paywalled; does anyone have a downloadable copy?
My bet, judging mostly from my failed attempts at playing with OpenCyc around 2009, is that Cyc has always been too closed and too complex to tinker with. That doesn't play nicely with academic work. When people finish their PhDs and start working for OpenAI, they simply don't have Cyc in their toolbox.
[1] https://www.sciencedirect.com/science/article/pii/S089360802...
My approach, Cyc's, and others are fundamentally flawed for the same reason. There's a low-level reason why deep nets work and symbolic engines don't.
[1] https://voidfarer.livejournal.com/623.html
You can label it a "bad idea", but you can't bring LLMs back in time.
"I wonder what is the closest thing to Cyc we have in the open source realm right now?".
See:
https://github.com/therohk/opencyc-kb
https://github.com/bovlb/opencyc
https://github.com/asanchez75/opencyc
Outside of that, you have the entire world of Semantic Web projects, especially things like UMBEL[1], SUMO[2], YAMATO[3], and other "upper ontologies"[4] etc.
[1]: https://en.wikipedia.org/wiki/UMBEL
[2]: https://en.wikipedia.org/wiki/Suggested_Upper_Merged_Ontolog...
Cyc was able to produce an impact. I keep pointing to MathCraft [1], which, as of 2017, had no rival in neural AI.
[1] https://www.width.ai/post/what-is-beam-search
Even a 3-gram model can produce better text predictions if you combine it with beam search.
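To make that concrete, here's a minimal sketch of beam search over a toy trigram model. The corpus, words, and beam width are all made up for the example; a real system would use smoothed counts over a large corpus.

```python
import math
from collections import defaultdict

# Toy trigram language model: count (w1, w2) -> w3 from a tiny corpus.
corpus = "the cat sat on the mat . the cat ate the fish .".split()
counts = defaultdict(lambda: defaultdict(int))
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    counts[(a, b)][c] += 1

def next_probs(w1, w2):
    """P(w3 | w1, w2) from raw trigram counts."""
    total = sum(counts[(w1, w2)].values())
    return {w: n / total for w, n in counts[(w1, w2)].items()}

def beam_search(prefix, steps, beam_width=2):
    """Keep the beam_width highest log-probability continuations per step,
    instead of greedily committing to the single best next word."""
    beams = [(0.0, list(prefix))]
    for _ in range(steps):
        candidates = []
        for logp, seq in beams:
            for w, p in next_probs(seq[-2], seq[-1]).items():
                candidates.append((logp + math.log(p), seq + [w]))
        if not candidates:
            break
        beams = sorted(candidates, reverse=True)[:beam_width]
    return beams[0][1] if beams else list(prefix)

print(" ".join(beam_search(["the", "cat"], 3)))
```

The point is that beam search explores several continuations in parallel, so even a weak n-gram model avoids some of the dead ends a greedy decoder would walk into.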
FYI: here are the release notes of the recently released Allegro CL 11.0: https://franz.com/support/documentation/current/release-note...
IIRC, Cyc gets delivered on other platforms and languages (C, JVM, ...?). It would be interesting to know what they use for deployment/delivery.
The lead author on [1] is Kathy Panton, who has no publications after that and zero internet presence as far as I can tell.
[1] Common Sense Reasoning – From Cyc to Intelligent Assistant https://iral.cs.umbc.edu/Pubs/FromCycToIntelligentAssistant-...
Expert systems were so massively oversold... and it's not at all clear that any of the "super fantastic expert" systems ever did what was claimed of them.
We definitely found out that they were, in practice, extremely difficult to build and make do anything reasonable.
The original paper on Eurisko, for instance, mentioned how the author (and founder of Cyc!) Douglas Lenat, during a run, went ahead and just hand-inserted some knowledge/results of inferences (it's been a long while since I read the paper, sorry), asserting, "Well, it would have figured these things out eventually!"
Later on, he wrote a paper titled, "Why AM and Eurisko appear to work" [0].
0: https://aaai.org/papers/00236-aaai83-059-why-am-and-eurisko-...
That's not true
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10425828/
> Why did all these (slice-of-)world-model approaches die?
Because they don't work.
Not yet. It's still early days.
> What other approaches exist?
Loosely speaking, I'd say this entire discussion falls into the general rubric of what people are calling "neuro-symbolic AI". Now within that there are a lot of different ways to try and combine different modalities. There are things like DeepProbLog, LogicTensorNetworks, etc.
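To give a feel for what "combining modalities" means, here is a toy sketch of the general neuro-symbolic shape (this is not DeepProbLog's or LogicTensorNetworks' actual API): a neural component assigns probabilities to atomic facts, and a symbolic rule combines them probabilistically. The names and numbers are invented for the example.

```python
# Probabilities for atomic facts; in a real system these would come
# from a neural network's outputs rather than a hand-written dict.
p_parent = {
    ("alice", "bob"): 0.9,
    ("bob", "carol"): 0.8,
}

def p_grandparent(x, z):
    """Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    Combine fact probabilities with a noisy-or over the middle Y,
    treating each derivation path as independent evidence."""
    p_false = 1.0
    for (a, b), p_ab in p_parent.items():
        if a != x:
            continue
        p_bz = p_parent.get((b, z), 0.0)
        p_false *= 1.0 - p_ab * p_bz
    return 1.0 - p_false

print(round(p_grandparent("alice", "carol"), 3))
```

The actual frameworks differentiate through this kind of computation so the fact probabilities can be trained end-to-end, but the structure (neural scores in, logical combination out) is the same.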
For anybody who wants to learn more, consider starting with:
https://en.wikipedia.org/wiki/Neuro-symbolic_AI
and the videos from the previous two "Neurosymbolic Summer School" events:
FWIW, KGs don't have to be brittle. Or at least, they don't have to be as brittle as they've historically been. There are approaches (like PR-OWL [1]) to making graphs probabilistic, so that they assert subjective beliefs about statements instead of absolute statements. The strength of those beliefs can then increase or decrease in response to new evidence (per Bayes' theorem). Probably the biggest problem with this stuff is that it tends to be crazy computationally expensive.
Still, there's always the chance of an algorithmic breakthrough, or just hardware improvements, bringing some of this stuff into the realm of the practical.
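As a toy illustration of the probabilistic-belief idea (not any particular framework's machinery): a triple carries a subjective probability that gets updated with each new piece of evidence via Bayes' theorem, rather than being asserted as absolutely true. The triple, prior, and likelihoods below are all invented for the example.

```python
def bayes_update(prior, likelihood, false_positive_rate):
    """Posterior P(statement | evidence), given
    P(evidence | statement) and P(evidence | not statement)."""
    numerator = likelihood * prior
    return numerator / (numerator + false_positive_rate * (1 - prior))

# Belief in the triple (Tweety, isA, Bird) starts at 0.5.
belief = 0.5

# Each observation: (P(obs | bird), P(obs | not bird)),
# e.g. "has feathers", "lays eggs".
evidence = [(0.9, 0.2), (0.8, 0.3)]
for likelihood, fpr in evidence:
    belief = bayes_update(belief, likelihood, fpr)

print(round(belief, 3))
```

Each observation shifts the belief up or down instead of flipping a boolean, which is what lets such a graph absorb noisy or conflicting evidence without breaking.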
https://en.wikipedia.org/wiki/Open_Mind_Common_Sense
https://en.wikipedia.org/wiki/Mindpixel
The leaders of both these projects committed suicide.
One of the immediate things I'm working on is a text to knowledge graph system. Yohei (creator of BabyAGI) is also working on text to knowledge graphs: https://twitter.com/yoheinakajima/status/1769019899245158648. LlamaIndex has a basic implementation.
This isn't quite connecting the system to an automated reasoner though. There is some research in this area, like: >>35735375
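For a sense of what "text to knowledge graph" means at its simplest, here's a minimal sketch that extracts (subject, predicate, object) triples with hand-written patterns. Real systems (LlamaIndex's implementation, LLM-based extractors) use far more robust parsing; this only shows the shape of the output a downstream reasoner would consume. The patterns and example sentences are made up.

```python
import re

# Pattern -> predicate name; each pattern captures subject and object.
PATTERNS = [
    (re.compile(r"^(\w+) is an? (\w+)$"), "isA"),
    (re.compile(r"^(\w+) created (\w+)$"), "created"),
]

def extract_triples(text):
    """Split text into sentences and emit (subject, predicate, object)
    triples for every sentence a pattern recognizes."""
    triples = []
    for sentence in (s.strip() for s in text.split(".")):
        for pattern, predicate in PATTERNS:
            m = pattern.match(sentence)
            if m:
                triples.append((m.group(1), predicate, m.group(2)))
    return triples

print(extract_triples("Cyc is a project. Lenat created Cyc."))
```

Swapping the regexes for an LLM prompt that returns the same tuple structure is essentially what the LLM-based text-to-KG projects do; the hard part is normalizing entities and predicates so the graph stays coherent.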
Cyc + LLMs is vaguely related to more advanced "cognitive architectures" for AI, for instance see the world model in Davidad's architecture, which LLMs can be used to help build: https://www.lesswrong.com/posts/jRf4WENQnhssCb6mJ/davidad-s-...
I was one of the developers/knowledge engineers of the SpinPro™ Ultracentrifugation Expert System at Beckman Instruments, Inc. This was released in 1986, developed over about 2 years. This ran on an IBM PC (DOS)! It was a technical success, but not a commercial one. (The sales force was unfamiliar with promoting a software product, which had little impact on their commissions compared with selling multi-thousand-dollar equipment.) https://pubs.acs.org/doi/abs/10.1021/bk-1986-0306.ch023 (behind ACS paywall)
Our second Expert System was PepPro™, which designed procedures for the chemical synthesis of peptides (essentially very small proteins). This was completed and to be released in 1989, but Beckman discontinued their peptide synthesis instrument product line just two months before. This system was able to integrate end-user knowledge with the built-in domain knowledge. PepPro was recognized in the first AAAI Conference on Innovative Applications of Artificial Intelligence in 1989. https://www.aaai.org/Papers/IAAI/1989/IAAI89-010.pdf
Both of these were developed in Interlisp-D on Xerox 1108/1186 workstations, using an in-house expert system development environment, and deployed in Gold Hills Common Lisp for the PC.