zlacker

Cyc: History's Forgotten AI Project

submitted by iafish+(OP) on 2024-04-17 19:46:46 | 287 points 141 comments
[view article] [source]

NOTE: showing posts with links only.
1. rhodin+ce[view] [source] 2024-04-17 21:21:51
>>iafish+(OP)
Related: Stephen Wolfram's note from when Doug Lenat passed away [0]

[0] https://writings.stephenwolfram.com/2023/09/remembering-doug...

4. toisan+0g[view] [source] 2024-04-17 21:34:33
>>iafish+(OP)
I would love to see a Cyc 2.0 built for the age of LLMs. I think it could be very powerful, especially for dealing with hallucinations. I'd also love to see a causality engine built with LLMs and Cyc. I wrote some notes on this before ChatGPT came out: https://blog.jtoy.net/understanding-cyc-the-ai-database/
◧◩
6. observ+6h[view] [source] [discussion] 2024-04-17 21:41:16
>>mepian+pe
OWL and SPARQL inference engines that use RDF and DSMs: there are LISPy variants like Datalog still kicking around, but there are also some great, high-performance FOSS reasoner projects, like Stardog or Neo4j.

https://github.com/orgs/stardog-union/

Looks like "knowledge graph" and "semantic reasoner" are the search terms du jour; I haven't tracked these things since OpenCyc stopped being active.

Humans may not be able to effectively trudge through the creation of trillions of little rules and facts needed for an explicit and coherent expert world model, but LLMs definitely can be used for this.
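
For a flavor of what that stack looks like, here is a minimal sketch using Python's rdflib plus the owlrl reasoner; the ex: namespace and the two triples are made up for illustration, and in practice an LLM (or a human) would be the thing supplying the facts:

    # pip install rdflib owlrl
    from rdflib import Graph, Namespace, RDF, RDFS
    import owlrl

    EX = Namespace("http://example.org/")        # throwaway namespace for the demo
    g = Graph()
    g.add((EX.Cat, RDFS.subClassOf, EX.Mammal))  # "every cat is a mammal"
    g.add((EX.tom, RDF.type, EX.Cat))            # "tom is a cat"

    # Materialize the RDFS entailments (adds 'tom is a mammal', among others)
    owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

    # SPARQL now sees the inferred fact, not just the asserted ones
    for row in g.query("SELECT ?x WHERE { ?x a <http://example.org/Mammal> }"):
        print(row[0])   # -> http://example.org/tom

The reasoning is the easy part; the trillions of coherent triples are the hard part.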

7. Rochus+Ri[view] [source] 2024-04-17 21:55:48
>>iafish+(OP)
Interesting article, thanks.

> Perhaps their time will come again.

That seems quite likely once the hype around LLMs has calmed down. I hope that Cyc's data will still be available then, ideally open-source.

> https://muse.jhu.edu/pub/87/article/853382/pdf

Unfortunately paywalled; does anyone have a downloadable copy?

◧◩
9. zozbot+Jj[view] [source] [discussion] 2024-04-17 22:01:39
>>rhodin+ce
Discussed at >>37402925; see also >>37354000
15. avodon+5m[view] [source] 2024-04-17 22:20:35
>>iafish+(OP)
Related: https://blog.funcall.org//lisp/2024/03/22/eurisko-lives/
◧◩◪
17. mepian+wm[view] [source] [discussion] 2024-04-17 22:24:02
>>pfdiet+8m
Yes: https://cyc.com/archives/glossary/subl/

Also see: https://www.youtube.com/watch?v=cMMiaCtOzV0

20. mtrave+Ap[view] [source] 2024-04-17 22:45:49
>>iafish+(OP)
I worked on Cyc as a visiting student for a couple of summers and built some visualization tools to help people navigate the complex graph. But I was never quite sold on the project; some tangential learnings here: https://hyperphor.com/ammdi/alpha-ontologist
◧◩
29. tunesm+Hu[view] [source] [discussion] 2024-04-17 23:26:59
>>mepian+pe
At a much lower level, I've been having fun hacking away at my Concludia side project over time. It's purely at the proposition level and will eventually let people create their own arguments and contest others'. http://concludia.org/
◧◩◪◨
31. radomi+jy[view] [source] [discussion] 2024-04-17 23:55:04
>>tkgall+Zt
From time to time, I read articles on the boundary between neural nets and knowledge graphs, like the recent [1]. Sadly, no mention of Cyc.

My bet, judging mostly from my failed attempts at playing with OpenCyc around 2009, is that Cyc has always been too closed and too complex to tinker with. That doesn't play nicely with academic work. When people finish their PhDs and start working for OpenAI, they simply don't have Cyc in their toolbox.

[1] https://www.sciencedirect.com/science/article/pii/S089360802...

◧◩
32. breck+yy[view] [source] [discussion] 2024-04-17 23:56:30
>>mepian+pe
I tried to make something along these lines (https://truebase.treenotation.org/).

My approach, Cyc's, and others are fundamentally flawed for the same reason: there's a low-level reason why deep nets work and symbolic engines do so poorly.

◧◩
41. dmd+0B[view] [source] [discussion] 2024-04-18 00:16:31
>>Rochus+Ri
https://jumpshare.com/s/X5arOz0ld3AzBGQ47hWf
◧◩
43. thesz+SB[view] [source] [discussion] 2024-04-18 00:24:07
>>blueye+Fp
Lenat was able to produce superhuman-performing AI in the early 1980s [1].

[1] https://voidfarer.livejournal.com/623.html

You can label it "bad idea" but you can't bring LLMs back in time.

◧◩◪
56. mindcr+rT[view] [source] [discussion] 2024-04-18 03:40:30
>>gumby+yj
Yep. And it may be just a subset, but it's pretty much the answer to

"I wonder what is the closest thing to Cyc we have in the open source realm right now?".

See:

https://github.com/therohk/opencyc-kb

https://github.com/bovlb/opencyc

https://github.com/asanchez75/opencyc

Outside of that, you have the entire world of Semantic Web projects, especially things like UMBEL [1], SUMO [2], YAMATO [3], and other "upper ontologies" [4].

[1]: https://en.wikipedia.org/wiki/UMBEL

[2]: https://en.wikipedia.org/wiki/Suggested_Upper_Merged_Ontolog...

[3]: https://ceur-ws.org/Vol-2050/FOUST_paper_4.pdf

[4]: https://en.wikipedia.org/wiki/Upper_ontology

◧◩◪
57. jimmyS+NV[view] [source] [discussion] 2024-04-18 04:14:26
>>nextos+EI
There is MindAptiv, who have something about symbolics as a kind of machine-language interface. I think they went the other way, trying to do everything under the sun, but it's the last time I came across anything that reminded me of Cyc.

https://mindaptiv.com/intro-to-wantware/

◧◩◪◨
74. thesz+d81[view] [source] [discussion] 2024-04-18 06:56:40
>>goatlo+6Q
I guess it is funding. Compare the funding of Google and Meta to what was/is available to Cyc.

Cyc was able to produce an impact; I keep pointing to MathCraft [1], which as of 2017 did not have a rival in neural AI.

[1] https://en.wikipedia.org/wiki/Cyc#MathCraft

◧◩◪◨⬒
76. thesz+391[view] [source] [discussion] 2024-04-18 07:06:20
>>eru+wX
Usually, an LLM's output gets passed through beam search [1], which is as symbolic as one can get.

[1] https://www.width.ai/post/what-is-beam-search

It is even possible to get a 3-gram model to output better text predictions if you combine it with beam search.
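
To make the "symbolic" point concrete, here is a toy beam search in Python over a hand-written bigram table; the table, beam width, and sentences are invented for illustration, and an LLM or 3-gram model would just be a different source of next-token probabilities:

    import math

    # Toy next-token distribution P(next | last). Purely illustrative numbers.
    lm_probs = {
        "the": {"cat": 0.5, "dog": 0.4, "the": 0.1},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"ran": 0.6, "sat": 0.4},
        "sat": {"<eos>": 1.0},
        "ran": {"<eos>": 1.0},
    }

    def beam_search(start, width=2, max_len=4):
        beams = [(0.0, [start])]                  # (log-probability, sequence)
        for _ in range(max_len):
            candidates = []
            for logp, seq in beams:
                last = seq[-1]
                if last == "<eos>":               # finished sequences carry over
                    candidates.append((logp, seq))
                    continue
                for tok, p in lm_probs.get(last, {}).items():
                    candidates.append((logp + math.log(p), seq + [tok]))
            # The purely symbolic step: keep the `width` best-scoring hypotheses
            beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:width]
        return beams

    print(beam_search("the"))
    # roughly [(-1.05, ['the', 'cat', 'sat', '<eos>']),
    #          (-1.43, ['the', 'dog', 'ran', '<eos>'])]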

◧◩◪◨⬒
78. lispm+ya1[view] [source] [discussion] 2024-04-18 07:21:15
>>pfdiet+Ip
Maybe it's not "technical issues", but features and support? Allegro CL has a proven GUI toolkit, for example, and they have now moved it into the web browser.

FYI: here are the release notes of the recently released Allegro CL 11.0: https://franz.com/support/documentation/current/release-note...

IIRC, Cyc gets delivered on other platforms & languages (C, JVM, ... ?). It would be interesting to know what they use for deployment/delivery.

81. carlsb+6g1[view] [source] 2024-04-18 08:32:06
>>iafish+(OP)
The Cyc project proposed the idea of software "assistants": formally represented knowledge based on a shared ontology, plus reasoning systems that can draw on that knowledge, handle tasks, and anticipate the need to perform them. [1]

The lead author on [1] is Kathy Panton, who has no publications after that and zero internet presence as far as I can tell.

[1] Common Sense Reasoning – From Cyc to Intelligent Assistant https://iral.cs.umbc.edu/Pubs/FromCycToIntelligentAssistant-...

83. Silver+Pg1[view] [source] 2024-04-18 08:45:04
>>iafish+(OP)
I first heard about Cyc's creator Douglas Lenat a few months back when I watched an old talk by Richard Feynman.

https://youtu.be/ipRvjS7q1DI?si=fEU1zd6u79Oe4SgH&t=675

◧◩
85. dredmo+Ai1[view] [source] [discussion] 2024-04-18 09:07:45
>>astran+g71
Portland Pattern Repository?

<https://c2.com/ppr/>

<https://en.wikipedia.org/wiki/Portland_Pattern_Repository>

◧◩
86. dredmo+Ji1[view] [source] [discussion] 2024-04-18 09:09:05
>>Silver+Pg1
<https://yewtu.be/watch?v=ipRvjS7q1DI&t=675>
◧◩◪◨⬒⬓
88. eru+ak1[view] [source] [discussion] 2024-04-18 09:27:32
>>thesz+391
See >>40073039 for a discussion.
◧◩◪◨
98. genril+tL1[view] [source] [discussion] 2024-04-18 13:37:36
>>galaxy+0q1
Based on the article, it seems like Cyc had ways to deal with inconsistency. I don't know the details of how they did it, but paraconsistent logics [0] provide a general way to prevent arbitrary statements from being provable from an inconsistency.

[0] https://en.wikipedia.org/wiki/Paraconsistent_logic
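
For the curious: the classical rule that paraconsistent logics give up is "explosion" (ex falso quodlibet), i.e. that a single contradiction entails every statement q:

    p \wedge \neg p \vdash q    % for arbitrary q

Classically this is derived by disjunction introduction (p entails p ∨ q) followed by disjunctive syllogism with ¬p; paraconsistent systems typically block it by rejecting disjunctive syllogism, so a contradiction stays contained in its corner of the KB instead of making every query provable.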

◧◩
100. chris_+nU1[view] [source] [discussion] 2024-04-18 14:26:25
>>blackl+tD1
I started my career in 1985, building expert systems on Symbolics Lisp machines in KEE and ART.

Expert systems were so massively oversold... and it's not at all clear that any of the "super fantastic expert" systems ever did what was claimed of them.

We definitely found out that they were, in practice, extremely difficult to build and to get to do anything reasonable.

The original paper on Eurisko, for instance, mentioned how the author (and founder of Cyc!) Douglas Lenat, during a run, went ahead and just hand-inserted some knowledge/results of inferences (it's been a long while since I read the paper, sorry), asserting, "Well, it would have figured these things out eventually!"

Later on, he wrote a paper titled, "Why AM and Eurisko appear to work" [0].

0: https://aaai.org/papers/00236-aaai83-059-why-am-and-eurisko-...

◧◩
104. famous+m12[view] [source] [discussion] 2024-04-18 15:10:55
>>blackl+tD1
>Looks like such systems are good for generating marketing texts, but can not be used as diagnosticians by definition.

That's not true

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10425828/

>Why did all these (slice of) world model approaches dead?

Because they don't work

◧◩◪◨
106. mindcr+g32[view] [source] [discussion] 2024-04-18 15:22:58
>>ragebo+Z12
> Can I read anywhere about what you're working on?

Not yet. It's still early days.

> What other approaches exist?

Loosely speaking, I'd say this entire discussion falls into the general rubric of what people are calling "neuro-symbolic AI". Now within that there are a lot of different ways to try and combine different modalities. There are things like DeepProbLog, LogicTensorNetworks, etc.

For anybody who wants to learn more, consider starting with:

https://en.wikipedia.org/wiki/Neuro-symbolic_AI

and the videos from the previous two "Neurosymbolic Summer School" events:

https://neurosymbolic.github.io/nsss2023/

https://www.neurosymbolic.org/summerschool.html (2022)

◧◩
113. mindcr+5r2[view] [source] [discussion] 2024-04-18 17:35:03
>>blueye+Fp
> They are brittle in the face of change, and useless if they cannot represent the underlying changes of reality.

FWIW, KGs don't have to be brittle. Or at least they don't have to be as brittle as they've historically been. There are approaches (like PR-OWL [1]) to making graphs probabilistic, so that they assert subjective beliefs about statements instead of absolute statements. The strength of those beliefs can then increase or decrease in response to new evidence (per Bayes' theorem). Probably the biggest problem with this stuff is that it tends to be crazy computationally expensive.

Still, there's always the chance of an algorithmic breakthrough, or just hardware improvements, bringing some of this stuff into the realm of the practical.

[1]: https://www.pr-owl.org/
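
As a toy illustration of just the belief-update part (not PR-OWL itself, which uses multi-entity Bayesian networks; the example statement, prior, and likelihoods below are invented numbers):

    # Degree of belief in one KG edge, e.g. "AcmeCorp headquarteredIn Austin",
    # revised by Bayes' theorem as each new piece of evidence arrives.
    def bayes_update(belief, p_evidence_if_true, p_evidence_if_false):
        numerator = p_evidence_if_true * belief
        return numerator / (numerator + p_evidence_if_false * (1.0 - belief))

    belief = 0.6                                   # current subjective belief
    for strengths in [(0.9, 0.2), (0.7, 0.4), (0.3, 0.8)]:
        belief = bayes_update(belief, *strengths)
        print(round(belief, 3))                    # 0.871, 0.922, 0.816

The expensive part is doing this jointly over a whole graph of interdependent statements rather than one edge at a time.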

◧◩
115. mietek+Cs2[view] [source] [discussion] 2024-04-18 17:45:12
>>blackl+tD1
Looks like we can now finally experiment with EURISKO ourselves:

>>40070667

◧◩
117. Donald+zw2[view] [source] [discussion] 2024-04-18 18:07:27
>>astran+g71
It might have been one of these two projects

https://en.wikipedia.org/wiki/Open_Mind_Common_Sense

https://en.wikipedia.org/wiki/Mindpixel

The leaders of both these projects committed suicide.

◧◩
122. brendo+vt3[view] [source] [discussion] 2024-04-19 01:42:41
>>ragebo+K31
This is a significant part of my vision for Web 10! https://www.web10.ai/p/web-10-in-under-10-minutes

One of the immediate things I'm working on is a text-to-knowledge-graph system. Yohei (creator of BabyAGI) is also working on text-to-knowledge-graph extraction: https://twitter.com/yoheinakajima/status/1769019899245158648. LlamaIndex has a basic implementation.
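
For reference, a minimal sketch of the text-to-triples step, assuming the OpenAI Python SDK; the prompt, model name, and pipe-separated output format are my own placeholders rather than LlamaIndex's or Yohei's implementation:

    from openai import OpenAI   # any chat-style LLM client would do

    client = OpenAI()
    PROMPT = (
        "Extract (subject, predicate, object) triples from the text below.\n"
        "Return one triple per line as: subject | predicate | object\n\nText: "
    )

    def text_to_triples(text, model="gpt-4o-mini"):      # placeholder model name
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT + text}],
        )
        triples = []
        for line in resp.choices[0].message.content.splitlines():
            parts = [p.strip() for p in line.split("|")]
            if len(parts) == 3:                          # keep well-formed lines only
                triples.append(tuple(parts))
        return triples

    print(text_to_triples("Cyc was started by Doug Lenat at MCC in 1984."))
    # e.g. [('Cyc', 'started by', 'Doug Lenat'), ('Cyc', 'started at', 'MCC'), ...]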

This isn't quite connecting the system to an automated reasoner though. There is some research in this area, like: >>35735375

Cyc + LLMs is vaguely related to more advanced "cognitive architectures" for AI, for instance see the world model in Davidad's architecture, which LLMs can be used to help build: https://www.lesswrong.com/posts/jRf4WENQnhssCb6mJ/davidad-s-...

◧◩◪
130. Rochus+ek8[view] [source] [discussion] 2024-04-20 21:48:18
>>famous+vL
Here is a recent example: https://techcrunch.com/2024/01/17/deepminds-latest-ai-can-so..., and the paper: https://www.nature.com/articles/s41586-023-06747-5
◧◩◪
141. MattHe+p3i[view] [source] [discussion] 2024-04-23 23:43:57
>>chris_+nU1
Depends on your definition of "super fantastic expert" systems.

I was one of the developers/knowledge engineers of the SpinPro™ Ultracentrifugation Expert System at Beckman Instruments, Inc. It was released in 1986, after about two years of development, and ran on an IBM PC (DOS)! It was a technical success, but not a commercial one. (The sales force was unfamiliar with promoting a software product, which had little impact on their commissions compared with selling multi-thousand-dollar equipment.) https://pubs.acs.org/doi/abs/10.1021/bk-1986-0306.ch023 (behind ACS paywall)

Our second expert system was PepPro™, which designed procedures for the chemical synthesis of peptides (essentially very small proteins). It was completed and due to be released in 1989, but Beckman discontinued their peptide-synthesis instrument product line just two months before. This system was able to integrate end-user knowledge with the built-in domain knowledge. PepPro was recognized at the first AAAI Conference on Innovative Applications of Artificial Intelligence in 1989. https://www.aaai.org/Papers/IAAI/1989/IAAI89-010.pdf

Both of these were developed in Interlisp-D on Xerox 1108/1186 workstations, using an in-house expert system development environment, and deployed in Gold Hills Common Lisp for the PC.
