zlacker

Remembering Doug Lenat and his quest to capture the world with logic

submitted by andyjo+(OP) on 2023-09-06 09:23:08 | 182 points 75 comments
[view article] [source] [links] [go to bottom]
replies(13): >>richar+41 >>kensai+o9 >>specia+Ma >>Chaita+hc >>HarHar+ym >>alexpo+Ee1 >>ks2048+Ej1 >>jmj+Vo1 >>dang+Rv1 >>dekhn+Nx1 >>ansibl+vz1 >>dizzys+fx2 >>mycall+GP2
1. richar+41[view] [source] 2023-09-06 09:35:31
>>andyjo+(OP)
I have that issue of Scientific American somewhere; I didn't know Stephen had an article in it too. I'll have a reread of it.
2. kensai+o9[view] [source] 2023-09-06 11:07:51
>>andyjo+(OP)
This is very fascinating. Is there a review somewhere of Cyc's abilities compared to other systems?
replies(1): >>rjsw+Pd
3. specia+Ma[view] [source] 2023-09-06 11:25:45
>>andyjo+(OP)
Just perfect. So glad I read this. Thanks for sharing.

> In many ways the great quest of Doug Lenat’s life was an attempt to follow on directly from the work of Aristotle and Leibniz.

Such a wonderful, respectful retrospective of Lenat's ideas and work.

> I think Doug viewed CYC as some kind of formalized idealization of how he imagined human minds work: providing a framework into which a large collection of (fairly undifferentiated) knowledge about the world could be “poured”. At some level it was a very “pure AI” concept: set up a generic brain-like thing, then “it’ll just do the rest”. But Doug still felt that the thing had to operate according to logic, and that what was fed into it also had to consist of knowledge packaged up in the form of logic.

I've always wanted CYC, or something like it, to be correct. Like somehow it'd fulfill my need for the universe to be knowable, legible. If human reason & logic could be encoded, then maybe things could start to make sense, if only we try hard enough.

Alas.

Back when SemanticWeb was the hotness, I was a firm ontology partisan. After working on customers' use cases, and given enough time to work thru the stages of grief, I grudgingly accepted that the folksonomy worldview is probably true.

Since then, of course, the "fuzzy" strategies have prevailed. (Also, most of us have accepted humans aren't rational.)

To this day, statistics-based approaches make me uncomfortable, perhaps even anxious. My pragmatism-motivated holistic worldview is always running up against my reductionist impulses. Paradox in a nutshell.

Enough about me.

> Doug’s starting points were AI and logic, mine were ... computation writ large.

I do appreciate Wolfram placing their respective theories in the pantheon. It's a nice reminder of their lineages. So great.

I agree with Wolfram that encoding heuristics was an experiment that had to be done. Negative results are super important. I'm so, so glad Lenat (and crews) tried so hard.

And I hope the future holds some kind of synthesis of these strategies.

replies(5): >>zozbot+cc >>skissa+Ge >>cabala+Gw >>patrec+yW >>at_a_r+n71
◧◩
4. zozbot+cc[view] [source] [discussion] 2023-09-06 11:39:28
>>specia+Ma
> And I hope the future holds some kind of synthesis of these strategies.

My guess is that by June 19, 2024 we'll be able to take 3596.6 megabytes of descriptive text about President Abraham Lincoln and do something cool with it.

replies(1): >>specia+Sc
5. Chaita+hc[view] [source] 2023-09-06 11:39:55
>>andyjo+(OP)
Great read. Surprised to read Wolfram never actually got to use CYC. Anyone here who has and can talk about its capabilities?
replies(4): >>stakha+pj >>gumby+s01 >>nvm0n2+o31 >>lispm+Zt1
◧◩◪
6. specia+Sc[view] [source] [discussion] 2023-09-06 11:45:09
>>zozbot+cc
Heh.

I was more hoping OpenAI would incorporate inference engines to cure ChatGPT's "hallucinations". Such that it'd "know" bad sex isn't better than good sex, despite the logic.

PS- I haven't actually asked ChatGPT. I'm just repeating a cliché about the limits of logic wrt the real world.

◧◩
7. rjsw+Pd[view] [source] [discussion] 2023-09-06 11:54:29
>>kensai+o9
Maybe read a bit about AM and Eurisko first, that will give an idea of how Cyc was expected to get used.
replies(1): >>cabala+Ns
◧◩
8. skissa+Ge[view] [source] [discussion] 2023-09-06 12:00:30
>>specia+Ma
> And I hope the future holds some kind of synthesis of these strategies.

Recently I’ve been involved in discussions about using an LLM to generate JSON according to a schema, as in OpenAI’s function calling or Jsonformer. LLMs do okay for generating code in mainstream languages like SQL or Python, but what if you have some proprietary query language? Maybe have a JSON schema for the AST, have the LLM generate JSON conforming to that schema, then serialise the JSON to the proprietary query language syntax?

And it makes me think - what if one used an LLM to generate or evaluate assertions in a Cyc-style ontology language? And that might be a bridge between the logic/ontology approach and the statistical/neural approach.
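
A minimal sketch of that pipeline in Python, just to make the idea concrete; the AST schema, the sample "LLM output", and the target syntax are all invented for illustration:

    import json

    # JSON Schema (hypothetical) that the LLM is asked to conform to.
    AST_SCHEMA = {
        "type": "object",
        "required": ["select", "where"],
        "properties": {
            "select": {"type": "array", "items": {"type": "string"}},
            "where": {
                "type": "object",
                "required": ["field", "op", "value"],
                "properties": {
                    "field": {"type": "string"},
                    "op": {"enum": ["=", "!=", ">", "<"]},
                    "value": {},
                },
            },
        },
    }

    # Pretend this came back from the LLM (function calling / Jsonformer style).
    llm_output = json.loads(
        '{"select": ["name"], "where": {"field": "age", "op": ">", "value": 30}}'
    )

    try:
        import jsonschema  # optional: validate the LLM output against the schema
        jsonschema.validate(llm_output, AST_SCHEMA)
    except ImportError:
        pass

    def to_proprietary_query(ast: dict) -> str:
        """Serialise the validated AST into the (made-up) target query syntax."""
        cols = ", ".join(ast["select"])
        w = ast["where"]
        return f"FETCH {cols} MATCHING {w['field']} {w['op']} {json.dumps(w['value'])}"

    print(to_proprietary_query(llm_output))   # FETCH name MATCHING age > 30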

replies(3): >>jebark+qg >>nvm0n2+341 >>thepti+wf1
◧◩◪
9. jebark+qg[view] [source] [discussion] 2023-09-06 12:13:21
>>skissa+Ge
This is similar to what people are trying for mathematical theorem proving. Using LLMs to generate theorems that can be validated in Lean.
◧◩
10. stakha+pj[view] [source] [discussion] 2023-09-06 12:30:43
>>Chaita+hc
I briefly looked into it many moons ago when I was a Ph.D. student working in the area of computational semantics in 2006-10. This was already well past the heyday of CYC though.

The first stumbling block was that CYC wasn't openly available. Their research group was very insular, and they were very protective of their IP, hoping to pay for their work through licensing deals and industry or academic collaborations that could funnel money their way.

They had a subset called "OpenCYC" though, which they released more publicly in the hope of drawing more attention. I tried using that, but soon got frustrated with the software. The representation was in a CYC-specific language called "CycL" and the inference engine was CYC-specific as well and based on a weird description logic specifically invented for CYC. So you couldn't just hook up a first-order theorem prover or anything like that. And "description logic" is a polite term for what their software did. It seemed mostly designed as a workaround to the fact that open-ended inferencing of the kind they spoke of to motivate their work would have depended way too frequently on factoids of common sense knowledge that were missing from the knowledge base. I got frustrated with that software very quickly and eventually gave up.

This was a period of AI-winter, and people doing AI were very afraid to even use the term "AI" to describe what they were doing. People were instead saying they were doing "pattern processing with images" or "audio signal processing" or "natural language processing" or "automated theorem proving" or whatever. Any mention of "AI" made you look naive. But Lenat's group called their stuff "AI" and stuck to their guns, even at a time when that seemed a bit politically inept.

From what I gathered through hearsay, CYC were also doing things like taking a grant from the defense department, and suddenly a major proportion of the facts in the ontology were about military helicopters. But they still kept beating the drum about how they were codifying "common sense" knowledge, and, if only they could get enough "common sense" knowledge in there, they would break through a resistance level at some point, where they could have the AI program itself, i.e. use the existing facts to derive more facts by reading and understanding plain text.

replies(2): >>zozbot+7k >>Michae+fA
◧◩◪
11. zozbot+7k[view] [source] [discussion] 2023-09-06 12:33:23
>>stakha+pj
Doesn't description logic mostly boil down to multi-modal logic, which ought to be representable as a fragment of FOL (w/ quantifiers ranging over "possible worlds")?

Description logic isn't just found in Cyc, either; Semantic Web standards are based on it, for similar reasons - it's key to making general inference computationally tractable.
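
For reference, the "standard translation" that makes this embedding into FOL work looks roughly like this (a textbook-style sketch, with R as the role/accessibility relation and the concept names made up):

    % An ALC-style existential restriction:
    %   Parent \sqsubseteq \exists hasChild.Person   ("every parent has a child who is a person")
    % translates into first-order logic as
    \forall x \, \bigl( \mathrm{Parent}(x) \rightarrow \exists y \, ( \mathrm{hasChild}(x, y) \wedge \mathrm{Person}(y) ) \bigr)
    % mirroring the modal reading: \Diamond\varphi holds at world w iff \exists v \, ( R(w, v) \wedge \varphi(v) )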

replies(1): >>stakha+Rm
12. HarHar+ym[view] [source] 2023-09-06 12:46:50
>>andyjo+(OP)
I missed the news of Doug Lenat's passing. He died a few days ago on August 31st.

I'm old enough to have lived thru the hope but ultimate failure of Lenat's baby CYC. The CYC project was initiated in 1984, in the heyday of expert systems which had been successful in many domains. The idea of an expert system was to capture the knowledge and reasoning power of a subject matter expert in a system of declarative logic and rules.

CYC was going to be the ultimate expert system that captured human common sense knowledge about the world via a MASSIVE knowledge/rule set (initially estimated as a 1000 man-year project) of how everyday objects behaved. The hope was that through sheer scale and completeness it would be able to reason about the world in the same way as a human who had gained the same knowledge thru embodiment and interaction.

The CYC project continued for decades with a massive team of people encoding rules according to its own complex ontology, but ultimately never met its goals. In retrospect it seems the idea was doomed to failure from the beginning, but nonetheless it was an important project that needed to be tried. The problem with any expert system reasoning over a fixed knowledge set is that it's always going to be "brittle" - it may perform well for cases wholly within what it knows about, but then fail when asked to reason about things where common sense knowledge and associated extrapolation of behavior is required; CYC was hoping to avoid this via scale, by being so complete that there were no important knowledge gaps.

I have to wonder if LLM-based "AIs" like GPT-4 aren't in some ways very similar to CYC in that they are ultimately also giant expert systems, but with the twist that they learnt their knowledge, rules and representations/reasoning mechanisms from a training set rather than it having to be laboriously hand-entered. The end result is much the same though - an ultimately brittle system whose Achilles' heel is that it is based on a fixed set of knowledge rather than being able to learn from its own mistakes and interact with the domain it is attempting to gain knowledge over. It seems there's a similar hope to CYC of scaling these LLMs up to the point that they know everything and the brittleness disappears, but I suspect that ultimately that will prove a false hope and real AIs will need to learn through experimentation just as we do.

RIP Doug Lenat. A pioneer of the computer age and of artificial intelligence.

replies(8): >>TimPC+3s >>detour+Gx >>wpietr+9Z >>nvm0n2+b21 >>brundo+qa1 >>zozbot+Uf1 >>golol+zB1 >>dragon+HR2
◧◩◪◨
13. stakha+Rm[view] [source] [discussion] 2023-09-06 12:48:39
>>zozbot+7k
I'm not trying to be dismissive of description logics. (And I'm not dismissive of Lenat and his work, either). A lot of things can fall under that umbrella term. The history of description logic may in fact be just as old as post-syllogism first-order predicate calculus (the syllogism is, of course, far older, dating back to Aristotle). In the Principia Mathematica there's a quantifier that basically means "the", which is incidentally also the most common word in the English language, and that can be thought of as a description logic too. But the perspective of a Mathematician on this is very different from that of an AI systems "practitioner", and CYC seemed to belong more to the latter tradition.
◧◩
14. TimPC+3s[view] [source] [discussion] 2023-09-06 13:16:03
>>HarHar+ym
This failure on cases not wholly within what it knows about is a problem with lots of AI, not just expert systems. Neural nets mostly do awfully on problems outside their training data, assuming they can even generate an answer at all, which isn't always possible. If you train a neural net to order drinks from Starbucks and one of its orders fails with the server telling it "We are out of Soy Milk", chances are quite high its subsequent order will also contain Soy Milk.
◧◩◪
15. cabala+Ns[view] [source] [discussion] 2023-09-06 13:19:33
>>rjsw+Pd
My understanding of AM and Eurisko (having looked into them a decade or so ago) was that their source code hadn't been published, and that there was a dispute as to what their capabilities actually were and how much was exaggeration by Lenat.

I don't know if that's still the case. I do think that it would be worth creating systems that mix the ANN and GOFAI approaches to AI.

replies(1): >>earley+cu2
◧◩
16. cabala+Gw[view] [source] [discussion] 2023-09-06 13:38:51
>>specia+Ma
> Negative results are super important.

I agree, and this is often overlooked. Knowing what doesn't work (and why) is a massive help in searching for what does work.

◧◩
17. detour+Gx[view] [source] [discussion] 2023-09-06 13:42:50
>>HarHar+ym
I understand what you are saying. I'm able to see that brittleness as a feature. The brittleness must be expressed so that the user of the model understands the limits and why the brittleness exists.

My thinking is that the next generation of computing will rely on the human bridging that brittleness gap.

replies(1): >>zozbot+hA
◧◩◪
18. Michae+fA[view] [source] [discussion] 2023-09-06 13:54:58
>>stakha+pj
That's fascinating to read, thanks for sharing.

Did it ever do something genuinely surprising? That seemed beyond the state-of-the-art at the time?

replies(1): >>stakha+yJ
◧◩◪
19. zozbot+hA[view] [source] [discussion] 2023-09-06 13:55:00
>>detour+Gx
The thing about "expert systems" is that they're just glorified database query. (And yes, you can do also 'semantic' inference in a DB simply by adding some views. It's not generally done because it's quite computationally expensive even for very simple taxonomy structures, i.e. 'A implies B which implies C and foo is A, hence foo is C'.)

Database query is of course ubiquitous, but not generally thought of as 'AI'.
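
As a minimal illustration of that point (an in-memory SQLite database driven from Python; the table and names are made up):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE isa (sub TEXT, super TEXT);
        INSERT INTO isa VALUES ('foo', 'A'), ('A', 'B'), ('B', 'C');

        -- 'semantic' inference as a plain view: the transitive closure of isa
        CREATE VIEW isa_closure AS
        WITH RECURSIVE closure(sub, super) AS (
            SELECT sub, super FROM isa
            UNION
            SELECT c.sub, i.super FROM closure c JOIN isa i ON c.super = i.sub
        )
        SELECT sub, super FROM closure;
    """)

    # foo is A, A implies B, B implies C  =>  foo is also B and C
    print(con.execute("SELECT super FROM isa_closure WHERE sub = 'foo'").fetchall())
    # e.g. [('A',), ('B',), ('C',)]  (row order may vary)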

◧◩◪◨
20. stakha+yJ[view] [source] [discussion] 2023-09-06 14:34:22
>>Michae+fA
One of the people from Cyc gave a talk at the research group I was in once and mentioned an idea that kind of stuck with me.

...sorry, it takes some building-up to this: At the time, a lot of work in NLP was focused on building parsers that were trying to draw constituency trees from sentences, or extract syntactic dependency structures, but do so in a way that completely abstracted away from semantics, or looked at semantics as an extension of syntax, but not venturing into the territory of inference and common sense. So, a sentence like "Green ideas sleep furiously" (to borrow from Chomsky's example), was just as good as a research object to someone doing that kind of research as a sentence that actually makes sense and is comprised of words of the same lexical categories, like "Absolute power corrupts absolutely". -- I suspect, that line of research is still going strong, so the past tense may not be quite appropriate here. I'm using it, because I have been so out of the loop since leaving academia.

The major problem these folk are facing is an exploding combinatorial space of ambiguity at the grammatical level ("I saw a man with a telescope" can be bracketed "I saw (a man) with a telescope" or "I saw a (man with a telescope)") and the semantic level ("Every man loves a woman" can mean "For every man M there exists a woman W, such that M loves W" or it can mean "There exists a woman W, such that for every man M it is true that M loves W"). Even if you could completely solve the parsing problem, the ambiguity problem would remain.

Now this guy from the Cyc group said: Forget about parsing. If you give me the words that are in the sentence and you're not even giving me any clue about how the words were used in the sentence, I can already look into my ontology and tell you how the ontology would be most likely to connect the words.

Now, the sentence "The cat chased the dog" obviously means something different from "The dog chased the cat" despite using the same words. But in most text genres, you're likely to only encounter sentences that are saying things that are commonly held as true. So if you have an ontology that tells you what's commonly held as true, that gives you a statistical prior that enables you to understand language. In fact, you probably can't hope to understand language without it, and it's probably the key to "disambiguation".

This thought kind of flipped my worldview upside down. I had always kind of thought of it as this "pipelined architecture" where you first need to parse the text, before it even makes sense to think about how to solve the problems of what to do with the output from that parser. But that was unnecessarily limiting. You can look at the problem as a joint-decoding problem, and it may very well be the case that the lion's share of entropy comes from elsewhere, and it may be foolish to go around trying to build parsers, if you haven't yet hooked up your system to the information source that provides the lion's share of entropy, namely common-sense knowledge.

Now, I don't think that Cyc had gotten particularly close to solving that problem either, and, in fact, it was a bit uncharacteristic for a "Cycler" to talk about statistical priors at all, as their work hadn't even gotten into the territory of collecting those kinds of statistics. But, as a theoretical point, I thought it was very valid.
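
A toy version of that "ontology as statistical prior" idea (relation names and numbers invented, nothing from Cyc): given only the bag of words, an ontology-derived prior can already rank the candidate readings.

    # which subject/verb/object reading does the "ontology" consider typical?
    prior = {
        ("dog", "chase", "cat"): 0.9,   # dogs commonly chase cats
        ("cat", "chase", "dog"): 0.1,   # possible, but far less typical
    }

    def rank_readings(words):
        # enumerate the two candidate subject/object assignments over the nouns
        nouns = [w for w in words if w in ("dog", "cat")]
        candidates = [(nouns[0], "chase", nouns[1]), (nouns[1], "chase", nouns[0])]
        return sorted(candidates, key=lambda triple: prior.get(triple, 0.0), reverse=True)

    print(rank_readings(["the", "dog", "chased", "the", "cat"]))
    # [('dog', 'chase', 'cat'), ('cat', 'chase', 'dog')]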

◧◩
21. patrec+yW[view] [source] [discussion] 2023-09-06 15:28:19
>>specia+Ma
> I agree with Wolfram that encoding heuristics was an experiment that had to be done. Negative results are super important. I'm so, so glad Lenat (and crews) tried so hard.

The problem is that Doug Lenat trying very hard is only useful as a data point if you have some faith that Doug Lenat, by trying very hard, could have made something reasonably workable actually work.

Do you have a reason for thinking so? I'm genuinely curious: lots of people have positive reminiscences about Lenat, who seems to have been likeable and smart, but in my (admittedly somewhat shallow) attempts I always keep drawing blanks when looking for anything of substance he produced or some deeper insight he had (even before Cyc).

replies(3): >>mhewet+xn1 >>creer+lv1 >>specia+h92
◧◩
22. wpietr+9Z[view] [source] [discussion] 2023-09-06 15:39:45
>>HarHar+ym
> The end result is much the same though - an ultimately brittle system whose Achilles' heel is that it is based on a fixed set of knowledge

I think CYC is a great cautionary tale for LLMs in terms of hope vs reality, but I think it's worse than that. I don't think LLMs have knowledge; they just mimic the ways we're used to expressing knowledge.

replies(1): >>bionho+Fs2
◧◩
23. gumby+s01[view] [source] [discussion] 2023-09-06 15:45:16
>>Chaita+hc
Some of us who worked on Cyc commented in an earlier post about Doug's decease.
◧◩
24. nvm0n2+b21[view] [source] [discussion] 2023-09-06 15:52:09
>>HarHar+ym
Cyc was ahead of its time in a couple of ways:

1. Recognizing that AI was a scale problem.

2. Understanding that common sense was the core problem to solve.

Although you say Cyc couldn't do common sense reasoning, wasn't that actually a major feature they liked to advertise? IIRC a lot of Cyc demos were various forms of common sense reasoning.

I once played around with OpenCyc back when that was a thing. It was interesting because they'd had to solve a lot of problems that smaller more theoretical systems never did. One of their core features is called microtheories. The idea of a knowledge base is that it's internally consistent and thus can have formal logic be performed on it, but real world knowledge isn't like that. Microtheories let you encode contradictory knowledge about the world, in such a way that they can layer on top of the more consistent foundation.

A very major and fundamental problem with the Cyc approach was that the core algorithms don't scale well to large sizes. Microtheories were also a way to constrain the computational complexity. LLMs work partly because people found ways to make them scale using GPUs. There's no equivalent for Cyc's predicate logic algorithms.
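
A minimal sketch of how those microtheory-style contexts might be modeled (structure and names invented for illustration, not Cyc's actual machinery):

    from collections import defaultdict

    class KB:
        def __init__(self):
            self.mts = defaultdict(set)   # microtheory name -> set of assertions

        def assert_in(self, mt, fact):
            self.mts[mt].add(fact)

        def holds_in(self, mt, fact, genl=()):
            # a fact holds in mt if asserted there or in any more general mt it layers on
            return fact in self.mts[mt] or any(fact in self.mts[g] for g in genl)

    kb = KB()
    kb.assert_in("BaseKB", ("Dracula", "isa", "Vampire"))
    kb.assert_in("RealWorldMt", ("Vampire", "exists", False))   # contradicts the fiction...
    kb.assert_in("BramStokerMt", ("Vampire", "exists", True))   # ...but lives in a separate context

    print(kb.holds_in("BramStokerMt", ("Vampire", "exists", True)))                      # True
    print(kb.holds_in("RealWorldMt", ("Vampire", "exists", True)))                       # False
    print(kb.holds_in("BramStokerMt", ("Dracula", "isa", "Vampire"), genl=("BaseKB",)))  # True, inherited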

replies(1): >>HarHar+5p1
◧◩
25. nvm0n2+o31[view] [source] [discussion] 2023-09-06 15:56:41
>>Chaita+hc
I played with OpenCyc once. It was quite hard to use because you had to learn things like CycL and I couldn't get their natural language processing module to work.

The knowledge base was impressively huge but it also took a lot of work to learn because at the lower levels it was extremely abstract. A lot of the assertions in the KB were establishing very low level stuff that only made sense if you were really into abstract logic or philosophy.

They made bold claims on their website for what it could do, but I could never reproduce them. There was supposedly a more advanced version called ResearchCyc though, which I didn't have access to.

replies(1): >>creer+ft1
◧◩◪
26. nvm0n2+341[view] [source] [discussion] 2023-09-06 15:59:23
>>skissa+Ge
I had the same idea last year. But it's difficult. To encode knowledge in CycL required intensive training, mostly in how their KB encoded very abstract concepts and "obvious" knowledge. They used to boast about how they had more philosophy PhDs than anywhere else.

It's possible that an LLM that's been trained on enough examples, and that's smart enough, could actually do this. But I'm not sure how you'd review the output to know if it's right. The LLM doesn't have to be much faster than you to overwhelm your capacity to review the results.

◧◩
27. at_a_r+n71[view] [source] [discussion] 2023-09-06 16:13:56
>>specia+Ma
I really do believe (believe, rather than know) that some sort of synthesis is necessary, that there's some base facts and common sense that would make AI, as it stands, more reliable and trustworthy if it had some kind of touchstone, rather than the slipshod "human hands come with thumbs and fingers" output we have now. Something that can look back and say, "Typically, there's just one thumb and there's four fingers. Sometimes not, but that is rare."
replies(1): >>astran+Kz3
◧◩
28. brundo+qa1[view] [source] [discussion] 2023-09-06 16:27:26
>>HarHar+ym
> The CYC project continued for decades with a massive team of people encoding rules according to its own complex ontology, but ultimately never met its goals

It's still going! I agree it's become clear that it probably isn't the road to AGI, but it still employs people who are still encoding rules and making the inference engine faster, paying the bills mostly by doing contracts from companies that want someone to make sense of their data warehouses

replies(1): >>Taikon+zg1
29. alexpo+Ee1[view] [source] 2023-09-06 16:48:10
>>andyjo+(OP)
This old Google EDU talk was the first time I heard of Doug Lenat.

Sad to hear:

a. of his passing

b. that CYC didn't eventually meet its goals

https://www.youtube.com/watch?v=KTy601uiMcY

◧◩◪
30. thepti+wf1[view] [source] [discussion] 2023-09-06 16:52:29
>>skissa+Ge
This might work; you can view it as distilling the common knowledge out of the LLM.

You’d need to provide enough examples of CycL for it to learn the syntax.

But in my experience LLMs are not great at authoring code with no ground truth to test against. So the LLM might hallucinate some piece of common knowledge, and it could be hard to detect.

But at the highest level, this sounds exactly how the WolframAlpha ChatGPT plug-in works; the LLM knows how to call the plugin and can use this to generate graphs or compute numerical functions for domains where it cannot compute the result directly.

◧◩
31. zozbot+Uf1[view] [source] [discussion] 2023-09-06 16:54:11
>>HarHar+ym
> I missed the news of Doug Lenat's passing. He died a few days ago on August 31st.

Discussed >>37354000 (172 comments)

replies(1): >>HarHar+Tp1
◧◩◪
32. Taikon+zg1[view] [source] [discussion] 2023-09-06 16:57:06
>>brundo+qa1
It is? Are there success stories of companies using Cyc?

I always had the impression that Cycorp was sustained by government funding (especially military) -- and that, frankly, it was always premised more on what such software could theoretically do, rather than what it actually did.

replies(2): >>brundo+Hm1 >>isykt+AI4
33. ks2048+Ej1[view] [source] 2023-09-06 17:13:08
>>andyjo+(OP)
I wonder if CYC would have had more success if it was open and collaborative. Wikidata seems like a successful cousin. I know the goals are quite different - Wikidata doesn't really store "common sense" knowledge, but it seems any rule-based AI system would probably want to use Wikidata as a database of facts.
replies(3): >>zozbot+go1 >>creer+Iq1 >>brundo+Hx1
◧◩◪◨
34. brundo+Hm1[view] [source] [discussion] 2023-09-06 17:26:38
>>Taikon+zg1
They did primarily government contracts for a long time, but when I was there (2016-2020) it was all private contracts

The contracts at the time were mostly skunkworks/internal to the client companies, so not usually highly publicized. A couple examples are mentioned on their website: https://cyc.com/

◧◩◪
35. mhewet+xn1[view] [source] [discussion] 2023-09-06 17:30:29
>>patrec+yW
Lenat was my assigned advisor when I started my Masters at Stanford. I met with him once and he gave me some advice on classes. After that he was extremely difficult to schedule a meeting with (for any student, not just me). He didn't get tenure and left to join MCC after that year. I don't think I ever talked to him again after the first meeting.

He was extremely smart, charismatic, and a bit arrogant (but a well-founded arrogance). From other comments it sounds like he was pleasant to young people at Cycorp. I think his peers found him more annoying.

His great accomplishments were having a multi-decade vision of how to build an AI and actually keeping the vision alive for so long. You have to be charismatic and convincing to do that.

In the mid-80s I took his thesis and tried to implement AM on a more modern framework, but the thesis lacked so many details about how it worked that I was unable to even get started implementing anything.

BTW, if there are any historians out there I have a copy of Lenat's thesis with some extra pages including emailed messages from his thesis advisors (Minsky, McCarthy, et al) commenting on his work. I also have a number of AI papers from the early 1980s that might not be generally available.

replies(3): >>mietek+Kw1 >>eschat+9x1 >>patrec+Be2
◧◩
36. zozbot+go1[view] [source] [discussion] 2023-09-06 17:33:45
>>ks2048+Ej1
> wikidata doesn't really store "common sense" knowledge

They're actively working on this, with the goal of ultimately building a language-independent representation[0] of ordinary encyclopedic text. Much like a machine translation interlanguage, but something that would be mostly authored by humans, not auto-generated from existing natural-language text. See https://meta.wikimedia.org/wiki/Abstract_Wikipedia for more information.

[0] Of course, there are some very well-known pitfalls to this general idea: what's the true, canonical language-independent representation of nimium saepe valedīxit (roughly, "said farewell too often")? So this should probably be understood as mostly language-independent, enough to be practically useful.

37. jmj+Vo1[view] [source] 2023-09-06 17:36:37
>>andyjo+(OP)
I’m working on old fashioned A.I. for my PhD. I wrote Doug a few times, he was very kind and offered very good advice. I was hoping to work with him one day.

I’ll miss you Doug.

replies(1): >>nairbo+it1
◧◩◪
38. HarHar+5p1[view] [source] [discussion] 2023-09-06 17:37:11
>>nvm0n2+b21
> IIRC a lot of Cyc demos were various forms of common sense reasoning.

I never got to try it myself, but no doubt it worked fine in those cases where correct inferences could be made based on the knowledge/rules it had! Similarly GPT-4 is extremely impressive when it's not bullshitting!

The brittleness in either case (CYC or LLMs) comes mainly from incomplete knowledge (unknown unknowns), causing an invalid inference which the system has no way to detect and correct. The fix is a closed loop system where incorrect outputs (predictions) are detected - prompting exploration and learning.

I don't know if CYC tried to do it, but one potential speed up for a system of that nature might be chunking, which is a strategy that another GOFAI system, SOAR, used successfully. A bit like using memoization (remembering results of work already done) as a way to optimize dynamic programming solutions.
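
For what it's worth, the memoization analogy in plain Python (nothing Cyc- or SOAR-specific, just the general idea of caching derived results):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def derive(goal):
        # stand-in for an expensive inference over the knowledge base
        print(f"deriving {goal}...")
        return f"proof({goal})"

    derive("socrates-is-mortal")   # does the work
    derive("socrates-is-mortal")   # cache hit: no re-derivation, result is reused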

◧◩◪
39. HarHar+Tp1[view] [source] [discussion] 2023-09-06 17:40:29
>>zozbot+Uf1
Thanks!
◧◩
40. creer+Iq1[view] [source] [discussion] 2023-09-06 17:44:16
>>ks2048+Ej1
I looked into it years ago and adding to, say, opencyc, really did not seem simple. There was a lot of detail in the entity descriptions. Even reading them seemed to require an awful lot of background knowledge of the system.

It may have been possible to at least add lots of parallel items / instances. For example more authors and books and music works and performers, etc. Anyone here built a system around opencyc? Or cyc?

◧◩◪
41. creer+ft1[view] [source] [discussion] 2023-09-06 17:53:55
>>nvm0n2+o31
That was exactly my reaction to it: it seemed to require sooooo much background knowledge about the entire system to do anything. And because you were warned about issues with consistency, it seemed you were warned against just fudging some things: that would be a quick way to an application that couldn't work. The learning curve seemed daunting.
◧◩
42. nairbo+it1[view] [source] [discussion] 2023-09-06 17:54:07
>>jmj+Vo1
What are you working on?
◧◩
43. lispm+Zt1[view] [source] [discussion] 2023-09-06 17:57:05
>>Chaita+hc
Wolfram is able to write it in such a way that somehow it is mostly about him. :-(

There is some overlap between Cyc and his Alpha. Cyc was supposed to provide a lot of common sense knowledge, which would be reusable. When Expert Systems were a thing, one of the limiting factors was said to be the limited amount of broader knowledge of the world. Knowledge a human learns by experience, interacting with the world. This would involve a lot of facts about the world and also about all kinds of exceptions (example: a mother typically is older than her child, unless the child was adopted and the mother is younger). Cyc knows a lot of 'facts' and also many ways of logic reasoning plus many logic 'reasoning rules'.

Wolfram Alpha has a lot of knowledge about facts, often in some form of maths or somewhat structured data.

replies(1): >>dang+Zv1
◧◩◪
44. creer+lv1[view] [source] [discussion] 2023-09-06 18:03:26
>>patrec+yW
I also feel it's great and useful that Lenat and crew tried so hard. There is no doubt that a ton of work went into Cyc. It was a serious, well-funded, long-term project and competent people put effort into making it work. And there are some descriptions of how they went about it. And OpenCyc was released.

But some projects - or at least the breakthroughs they produce - are widely published as papers, which can be studied by outsiders. And that is not the case with Cyc. There are some reports and papers but really not many that I have found. And so it's not clear how solid or generalizable it is as a data point.

45. dang+Rv1[view] [source] 2023-09-06 18:05:23
>>andyjo+(OP)
Lenat's post about Wolfram Alpha, mentioned in the OP, was discussed (a bit) at the time:

Doug Lenat – I was positively impressed with Wolfram Alpha - >>510579 - March 2009 (17 comments)

And of course, recent and related:

Doug Lenat has died - >>37354000 - Sept 2023 (170 comments)

◧◩◪
46. dang+Zv1[view] [source] [discussion] 2023-09-06 18:05:53
>>lispm+Zt1
Ok, but let's avoid doing the mirror image thing where we make the thread about Wolfram doing that.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

replies(1): >>lispm+vy1
◧◩◪◨
47. mietek+Kw1[view] [source] [discussion] 2023-09-06 18:09:07
>>mhewet+xn1
I’d be quite interested to see these materials.

What’s your take on AM and EURISKO? Do you think they actually performed as mythologized? Do you think there’s any hope of recovering or reimplementing them?

replies(1): >>mhewet+I02
◧◩◪◨
48. eschat+9x1[view] [source] [discussion] 2023-09-06 18:11:20
>>mhewet+xn1
It’d be amazing to get those papers and letters digitized.
◧◩
49. brundo+Hx1[view] [source] [discussion] 2023-09-06 18:12:57
>>ks2048+Ej1
If I recall, Cyc did exactly that (imported data from WikiData)

Unfortunately there was much more to it than ingesting large volumes of structured entities

50. dekhn+Nx1[view] [source] 2023-09-06 18:13:32
>>andyjo+(OP)
I recommend reading Norvig's thinking about the various cultures.

https://static.googleusercontent.com/media/research.google.c... and https://norvig.com/chomsky.html

In short, Norvig concludes there are several conceptual approaches to ML/AI/Stats/Scientific analysis. One is "top down": teach the system some high level principles that correspond to known general concepts, and the other is "bottom up": determine the structure from the data itself and use that to generate general concepts. He observes that while the former is attractive to many, the latter has continuously produced more and better results with less effort.

I've seen this play out over and over. I've concluded that Norvig is right: empirically based probabilistic models are a cheaper, faster way to answer important engineering and scientific problems, even if they are possibly less satisfying intellectually. Cheap approximations are often far better than hard to find analytic solutions.

replies(3): >>golol+rB1 >>jyscao+cP1 >>jprete+SV3
◧◩◪◨
51. lispm+vy1[view] [source] [discussion] 2023-09-06 18:16:00
>>dang+Zv1
Well, it's a disappointing and shallow read, because the topic of the usefulness of combining Cyc and Alpha would have been interesting.
replies(1): >>dang+1w2
52. ansibl+vz1[view] [source] 2023-09-06 18:19:53
>>andyjo+(OP)
Symbolic knowledge representation and reasoning is a quite interesting field. I think the design choices of projects like wikidata.org and CYC severely limit the application of this though.

For example, on the wikidata help page, they talk about the height of Mount Everest:

https://www.wikidata.org/wiki/Help:About_data#Structuring_da...

    Earth (Q2) (item) → highest point (P610) (property) → Mount Everest (Q513) (value)
and

    Mount Everest (Q513) (item) → instance of (P31) (property) → mountain (Q8502) (value)

So that's all fine, but it misses a lot of context. These facts might be true for the real world, right now, but they won't always be true. Even in the not-so-distant past, the height of Everest was lower, because of tectonic plate movement. And maybe in the future it will go even higher due to tectonics, or maybe it will go lower due to erosion.

Context awareness gets even more important when talking about facts like "the iPhone is the best selling phone", for example. That might be true right now, but it certainly wasn't true back in 2006, before the phone was released.

Context also comes in many forms, which can be necessary for useful reasoning. For example, consider the question: "What would be the highest mountain in the world, if someone blew up the peak of Everest with a bomb?" This question isn't about the real world, right here and right now, it is about a hypothetical world that doesn't exist.

Going a little further afield, you may want to ask a question like "Who is the best captain of the Enterprise?". This might be about the actual US Navy CVN-65 ship named "Enterprise", the planned CVN-80, or the older ship CV-6 Enterprise which fought in WW2. Or maybe a relevant context to the question was "Star Trek", and we're in one of several fictional worlds instead, which would result in a completely different set of facts.

I think some ability to deal with uncertainty (as with Probabilistic Graphical Models) is also necessary to deal with practical applications of this technology. We may be dealing with a mix of "objective facts" (well, let's not get into a discussion about the philosophy of science) and other facts that we may not be so certain about.

It seems to me that successful symbolic reasoning system will be very, very large and complex. I'm not at all sure even how such knowledge should be represented, never mind the issue of trying to capture it all in digital form.
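
One possible shape for such context-qualified facts, sketched in Python (the model and the example statements are just illustrations, not how Wikidata or Cyc actually represent this):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class Statement:
        subject: str
        predicate: str
        value: object
        world: str = "actual"              # e.g. "actual", "StarTrek", "hypothetical-42"
        valid_from: Optional[int] = None   # year; a crude stand-in for temporal context
        valid_to: Optional[int] = None

    facts = [
        Statement("iPhone", "best-selling phone", True, valid_from=2011),
        Statement("iPhone", "best-selling phone", False, valid_to=2006),
        Statement("Enterprise", "captain", "James T. Kirk", world="StarTrek"),
    ]

    def holds(subject, predicate, world="actual", year=None):
        for f in facts:
            if (f.subject, f.predicate, f.world) != (subject, predicate, world):
                continue
            if year is not None and f.valid_from is not None and year < f.valid_from:
                continue
            if year is not None and f.valid_to is not None and year > f.valid_to:
                continue
            return f.value
        return None

    print(holds("iPhone", "best-selling phone", year=2015))    # True (in this toy data)
    print(holds("Enterprise", "captain", world="StarTrek"))    # 'James T. Kirk'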

◧◩
53. golol+rB1[view] [source] [discussion] 2023-09-06 18:27:14
>>dekhn+Nx1
This is the same concept as the bitter lesson, am I correct? I don't see a substantial difference yet.
replies(1): >>dekhn+VE1
◧◩
54. golol+zB1[view] [source] [discussion] 2023-09-06 18:28:17
>>HarHar+ym
Imo LLMs are absolutely the CYC dream come true. Common sense rules are learned from the data instead of hand written.
replies(1): >>mycall+3Q2
◧◩◪
55. dekhn+VE1[view] [source] [discussion] 2023-09-06 18:45:16
>>golol+rB1
I hadn't read that before, but yes. Sutton focuses mostly on "large amounts of compute", whereas I think his own employer has demonstrated that it's the combination of large amounts of compute, large amounts of data, and really clever probabilistic algorithms that really demonstrates the utility of the bitter lesson.

And speaking as a biologist for a moment: minds are irredeemably complex, and attempting to understand them with linear, first-order rules and logic is unlikely to be fruitful.

◧◩
56. jyscao+cP1[view] [source] [discussion] 2023-09-06 19:25:56
>>dekhn+Nx1
> One is "top down": teach the system some high level principles that correspond to known general concepts, and the other is "bottom up": determine the structure from the data itself and use that to generate general concepts.

This is the same pattern explaining why bottom-up economic systems, i.e. laissez-faire free markets, flawed as they are, work better than top-down systems like central planning.

replies(1): >>astran+By3
◧◩◪◨⬒
57. mhewet+I02[view] [source] [discussion] 2023-09-06 20:13:41
>>mietek+Kw1
One comment from his thesis advisors on AM was that they couldn't tell which part was performed by AM and which part was guided by Lenat. I think that comment holds for both AM and EURISKO. In those days everyone wanted a standalone AI. Now, people realize that cooperative human-AI systems are acceptable and even preferable in many ways.

I'll make sure my profile has an email address. I'm very busy the next few months but keep pinging me to remind me to get these materials online.

◧◩◪
58. specia+h92[view] [source] [discussion] 2023-09-06 20:55:53
>>patrec+yW
What's your take on Aristotelian logic?
◧◩◪◨
59. patrec+Be2[view] [source] [discussion] 2023-09-06 21:22:34
>>mhewet+xn1
Firstly, thanks for posting these reminiscences!

> His great accomplishments were having a multi-decade vision of how to build an AI and actually keeping the vision alive for so long. You have to be charismatic and convincing to do that.

Taking a big shot at doing something great can indeed be praiseworthy even if in retrospect it turns out to have been a dead end. For one thing because the discovery that a promising-seeming avenue is in fact non-viable is often also a very important discovery. Nonetheless, I don't think burning 2000 (highly skilled) man-years and untold millions on a multi-decade vision is automatically an accomplishment or praiseworthy. Quite the opposite, in fact, if it's all snake-oil -- you basically killed several lifetimes' worth of meaningful contributions to the world. I won't make the claim that Lenat was a snake-oil salesman rather than a legitimate visionary (I lack sufficient familiarity with Lenat's work for sweeping pronouncements).

However, one thing I will say is that I really strongly get the impression that many people here are so caught up with Lenat's charm and smarts and his appeal as some tragic romantic hero in a doomed quest for his white whale (and probably also as a convenient emblem for the final demise of the symbolic AI area) that the actual substance of his work seems to take on a secondary role. That seems a shame, especially if one is still trying to draw conclusions about what the apparent failure of his vision actually signifies.

◧◩◪
60. bionho+Fs2[view] [source] [discussion] 2023-09-06 22:37:43
>>wpietr+9Z
B.F. Skinner wants to know, “What is the difference?”
replies(1): >>wpietr+203
◧◩◪◨
61. earley+cu2[view] [source] [discussion] 2023-09-06 22:47:23
>>cabala+Ns
Had to go digging . . . Kenneth Haase did a PhD (1990) where he builds an exploratory system called Cyrano and does a substantial review/analysis of AM (& Eurisko)

https://dspace.mit.edu/handle/1721.1/14257?show=full

It's quite interesting.

replies(1): >>cabala+bH9
◧◩◪◨⬒
62. dang+1w2[view] [source] [discussion] 2023-09-06 23:01:57
>>lispm+vy1
Wolfram writes good historical articles. One just needs to put on some glasses that filter out the annoyance part of the spectrum.
63. dizzys+fx2[view] [source] 2023-09-06 23:12:43
>>andyjo+(OP)
I worked at Cycorp as a contractor for a few months. Lenat was, without a doubt, the smartest person I ever met in my life. There are levels of scary smart... his intelligence was downright frightening.

I'm sad to see so many call his work a failure. His work was anything but a failure. I think this perception comes down to the fact that the company worked on many things that aren't public, and he didn't do much publicity or self-promotion.

64. mycall+GP2[view] [source] 2023-09-07 02:00:51
>>andyjo+(OP)
I wonder if there is a way to mix ResearchCyc graph with a high feature count vector database so a transformer has a directed acyclic graph instead of requiring gradient descent. I'm not an expert so that might be gibberish.
◧◩◪
65. mycall+3Q2[view] [source] [discussion] 2023-09-07 02:03:27
>>golol+zB1
The main difference is that LLMs are a black box while CYC is an explicit and thoughtful map of reason and relations. I would love to see the two mix.
replies(1): >>HarHar+vO3
◧◩
66. dragon+HR2[view] [source] [discussion] 2023-09-07 02:18:51
>>HarHar+ym
> The CYC project continued for decades with a massive team of people encoding rules according to its own complex ontology, but ultimately never met its goals.

Cyc has been a commercial project for a long time and is still alive. The more limited Open and Research distributions have been discontinued, though.

replies(1): >>HarHar+od5
◧◩◪◨
67. wpietr+203[view] [source] [discussion] 2023-09-07 03:35:08
>>bionho+Fs2
From Skinner's perspective, given his lack of interest in internal states, it would be seen in responses that are coherent despite variations in stimuli.

For example, I could have a hundred different people ask you where you're from. Ask you in many different ways with many different setups, affects, hints. For most people, there will be consistencies across the answers that correspond to what we might call "fact", or at least "belief". But the LLMs, being fancy autocomplete, will produce things that are only textually plausible, showing much shallower consistency, and more relationship with their prompts.

And that's just in the question and answer space. It becomes even more obvious when we do things that involve real-world objects, physical behavior, etc.

◧◩◪
68. astran+By3[view] [source] [discussion] 2023-09-07 09:34:21
>>jyscao+cP1
They don't work for a more specific reason than that; a central planning system that was "bottom up" (asking everyone what they want rather than dictating it) couldn't work either, because people aren't capable of expressing their preferences in a way you can calculate.

How much iron does a steel mill need this year? Well, that depends on how many customers they'll get, which depends on what price they sell steel at.

https://en.wikipedia.org/wiki/Economic_calculation_problem

◧◩◪
69. astran+Kz3[view] [source] [discussion] 2023-09-07 09:44:48
>>at_a_r+n71
You can solve that one with ControlNet already.
◧◩◪◨
70. HarHar+vO3[view] [source] [discussion] 2023-09-07 11:51:20
>>mycall+3Q2
There's an interesting (& long!) recent Lex Fridman interview with Doug Lenat here:

https://www.youtube.com/watch?v=3wMKoSRbGVs

Lenat thought CYC and neural nets could be complementary, with neural nets providing right brain/fast thinking capability, and CYC left brain/slow (analytic/reflective) thinking capability.

It's odd to see Lenat discuss CYC the way he does - as if 40 years on everything was still going well despite it having dropped off the public radar twenty years ago.

◧◩
71. jprete+SV3[view] [source] [discussion] 2023-09-07 12:39:14
>>dekhn+Nx1
Empirically it's worked out this way.

It's true that it's less satisfying and less attractive, but these subjective adjectives are based on relevant objective truths, namely that LLMs are difficult or impossible to analyze from the outside, and at a coarse level they're the knowledge equivalent of pathological functions. Calling them "intelligent" is to privilege a very limited definition of the word, while ignoring all of the other things that we normally associate with it.

I don't want us to make an AGI or anything like it for both humanist and economic reasons, but if we make one, I think it's very likely that it has to have more internal structure than do LLMs, even if we do not explicitly force a given structure to be there.

(I am not an expert.)

◧◩◪◨
72. isykt+AI4[view] [source] [discussion] 2023-09-07 16:07:00
>>Taikon+zg1
The user called catpolice worked there. Here’s a comment they made a few years back.

>>21783828

replies(1): >>Taikon+OW8
◧◩◪
73. HarHar+od5[view] [source] [discussion] 2023-09-07 18:00:01
>>dragon+HR2
I'll admit it was news to me that Cycorp is still around! There's an interesting thread here that provides some insight from former/current employees into what they are actually doing nowadays.

>>21781597

There's also a lengthy Lex Fridman interview with Doug Lenat, from just a year ago, here:

https://www.youtube.com/watch?v=3wMKoSRbGVs

It seems as if the "common sense expert system" foundation of CYC (the mostly unstated common knowledge behind all human communication) was basically completed, but what has failed to materialize is any higher-level comprehensive knowledge base and reasoning system (i.e. some form of AGI) built on top of this.

It's not clear from the outside whether anyone working at Cycorp still really believes there is a CYC-based path to AGI, but regardless it seems not to be something that's really being funded and worked on, and 40 years on it's probably fair to say it's not going to happen. It seems that Cycorp stays alive by selling the hype and winning contracts to develop domain-specific expert systems, based on the CYC methodology and toolset, that really have little reliance on the "common sense" foundations they are nominally built on top of.

◧◩◪◨⬒
74. Taikon+OW8[view] [source] [discussion] 2023-09-08 18:19:57
>>isykt+AI4
Thank you, this is really interesting.
◧◩◪◨⬒
75. cabala+bH9[view] [source] [discussion] 2023-09-08 22:39:03
>>earley+cu2
It may well be, but when I try to download the pdf it fails.
[go to top]