zlacker

[parent] [thread] 17 comments
1. specia+(OP)[view] [source] 2023-09-06 11:25:45
Just perfect. So glad I read this. Thanks for sharing.

> In many ways the great quest of Doug Lenat’s life was an attempt to follow on directly from the work of Aristotle and Leibniz.

Such a wonderful, respectful retrospective of Lenat's ideas and work.

> I think Doug viewed CYC as some kind of formalized idealization of how he imagined human minds work: providing a framework into which a large collection of (fairly undifferentiated) knowledge about the world could be “poured”. At some level it was a very “pure AI” concept: set up a generic brain-like thing, then “it’ll just do the rest”. But Doug still felt that the thing had to operate according to logic, and that what was fed into it also had to consist of knowledge packaged up in the form of logic.

I've always wanted CYC, or something like it, to be correct. Like somehow it'd fulfill my need for the universe to be knowable, legible. If human reason & logic could be encoded, then maybe things could start to make sense, if only we try hard enough.

Alas.

Back when the Semantic Web was the hotness, I was a firm ontology partisan. After working on customers' use cases, and given enough time to work through the stages of grief, I grudgingly accepted that the folksonomy worldview is probably true.

Since then, of course, the "fuzzy" strategies have prevailed. (Also, most of us have accepted humans aren't rational.)

To this day, statistics-based approaches make me uncomfortable, perhaps even anxious. My pragmatism-motivated holistic worldview is always running up against my reductionist impulses. Paradox in a nutshell.

Enough about me.

> Doug’s starting points were AI and logic, mine were ... computation writ large.

I do appreciate Wolfram placing their respective theories in the pantheon. It's a nice reminder of their lineages. So great.

I agree with Wolfram that encoding heuristics was an experiment that had to be done. Negative results are super important. I'm so, so glad Lenat (and crews) tried so hard.

And I hope the future holds some kind of synthesis of these strategies.

replies(5): >>zozbot+q1 >>skissa+U3 >>cabala+Ul >>patrec+ML >>at_a_r+BW
2. zozbot+q1[view] [source] 2023-09-06 11:39:28
>>specia+(OP)
> And I hope the future holds some kind of synthesis of these strategies.

My guess is that by June 19, 2024 we'll be able to take 3596.6 megabytes of descriptive text about President Abraham Lincoln and do something cool with it.

replies(1): >>specia+62
3. specia+62[view] [source] [discussion] 2023-09-06 11:45:09
>>zozbot+q1
Heh.

I was more hoping OpenAI would incorporate inference engines to cure ChatGPT's "hallucinations", such that it'd "know" bad sex isn't better than good sex, despite the logic.

PS- I haven't actually asked ChatGPT. I'm just repeating a cliché about the limits of logic wrt the real world.

4. skissa+U3[view] [source] 2023-09-06 12:00:30
>>specia+(OP)
> And I hope the future holds some kind of synthesis of these strategies.

Recently I’ve been involved in discussions about using an LLM to generate JSON according to a schema, as in OpenAI’s function calling or Jsonformer. LLMs do okay at generating code in mainstream languages like SQL or Python, but what if you have some proprietary query language? Maybe have a JSON schema for the AST, have the LLM generate JSON conforming to that schema, then serialise the JSON to the proprietary query language syntax?
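
Concretely, something like this sketch, where the AST schema and the FETCH/OUT-OF target syntax are invented stand-ins for a real proprietary language:

    # Toy sketch: constrain the LLM (via function calling or Jsonformer)
    # to emit JSON valid under this schema, then serialise it. The schema
    # and the target syntax are both made up for illustration.
    AST_SCHEMA = {
        "type": "object",
        "required": ["select", "from"],
        "properties": {
            "select": {"type": "array", "items": {"type": "string"}},
            "from": {"type": "string"},
            "where": {
                "type": "object",
                "required": ["field", "op", "value"],
                "properties": {
                    "field": {"type": "string"},
                    "op": {"enum": ["=", "!=", ">", "<"]},
                    "value": {"type": ["string", "number"]},
                },
            },
        },
    }

    def serialise(ast: dict) -> str:
        """JSON AST -> (hypothetical) proprietary query syntax."""
        query = f"FETCH {', '.join(ast['select'])} OUT-OF {ast['from']}"
        if "where" in ast:
            w = ast["where"]
            query += f" SUCH-THAT {w['field']} {w['op']} {w['value']}"
        return query

    # An AST the LLM might emit, and the query it serialises to:
    ast = {"select": ["name"], "from": "users",
           "where": {"field": "age", "op": ">", "value": 30}}
    print(serialise(ast))  # FETCH name OUT-OF users SUCH-THAT age > 30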

And it makes me think: what if one used an LLM to generate or evaluate assertions in a Cyc-style ontology language? That might be a bridge between the logic/ontology approach and the statistical/neural approach.

replies(3): >>jebark+E5 >>nvm0n2+hT >>thepti+K41
5. jebark+E5[view] [source] [discussion] 2023-09-06 12:13:21
>>skissa+U3
This is similar to what people are trying for mathematical theorem proving: using LLMs to generate theorems and proofs that can be validated in Lean.
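
As a toy example of what that loop buys you - the LLM proposes a statement and proof, and Lean's kernel mechanically accepts or rejects it, so soundness doesn't need human review:

    -- A candidate an LLM might emit; the kernel validates it.
    -- (Toy example in core Lean 4, not from any actual system.)
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b
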
6. cabala+Ul[view] [source] 2023-09-06 13:38:51
>>specia+(OP)
> Negative results are super important.

I agree, and this is often overlooked. Knowing what doesn't work (and why) is a massive help in searching for what does work.

7. patrec+ML[view] [source] 2023-09-06 15:28:19
>>specia+(OP)
> I agree with Wolfram that encoding heuristics was an experiment that had to be done. Negative results are super important. I'm so, so glad Lenat (and crews) tried so hard.

The problem is that Doug Lenat trying very hard is only useful as a data point if you have some faith that Doug Lenat, by trying very hard, could have made a reasonably workable idea actually work.

Do you have a reason for thinking so? I'm genuinely curious: lots of people have positive reminiscences about Lenat, who seems to have been likeable and smart, but in my (admittedly somewhat shallow) attempts I always keep drawing blanks when looking for anything of substance he produced or some deeper insight he had (even before Cyc).

replies(3): >>mhewet+Lc1 >>creer+zk1 >>specia+vY1
8. nvm0n2+hT[view] [source] [discussion] 2023-09-06 15:59:23
>>skissa+U3
I had the same idea last year. But it's difficult. To encode knowledge in CycL required intensive training, mostly in how their KB encoded very abstract concepts and "obvious" knowledge. They used to boast about how they had more philosophy PhDs than anywhere else.

It's possible that an LLM that's been trained on enough examples, and that's smart enough, could actually do this. But I'm not sure how you'd review the output to know if it's right. The LLM doesn't have to be much faster than you to overwhelm your capacity to review the results.
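
You could at least automate the mechanical half of the review. A sketch (everything here is a made-up miniature; real CycL review would need far more): reject emitted assertions that use unknown constants or wrong arities, so humans only judge whether the well-formed ones are true.

    # Pre-screen LLM-emitted assertions against a known vocabulary.
    # This catches malformed output only; a well-formed falsehood
    # still needs a human (or the KB's own consistency checks).
    KNOWN_PREDICATES = {"isa": 2, "genls": 2}   # predicate -> arity
    KNOWN_CONSTANTS = {"Dog", "Mammal", "Fido"}

    def prescreen(assertion: str) -> bool:
        pred, *args = assertion.strip("() \n").split()
        return (KNOWN_PREDICATES.get(pred) == len(args)
                and all(a in KNOWN_CONSTANTS for a in args))

    print(prescreen("(isa Fido Dog)"))       # True:  well-formed
    print(prescreen("(isa Fido)"))           # False: wrong arity
    print(prescreen("(genls Dog Unicorn)"))  # False: unknown constant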

9. at_a_r+BW[view] [source] 2023-09-06 16:13:56
>>specia+(OP)
I really do believe (believe, rather than know) that some sort of synthesis is necessary: that there are some base facts and common sense that would make AI, as it stands, more reliable and trustworthy if it had some kind of touchstone, rather than the slipshod "human hands come with thumbs and fingers" output we have now. Something that can look back and say, "Typically, there's just one thumb and four fingers. Sometimes not, but that is rare."
replies(1): >>astran+Yo3
10. thepti+K41[view] [source] [discussion] 2023-09-06 16:52:29
>>skissa+U3
This might work; you can view it as distilling the common knowledge out of the LLM.

You’d need to provide enough examples of CycL for it to learn the syntax.

But in my experience LLMs are not great at authoring code with no ground truth to test against. So the LLM might hallucinate some piece of common knowledge, and it could be hard to detect.

But at the highest level, this sounds exactly like how the WolframAlpha ChatGPT plug-in works; the LLM knows how to call the plugin and can use it to generate graphs or compute numerical functions for domains where it cannot compute the result directly.
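
The control flow, roughly (call_llm and wolfram_query below are canned stand-ins, not the real APIs): the LLM emits a structured tool call instead of guessing at the math, the host executes it, and the result goes back into the prompt.

    import json

    def call_llm(prompt: str) -> str:        # hypothetical model stub
        if "Tool result" in prompt:
            return "The integral evaluates to 2."
        return json.dumps({"tool_input": "integrate sin(x) from 0 to pi"})

    def wolfram_query(expr: str) -> str:     # hypothetical plugin stub
        return "2"

    def answer(question: str) -> str:
        plan = call_llm(question)
        try:
            tool_input = json.loads(plan)["tool_input"]
        except (json.JSONDecodeError, KeyError):
            return plan                      # model answered directly
        result = wolfram_query(tool_input)   # exact math happens here
        return call_llm(f"{question}\nTool result: {result}")

    print(answer("What is the integral of sin(x) from 0 to pi?"))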

11. mhewet+Lc1[view] [source] [discussion] 2023-09-06 17:30:29
>>patrec+ML
Lenat was my assigned advisor when I started my Masters at Stanford. I met with him once and he gave me some advice on classes. After that he was extremely difficult to schedule a meeting with (for any student, not just me). He didn't get tenure and left to join MCC after that year. I don't think I ever talked to him again after the first meeting.

He was extremely smart, charismatic, and a bit arrogant (but a well-founded arrogance). From other comments it sounds like he was pleasant to young people at Cycorp. I think his peers found him more annoying.

His great accomplishments were having a multi-decade vision of how to build an AI and actually keeping the vision alive for so long. You have to be charismatic and convincing to do that.

In the mid-80s I took his thesis and tried to implement AM on a more modern framework, but the thesis lacked so many details about how it worked that I was unable to even get started implementing anything.

BTW, if there are any historians out there, I have a copy of Lenat's thesis with some extra pages, including emailed messages from his thesis advisors (Minsky, McCarthy, et al.) commenting on his work. I also have a number of AI papers from the early 1980s that might not be generally available.

replies(3): >>mietek+Yl1 >>eschat+nm1 >>patrec+P32
12. creer+zk1[view] [source] [discussion] 2023-09-06 18:03:26
>>patrec+ML
I also feel it's great and useful that Lenat and crew tried so hard. There is no doubt that a ton of work went into Cyc. It was a serious, well-funded, long-term project, and competent people put effort into making it work. And there are some descriptions of how they went about it. And OpenCyc was released.

But some projects - or at least the breakthroughs they produce - are extensively documented in published papers, which can be studied by outsiders. That is not the case with Cyc. There are some reports and papers, but really not many that I have found. And so it's not clear how solid or generalizable it is as a data point.

13. mietek+Yl1[view] [source] [discussion] 2023-09-06 18:09:07
>>mhewet+Lc1
I’d be quite interested to see these materials.

What’s your take on AM and EURISKO? Do you think they actually performed as mythologized? Do you think there’s any hope of recovering or reimplementing them?

replies(1): >>mhewet+WP1
14. eschat+nm1[view] [source] [discussion] 2023-09-06 18:11:20
>>mhewet+Lc1
It’d be amazing to get those papers and letters digitized.
15. mhewet+WP1[view] [source] [discussion] 2023-09-06 20:13:41
>>mietek+Yl1
One comment from his thesis advisors on AM was that they couldn't tell which part was performed by AM and which part was guided by Lenat. I think that comment holds for both AM and EURISKO. In those days everyone wanted a standalone AI. Now, people realize that cooperative human-AI systems are acceptable and even preferable in many ways.

I'll make sure my profile has an email address. I'm very busy the next few months but keep pinging me to remind me to get these materials online.

16. specia+vY1[view] [source] [discussion] 2023-09-06 20:55:53
>>patrec+ML
What's your take on Aristotelian logic?
17. patrec+P32[view] [source] [discussion] 2023-09-06 21:22:34
>>mhewet+Lc1
Firstly, thanks for posting these reminiscences!

> His great accomplishments were having a multi-decade vision of how to build an AI and actually keeping the vision alive for so long. You have to be charismatic and convincing to do that.

Taking a big shot at doing something great can indeed be praiseworthy even if in retrospect it turns out to have been a dead end, for one thing because the discovery that a promising-seeming avenue is in fact non-viable is often itself a very important discovery. Nonetheless, I don't think burning 2000 (highly skilled) man-years and untold millions on a multi-decade vision is automatically an accomplishment or praiseworthy. Quite the opposite, in fact, if it's all snake oil -- you basically killed several lifetimes' worth of meaningful contributions to the world. I won't make the claim that Lenat was a snake-oil salesman rather than a legitimate visionary (I lack sufficient familiarity with Lenat's work for sweeping pronouncements).

However, one thing I will say is that I really strongly get the impression that many people here are so caught up in Lenat's charm and smarts, and his appeal as some tragic romantic hero on a doomed quest for his white whale (and probably also as a convenient emblem for the final demise of the symbolic AI era), that the actual substance of his work seems to take on a secondary role. That seems a shame, especially if one is still trying to draw conclusions about what the apparent failure of his vision actually signifies.

18. astran+Yo3[view] [source] [discussion] 2023-09-07 09:44:48
>>at_a_r+BW
You can solve that one with ControlNet already.
[go to top]