zlacker

[parent] [thread] 7 comments
1. patrec+(OP)[view] [source] 2023-09-06 15:28:19
> I agree with Wolfram that encoding heuristics was an experiment that had to be done. Negative results are super important. I'm so, so glad Lenat (and crews) tried so hard.

The problem is that Doug Lenat trying very hard is only useful as a data point if you have some faith in Doug Lenat making something that is reasonably workable work by trying very hard.

Do you have a reason for thinking so? I'm genuinely curious: lots of people have positive reminiscences about Lenat, who seems to have been likeable and smart, but in my (admittedly somewhat shallow) attempts I keep drawing blanks when looking for anything of substance he produced or some deeper insight he had (even before Cyc).

replies(3): >>mhewet+Zq >>creer+Ny >>specia+Jc1
2. mhewet+Zq[view] [source] 2023-09-06 17:30:29
>>patrec+(OP)
Lenat was my assigned advisor when I started my Masters at Stanford. I met with him once and he gave me some advice on classes. After that he was extremely difficult to schedule a meeting with (for any student, not just me). He didn't get tenure and left to join MCC after that year. I don't think I ever talked to him again after the first meeting.

He was extremely smart, charismatic, and a bit arrogant (but a well-founded arrogance). From other comments it sounds like he was pleasant to young people at Cycorp. I think his peers found him more annoying.

His great accomplishments were having a multi-decade vision of how to build an AI and actually keeping the vision alive for so long. You have to be charismatic and convincing to do that.

In the mid-80s I took his thesis and tried to implement AM on a more modern framework, but the thesis lacked so many details about how it worked that I was unable to even get started implementing anything.

BTW, if there are any historians out there I have a copy of Lenat's thesis with some extra pages including emailed messages from his thesis advisors (Minsky, McCarthy, et al) commenting on his work. I also have a number of AI papers from the early 1980s that might not be generally available.

replies(3): >>mietek+cA >>eschat+BA >>patrec+3i1
3. creer+Ny[view] [source] 2023-09-06 18:03:26
>>patrec+(OP)
I also feel it's great and useful that Lenat and crew tried so hard. There is no doubt that a ton of work went into Cyc. It was a serious, well-funded, long-term project, and competent people put effort into making it work. And there are some descriptions of how they went about it. And OpenCyc was released.

But some projects - or at least the breakthroughs they produce - are thoroughly documented in published papers, which can be studied by outsiders. That is not the case for Cyc. There are some reports and papers, but really not many that I have found. And so it's not clear how solid or generalizable it is as a data point.

4. mietek+cA[view] [source] [discussion] 2023-09-06 18:09:07
>>mhewet+Zq
I’d be quite interested to see these materials.

What’s your take on AM and EURISKO? Do you think they actually performed as mythologized? Do you think there’s any hope of recovering or reimplementing them?

replies(1): >>mhewet+a41
5. eschat+BA[view] [source] [discussion] 2023-09-06 18:11:20
>>mhewet+Zq
It’d be amazing to get those papers and letters digitized.
6. mhewet+a41[view] [source] [discussion] 2023-09-06 20:13:41
>>mietek+cA
One comment from his thesis advisors on AM was that they couldn't tell which part was performed by AM and which part was guided by Lenat. I think that comment holds for both AM and EURISKO. In those days everyone wanted a standalone AI. Now, people realize that cooperative human-AI systems are acceptable and even preferable in many ways.

I'll make sure my profile has an email address. I'm very busy the next few months but keep pinging me to remind me to get these materials online.

7. specia+Jc1[view] [source] 2023-09-06 20:55:53
>>patrec+(OP)
What's your take on Aristotelian logic?
8. patrec+3i1[view] [source] [discussion] 2023-09-06 21:22:34
>>mhewet+Zq
Firstly, thanks for posting these reminiscences!

> His great accomplishments were having a multi-decade vision of how to build an AI and actually keeping the vision alive for so long. You have to be charismatic and convincing to do that.

Taking a big shot at doing something great can indeed be praiseworthy even if in retrospect it turns out to have been a dead end. For one thing, the discovery that a promising-seeming avenue is in fact non-viable is often itself a very important discovery. Nonetheless, I don't think burning 2000 (highly skilled) man-years and untold millions on a multi-decade vision is automatically an accomplishment or praiseworthy. Quite the opposite, in fact, if it's all snake oil -- you basically killed several lifetimes' worth of meaningful contributions to the world. I won't make the claim that Lenat was a snake-oil salesman rather than a legitimate visionary (I lack sufficient familiarity with Lenat's work for sweeping pronouncements).

However, one thing I will say is that I strongly get the impression that many people here are so caught up with Lenat's charm and smarts and his appeal as some tragic romantic hero in a doomed quest for his white whale (and probably also as a convenient emblem for the final demise of the symbolic AI era) that the actual substance of his work takes on a secondary role. That seems a shame, especially if one is still trying to draw conclusions about what the apparent failure of his vision actually signifies.
