> In many ways the great quest of Doug Lenat’s life was an attempt to follow on directly from the work of Aristotle and Leibniz.
Such a wonderful, respectful retrospective of Lenat's ideas and work.
> I think Doug viewed CYC as some kind of formalized idealization of how he imagined human minds work: providing a framework into which a large collection of (fairly undifferentiated) knowledge about the world could be “poured”. At some level it was a very “pure AI” concept: set up a generic brain-like thing, then “it’ll just do the rest”. But Doug still felt that the thing had to operate according to logic, and that what was fed into it also had to consist of knowledge packaged up in the form of logic.
I've always wanted CYC, or something like it, to be correct. Like somehow it'd fulfill my need for the universe to be knowable, legible. If human reason & logic could be encoded, then maybe things could start to make sense, if only we try hard enough.
Alas.
Back when the Semantic Web was the hotness, I was a firm ontology partisan. After working on customers' use cases, and given enough time to work through the stages of grief, I grudgingly accepted that the folksonomy worldview is probably right.
Since then, of course, the "fuzzy" strategies have prevailed. (Also, most of us have accepted humans aren't rational.)
To this day, statistics-based approaches make me uncomfortable, perhaps even anxious. My pragmatism-motivated holistic worldview is always running up against my reductionist impulses. A paradox in a nutshell.
Enough about me.
> Doug’s starting points were AI and logic, mine were ... computation writ large.
I do appreciate Wolfram placing their respective theories in the pantheon. It's a nice reminder of their lineages. So great.
I agree with Wolfram that encoding heuristics was an experiment that had to be done. Negative results are super important. I'm so, so glad Lenat (and crews) tried so hard.
And I hope the future holds some kind of synthesis of these strategies.
The problem is that Doug Lenat trying very hard is only useful as a data point if you have some faith that, had the approach been reasonably workable, Doug Lenat trying very hard would have made it work.
Do you have a reason for thinking so? I'm genuinely curious: lots of people have positive reminiscences about Lenat, who seems to have been likeable and smart, but in my (admittedly somewhat shallow) attempts I keep drawing blanks when looking for anything of substance he produced, or any deeper insight he had (even before Cyc).
He was extremely smart, charismatic, and a bit arrogant (but a well-founded arrogance). From other comments it sounds like he was pleasant to young people at Cycorp. I think his peers found him more annoying.
His great accomplishments were having a multi-decade vision of how to build an AI and actually keeping the vision alive for so long. You have to be charismatic and convincing to do that.
In the mid-80s I took his thesis and tried to implement AM on a more modern framework, but the thesis lacked so many details about how it worked that I was unable to even get started implementing anything.
BTW, if there are any historians out there I have a copy of Lenat's thesis with some extra pages including emailed messages from his thesis advisors (Minsky, McCarthy, et al) commenting on his work. I also have a number of AI papers from the early 1980s that might not be generally available.
> His great accomplishments were having a multi-decade vision of how to build an AI and actually keeping the vision alive for so long. You have to be charismatic and convincing to do that.
Taking a big shot at doing something great can indeed be praiseworthy even if in retrospect it turns out to have been a dead end, not least because the discovery that a promising-seeming avenue is in fact non-viable is often itself an important discovery. Nonetheless, I don't think burning 2000 (highly skilled) man-years and untold millions on a multi-decade vision is automatically an accomplishment or praiseworthy. Quite the opposite, in fact, if it's all snake oil -- you basically killed several lifetimes' worth of meaningful contributions to the world. I won't claim that Lenat was a snake-oil salesman rather than a legitimate visionary; I lack sufficient familiarity with Lenat's work for sweeping pronouncements.
However, one thing I will say is that I get the strong impression that many people here are so caught up with Lenat's charm and smarts, and with his appeal as a tragic romantic hero on a doomed quest for his white whale (and probably also as a convenient emblem for the final demise of the symbolic AI era), that the actual substance of his work takes on a secondary role. That seems a shame, especially if one is still trying to draw conclusions about what the apparent failure of his vision actually signifies.