The problem is that Doug Lenat trying very hard is only useful as a data point if you have some faith that Doug Lenat, by trying very hard, could make something reasonably workable actually work.
Do you have a reason for thinking so? I'm genuinely curious: lots of people have positive reminiscences about Lenat, who seems to have been likeable and smart, but in my (admittedly somewhat shallow) attempts I keep drawing blanks when looking for anything of substance he produced or any deeper insight he had (even before Cyc).
He was extremely smart, charismatic, and a bit arrogant (but a well-founded arrogance). From other comments it sounds like he was pleasant to young people at Cycorp. I think his peers found him more annoying.
His great accomplishments were having a multi-decade vision of how to build an AI and actually keeping the vision alive for so long. You have to be charismatic and convincing to do that.
In the mid-80s I took his thesis and tried to implement AM on a more modern framework, but the thesis lacked so many details about how it worked that I was unable to even get started implementing anything.
BTW, if there are any historians out there I have a copy of Lenat's thesis with some extra pages including emailed messages from his thesis advisors (Minsky, McCarthy, et al) commenting on his work. I also have a number of AI papers from the early 1980s that might not be generally available.
But some projects - or at least the breakthroughs they produce - are well documented in published papers, which outsiders can study. That is not the case with Cyc: there are some reports and papers, but really not many that I have found. So it's not clear how solid or generalizable a data point it is.
What’s your take on AM and EURISKO? Do you think they actually performed as mythologized? Do you think there’s any hope of recovering or reimplementing them?
I'll make sure my profile has an email address. I'm very busy the next few months but keep pinging me to remind me to get these materials online.
> His great accomplishments were having a multi-decade vision of how to build an AI and actually keeping the vision alive for so long. You have to be charismatic and convincing to do that.
Taking a big shot at doing something great can indeed be praiseworthy even if, in retrospect, it turns out to have been a dead end. For one thing, because the discovery that a promising-seeming avenue is in fact non-viable is often itself a very important discovery. Nonetheless, I don't think burning 2000 (highly skilled) man-years and untold millions on a multi-decade vision is automatically an accomplishment or praiseworthy. Quite the opposite, in fact, if it's all snake oil -- you have basically killed several lifetimes' worth of meaningful contributions to the world. I won't make the claim that Lenat was a snake-oil salesman rather than a legitimate visionary (I lack sufficient familiarity with Lenat's work for sweeping pronouncements).
However, one thing I will say is that I strongly get the impression that many people here are so caught up with Lenat's charm and smarts, and with his appeal as some tragic romantic hero on a doomed quest for his white whale (and probably also as a convenient emblem for the final demise of the symbolic AI era), that the actual substance of his work takes on a secondary role. That seems a shame, especially if one is still trying to draw conclusions about what the apparent failure of his vision actually signifies.