zlacker

[return to "Remembering Doug Lenat and his quest to capture the world with logic"]
1. HarHar+ym 2023-09-06 12:46:50
>>andyjo+(OP)
I missed the news of Doug Lenat's passing. He died a few days ago on August 31st.

I'm old enough to have lived through the hope and ultimate failure of Lenat's baby, CYC. The CYC project was initiated in 1984, in the heyday of expert systems, which had been successful in many domains. The idea of an expert system was to capture the knowledge and reasoning power of a subject matter expert in a system of declarative logic and rules.
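To make the idea concrete, here's a toy sketch in Python (purely illustrative, nothing like CYC's actual representation or any real expert-system shell): declarative facts plus if-then rules, run to a fixpoint by naive forward chaining.

    # Facts are (predicate, individual) pairs; a rule concludes a new
    # predicate for x whenever all its antecedent predicates hold for x.
    facts = {("penguin", "opus"), ("canary", "tweety")}

    rules = [
        ({"canary"}, "bird"),
        ({"penguin"}, "bird"),
        ({"bird"}, "has_feathers"),
    ]

    def forward_chain(facts, rules):
        """Fire rules repeatedly until no new facts are derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            individuals = {x for _, x in derived}
            for antecedents, consequent in rules:
                for x in individuals:
                    if all((a, x) in derived for a in antecedents) \
                            and (consequent, x) not in derived:
                        derived.add((consequent, x))
                        changed = True
        return derived

    kb = forward_chain(facts, rules)
    print(("has_feathers", "opus") in kb)  # True: penguin -> bird -> has_feathers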

CYC was going to be the ultimate expert system that captured human common sense knowledge about the world via a MASSIVE knowledge/rule set (initially estimated as a 1000 man-year project) of how everyday objects behaved. The hope was that through sheer scale and completeness it would be able to reason about the world in the same way as a human who had gained the same knowledge through embodiment and interaction.

The CYC project continued for decades with a massive team of people encoding rules according to its own complex ontology, but it ultimately never met its goals. In retrospect the idea seems to have been doomed from the beginning, but it was nonetheless an important project that needed to be tried. The problem with any expert system reasoning over a fixed knowledge set is that it's always going to be "brittle": it may perform well for cases wholly within what it knows about, but fail when asked to reason about things where common sense knowledge and associated extrapolation of behavior is required. CYC hoped to avoid this through scale, by being so complete that there were no important knowledge gaps.
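Continuing the toy sketch above (again, purely illustrative), the brittleness shows up in two distinct ways: a plausible rule fires where an unencoded exception applies, and anything outside the rule set is simply invisible.

    # Add the "obvious" common-sense rule that birds fly...
    rules.append(({"bird"}, "can_fly"))
    kb = forward_chain(facts, rules)

    # ...and an unencoded exception bites: the system is confidently wrong.
    print(("can_fly", "opus") in kb)       # True, but opus is a penguin

    # Anything never encoded is simply absent; the system cannot
    # distinguish "false" from "outside my knowledge".
    print(("lays_eggs", "tweety") in kb)   # False

Patching that means either hand-entering an exception rule for every edge case (the treadmill CYC was on) or reaching for fancier non-monotonic machinery, and either way the gaps only reveal themselves when you hit them.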

I have to wonder if LLM-based "AIs" like GPT-4 aren't in some ways very similar to CYC, in that they are ultimately also giant expert systems, but with the twist that they learnt their knowledge, rules and representations/reasoning mechanisms from a training set rather than having them laboriously hand-entered. The end result is much the same though: an ultimately brittle system whose Achilles' heel is that it is based on a fixed set of knowledge rather than being able to learn from its own mistakes and interact with the domain it is attempting to gain knowledge over. There seems to be a similar hope to CYC's of scaling these LLMs up to the point that they know everything and the brittleness disappears, but I suspect that will ultimately prove a false hope, and real AIs will need to learn through experimentation just as we do.

RIP Doug Lenat. A pioneer of the computer age and of artificial intelligence.

2. detour+Gx 2023-09-06 13:42:50
>>HarHar+ym
I understand what you are saying, but I'm able to see that brittleness as a feature. The brittleness must be made explicit so that the user of the model understands the limits and why they exist.

My thinking is that the next generation of computing will rely on the human bridging that brittleness gap.

3. zozbot+hA 2023-09-06 13:55:00
>>detour+Gx
The thing about "expert systems" is that they're just glorified database queries. (And yes, you can also do 'semantic' inference in a DB simply by adding some views. It's not generally done because it's quite computationally expensive even for very simple taxonomy structures, e.g. 'A implies B, which implies C, and foo is A, hence foo is C'.)
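As a rough sketch of what that looks like, using Python's built-in sqlite3 (the table and view names here are made up for illustration):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE implies (a TEXT, b TEXT);        -- direct taxonomy links
        CREATE TABLE is_a    (thing TEXT, cls TEXT);  -- ground facts

        INSERT INTO implies VALUES ('A', 'B'), ('B', 'C');
        INSERT INTO is_a    VALUES ('foo', 'A');

        -- transitive closure of 'implies', as a recursive view
        CREATE VIEW implies_tc AS
            WITH RECURSIVE tc(a, b) AS (
                SELECT a, b FROM implies
                UNION
                SELECT tc.a, implies.b
                FROM tc JOIN implies ON tc.b = implies.a
            )
            SELECT a, b FROM tc;

        -- inferred memberships: foo is A, A implies* C, hence foo is C
        CREATE VIEW is_a_inferred AS
            SELECT thing, cls FROM is_a
            UNION
            SELECT is_a.thing, implies_tc.b
            FROM is_a JOIN implies_tc ON is_a.cls = implies_tc.a;
    """)

    print(db.execute(
        "SELECT 1 FROM is_a_inferred WHERE thing = 'foo' AND cls = 'C'"
    ).fetchone())  # (1,): 'foo is C' falls out of plain SQL

Each extra level of the hierarchy is effectively another self-join under the hood, which is where the cost comes from.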

Database querying is of course ubiquitous, but not generally thought of as 'AI'.
