zlacker

[return to "Remembering Doug Lenat and his quest to capture the world with logic"]
1. HarHar+ym[view] [source] 2023-09-06 12:46:50
>>andyjo+(OP)
I missed the news of Doug Lenat's passing. He died a few days ago on August 31st.

I'm old enough to have lived through the hope, and ultimate failure, of Lenat's baby, CYC. The CYC project was initiated in 1984, in the heyday of expert systems, which had been successful in many domains. The idea of an expert system was to capture the knowledge and reasoning power of a subject matter expert in a system of declarative logic and rules.
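For anyone who never used one: the heart of such a system can be sketched in a few lines. This is a toy forward-chaining rule engine of my own devising (nothing like CYC's actual inference engine) - facts are plain strings, and a rule fires whenever all of its premises are already known, repeating until nothing new can be derived:

```python
def forward_chain(facts, rules):
    """Derive all facts reachable from the initial facts via the rules.

    facts: iterable of fact strings
    rules: list of (premises, conclusion) pairs
    """
    facts = set(facts)
    changed = True
    while changed:  # keep firing rules until a fixed point is reached
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative "common sense" rules (made up for this example)
rules = [
    (["bird"], "has_feathers"),
    (["bird", "not_penguin"], "can_fly"),
]
print(forward_chain(["bird", "not_penguin"], rules))
# derives both "has_feathers" and "can_fly"
```

The brittleness the rest of this comment talks about is visible even here: the engine can only ever conclude what some hand-written rule anticipates, and a fact just outside the rule set ("ostrich", say) derives nothing.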

CYC was going to be the ultimate expert system that captured human common sense knowledge about the world via a MASSIVE knowledge/rule set (initially estimated as a 1000 man-year project) of how everyday objects behaved. The hope was that through sheer scale and completeness it would be able to reason about the world in the same way as a human who had gained the same knowledge thru embodiment and interaction.

The CYC project continued for decades with a massive team of people encoding rules according to its own complex ontology, but ultimately never met its goals. In retrospect the idea seems to have been doomed from the beginning, but it was nonetheless an important project that needed to be tried. The problem with any expert system reasoning over a fixed knowledge set is that it's always going to be "brittle" - it may perform well for cases wholly within what it knows about, but fail when asked to reason about things that require common sense knowledge and the associated extrapolation of behavior. CYC hoped to avoid this via scale, by being so complete that there were no important knowledge gaps.

I have to wonder if LLM-based "AIs" like GPT-4 aren't in some ways very similar to CYC, in that they are ultimately also giant expert systems, but with the twist that they learned their knowledge, rules, and representations/reasoning mechanisms from a training set rather than having them laboriously hand-entered. The end result is much the same, though - an ultimately brittle system whose Achilles' heel is that it is based on a fixed set of knowledge rather than being able to learn from its own mistakes and interact with the domain it is attempting to gain knowledge over. There's a hope, similar to CYC's, of scaling these LLMs up to the point that they know everything and the brittleness disappears, but I suspect that will ultimately prove a false hope, and real AIs will need to learn through experimentation just as we do.

RIP Doug Lenat. A pioneer of the computer age and of artificial intelligence.

◧◩
2. brundo+qa1[view] [source] 2023-09-06 16:27:26
>>HarHar+ym
> The CYC project continued for decades with a massive team of people encoding rules according to its own complex ontology, but ultimately never met its goals

It's still going! I agree it has become clear that it probably isn't the road to AGI, but it still employs people who are encoding rules and making the inference engine faster, paying the bills mostly through contract work for companies that want someone to make sense of their data warehouses.

◧◩◪
3. Taikon+zg1[view] [source] 2023-09-06 16:57:06
>>brundo+qa1
It is? Are there success stories of companies using Cyc?

I always had the impression that Cycorp was sustained by government funding (especially military) -- and that, frankly, it was always premised more on what such software could theoretically do, rather than what it actually did.
