zlacker

[return to "Doug Lenat has died"]
1. symbol+nx[view] [source] 2023-09-01 20:53:57
>>snewma+(OP)
Doug Lenat, RIP. I worked at Cycorp in Austin from 2000-2006. Taken from us way too soon, Doug nonetheless had the opportunity to help our country advance military and intelligence community computer science research.

One day, the rapid advancement of AI via LLMs will slow down, and attention will return to logical reasoning and knowledge representation as championed by the Cyc Project, Cycorp, its cyclists, and Dr. Doug Lenat.

Why? If NN inference were really so fast, we would compile C programs with it instead of with the deductive logical inference that compilers already execute efficiently.

2. optima+SB[view] [source] 2023-09-01 21:26:35
>>symbol+nx
The best thing Cycorp could do now is open-source its accumulated database of logical relations so it can be ingested by some monster LLM.

What's the point of all that data collecting dust and accomplishing not much of anything?

3. adastr+bO[view] [source] 2023-09-01 23:08:54
>>optima+SB
It seems the direction of flow would be the opposite: LLMs are a great source of logical data for Cyc-like things. Distill your LLM into logical statements, then run your Cyc algorithms on them.
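
A minimal sketch of that pipeline in Python, assuming the LLM output has already been parsed into ground facts and one-variable rules. The predicates, and the parsing step itself, are made up:

  # Forward chaining over statements hypothetically distilled from
  # an LLM. Facts are ground atoms as strings; rules pair a set of
  # premise templates with a conclusion template. All invented.
  facts = {"mammal(whale)", "mammal(dog)"}
  rules = [({"mammal(X)"}, "warm_blooded(X)")]

  def forward_chain(facts, rules):
      derived = set(facts)
      changed = True
      while changed:
          changed = False
          constants = {f[f.index("(") + 1:-1] for f in derived}
          for premises, conclusion in rules:
              for c in constants:
                  if all(p.replace("X", c) in derived for p in premises):
                      new = conclusion.replace("X", c)
                      if new not in derived:
                          derived.add(new)
                          changed = True
      return derived

  print(forward_chain(facts, rules))
  # adds warm_blooded(whale) and warm_blooded(dog)
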
4. creer+V13[view] [source] 2023-09-02 21:24:22
>>adastr+bO
Statements distilled from an LLM would not be logically sound. That's (one of) the main issues with LLMs, and it would make logical inference over those statements impossible with current systems.

That's one of the principal features of Cyc. It's carefully built by humans to be (essentially) logically sound, so that inference can then be run over the fact base. Making that stuff logically sound made for a very detailed and fussy knowledge base, and that in turn made it difficult for mere civilians to expand or even understand. Cyc is NOT simple.
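
To make the failure mode concrete: in classical logic, one contradictory pair of statements makes the whole KB entail anything (the principle of explosion). A toy resolution step, with invented literals:

  # One resolution step over a contradictory pair of clauses.
  # Deriving the empty clause means the set is unsatisfiable,
  # and an unsatisfiable KB classically entails every query.
  def resolve(c1, c2):
      # Clauses are frozensets of (name, sign) literals.
      return [(c1 - {lit}) | (c2 - {(lit[0], not lit[1])})
              for lit in c1 if (lit[0], not lit[1]) in c2]

  a = frozenset({("edible(x)", True)})    # distilled: "x is edible"
  b = frozenset({("edible(x)", False)})   # distilled: "x is not edible"

  for r in resolve(a, b):
      print(set(r) or "empty clause: KB inconsistent, anything follows")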

5. varjag+K43[view] [source] 2023-09-02 21:48:32
>>creer+V13
Cyc is built to be locally consistent, but global KB consistency is an impossible task. Lenat stressed that over and over in his videos.
6. creer+vL3[view] [source] 2023-09-03 07:39:06
>>varjag+K43
My "essentially" was doing some work there. It's been years but I remember something like "within a context" as the general direction? Such as within an area of the ontology (because - by contrast to LLMs - there is one) or within a reasonning problem, that kind of thing.

By contrast, LLMs for now are embarrassing, with inconsistent nonsense inside a single answer, or an answer that doesn't recognize the context of the problem. Say, the working domain is a food label and the system doesn't recognize that, or doesn't stay within it.
