zlacker

[parent] [thread] 5 comments
1. adastr+(OP)[view] [source] 2023-09-01 23:08:54
It seems the direction of flow would be the opposite: LLMs are a great source of logical data for Cyc-like things. Distill your LLM into logical statements, then run your Cyc algorithms on it.
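A minimal sketch of that pipeline, for concreteness. Everything here is assumed for illustration: the triples stand in for statements already distilled out of an LLM, and the single hand-written rule plays the part of a "Cyc algorithm" (naive forward chaining, nothing like Cyc's real engine):

    # Stand-in for statements already distilled out of an LLM; a real
    # pipeline would prompt the model and parse its output into triples.
    extracted = {
        ("socrates", "is_a", "human"),
        ("human", "subclass_of", "mortal"),
    }

    # One hand-written rule standing in for a "Cyc algorithm":
    #   is_a(X, C) and subclass_of(C, D)  =>  is_a(X, D)
    def forward_chain(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for (x, r1, c) in list(facts):
                for (c2, r2, d) in list(facts):
                    if r1 == "is_a" and r2 == "subclass_of" and c == c2:
                        if (x, "is_a", d) not in facts:
                            facts.add((x, "is_a", d))
                            changed = True
        return facts

    print(forward_chain(extracted))
    # derives ("socrates", "is_a", "mortal") on top of the input triples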
replies(2): >>xpe+8d >>creer+Kd2
2. xpe+8d[view] [source] 2023-09-02 01:48:25
>>adastr+(OP)
> It seems the direction of flow would be the opposite: LLMs are a great source of logical data for Cyc-like things. Distill your LLM into logical statements, then run your Cyc algorithms on it.

This is hugely problematic. If you get the premises wrong, many fallacies will follow.

LLMs can play many roles in this area, but their output cannot be trusted with significant verification and validation.

replies(1): >>xpe+1o1
3. xpe+1o1[view] [source] [discussion] 2023-09-02 15:35:27
>>xpe+8d
*without
4. creer+Kd2[view] [source] 2023-09-02 21:24:22
>>adastr+(OP)
LLM statements, even once distilled into logical form, would not be logically sound. That's (one of) the main issues with LLMs. And that would make logical inference over those statements impossible with current systems.

That's one of the principal features of Cyc. It's carefully built by humans to be (essentially) logically sound, so that inference can then be run over the fact base. Making that stuff logically sound made for a very detailed and fussy knowledge base. And that in turn made it difficult for mere civilians to expand or even understand. Cyc is NOT simple.
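One way to see why soundness is load-bearing: under classical inference, a single contradictory pair of statements lets you derive anything at all (ex falso quodlibet), so a fact base of unvetted LLM output supports no meaningful inference. A toy propositional refutation prover, nothing like Cyc's actual engine:

    # Clauses are sets of literals; "~p" is the negation of "p".
    def resolve(c1, c2):
        # All resolvents of two clauses on any clashing literal.
        out = []
        for lit in c1:
            neg = lit[1:] if lit.startswith("~") else "~" + lit
            if neg in c2:
                out.append((c1 - {lit}) | (c2 - {neg}))
        return out

    def entails(clauses, goal):
        # Prove `goal` by refutation: add its negation, then hunt
        # for the empty clause.
        neg_goal = goal[1:] if goal.startswith("~") else "~" + goal
        seen = {frozenset(c) for c in clauses} | {frozenset({neg_goal})}
        frontier = list(seen)
        while frontier:
            c1 = frontier.pop()
            for c2 in list(seen):
                for r in resolve(c1, c2):
                    fr = frozenset(r)
                    if not fr:          # empty clause: contradiction found
                        return True
                    if fr not in seen:
                        seen.add(fr)
                        frontier.append(fr)
        return False

    kb = [{"p"}, {"~p"}]                      # one contradictory pair...
    print(entails(kb, "the_moon_is_cheese"))  # ...and anything follows: True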

replies(1): >>varjag+zg2
5. varjag+zg2[view] [source] [discussion] 2023-09-02 21:48:32
>>creer+Kd2
Cyc is built to be locally consistent, but global KB consistency is an impossible task. Lenat stressed that over and over in his videos.
replies(1): >>creer+kX2
6. creer+kX2[view] [source] [discussion] 2023-09-03 07:39:06
>>varjag+zg2
My "essentially" was doing some work there. It's been years but I remember something like "within a context" as the general direction? Such as within an area of the ontology (because - by contrast to LLMs - there is one) or within a reasonning problem, that kind of thing.

By contrast, LLMs for now are embarrassing, with inconsistent nonsense within a single answer, or an answer that doesn't recognize the context of the problem. Say, the working domain is a food label and the system doesn't recognize that or doesn't stay within it.
