zlacker

[parent] [thread] 24 comments
1. symbol+(OP)[view] [source] 2023-09-01 20:53:57
Doug Lenat, RIP. I worked at Cycorp in Austin from 2000-2006. Taken from us way too soon, Doug nonetheless had the opportunity to help our country advance military and intelligence community computer science research.

One day, the rapid advancement of AI via LLMs will slow down and attention will again return to logical reasoning and knowledge representation as championed by the Cyc Project, Cycorp, its cyclists and Dr. Doug Lenat.

Why? If NN inference were so fast, we would compile C programs with it instead of using deductive logical inference that is executed efficiently by the compiler.

replies(3): >>halfli+l2 >>optima+v4 >>nextos+ec
2. halfli+l2[view] [source] 2023-09-01 21:11:06
>>symbol+(OP)
> If NN inference were so fast, we would compile C programs with it instead of using deductive logical inference that is executed efficiently by the compiler.

This is the definition of a strawman. Who is claiming that NN inference is always the fastest way to run computation?

Instead of trying to bring down another technology (neural networks), how about you focus on making symbolic methods usable to solve real-world problems; e.g. how can I build a robust email spam detection system with symbolic methods?

replies(4): >>symbol+36 >>detour+pa >>xpe+nu >>xpe+ov
3. optima+v4[view] [source] 2023-09-01 21:26:35
>>symbol+(OP)
The best thing Cycorp could do now is open source its accumulated database of logical relations so it can be ingested by some monster LLM.

What's the point of all that data collecting dust and accomplishing not much of anything?

replies(4): >>vtr132+eg >>adastr+Og >>xpe+yB >>zozbot+eL
◧◩
4. symbol+36[view] [source] [discussion] 2023-09-01 21:38:16
>>halfli+l2
The point is that symbolic computation as performed by Cycorp was held back by the need to train the Knowledge Base by hand in a supervised manner. NNs and LLMs in particular became ascendant when unsupervised training was employed at scale.

Perhaps LLMs can automate in large part the manual operations of building a future symbolic knowledge base organized by a universal upper ontology. Considering the amazing emergent features of sufficiently-large LLMs, what could emerge from a sufficiently large, reflective symbolic knowledge base?

◧◩
5. detour+pa[view] [source] [discussion] 2023-09-01 22:15:08
>>halfli+l2
That's what I have settled on: the need for a symbolic library of standard hardware circuits.

I’m making a sloppy version that will contain all the symbols needed to run a multi-unit building.

6. nextos+ec[view] [source] 2023-09-01 22:32:24
>>symbol+(OP)
Exactly. When I hear that books such as Paradigms of AI Programming are outdated because of LLMs, I disagree. They are more current than ever, thanks to LLMs!

Neural and symbolic AI will eventually merge. Symbolic models bring much needed efficiency and robustness via regularization.

replies(2): >>mnemon+Ng >>keepam+Jz
◧◩
7. vtr132+eg[view] [source] [discussion] 2023-09-01 23:05:20
>>optima+v4
I think the military will take over his work. Snowden documents revealed that Cyc was being used to come up with terror attack scenarios.
◧◩
8. mnemon+Ng[view] [source] [discussion] 2023-09-01 23:08:36
>>nextos+ec
If you want to learn about symbolic AI, there are a lot of more recent sources than PAIP (you could try the first half of AI: A Modern Approach by Russell and Norvig), and this has been true for a while.

If you read PAIP today, the most likely reason is that you want a master class in Lisp programming and/or want to learn a lot of tricks for getting good performance out of complex programs (which used to be part of AI and is in many ways being outsourced to hardware today).

None of this is to say you shouldn't read PAIP. You absolutely should. It's awesome. But its role is different now.

replies(1): >>nextos+en
◧◩
9. adastr+Og[view] [source] [discussion] 2023-09-01 23:08:54
>>optima+v4
It seems the direction of flow would be the opposite: LLMs are a great source of logical data for Cyc-like things. Distill your LLM into logical statements, then run your Cyc algorithms on it.
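A minimal sketch of what that pipeline could look like, with everything here (the triples, the relation names borrowed from Cyc-speak, the toy rules) assumed for illustration; real distillation from an LLM and real Cyc inference are far richer:

```python
# Hypothetical sketch: treat parsed LLM output as candidate triples,
# then run plain forward chaining as a toy stand-in for Cyc-style
# inference. The facts below are invented examples.
facts = {
    ("Fido", "isa", "Dog"),
    ("Dog", "genls", "Mammal"),    # genls = "generalizes to" in Cyc-speak
    ("Mammal", "genls", "Animal"),
}

def forward_chain(facts):
    """Derive new facts until fixpoint using two simple rules:
    genls is transitive, and isa propagates up the genls hierarchy."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(derived):
            for (c, r2, d) in list(derived):
                if b != c:
                    continue
                if r1 == "genls" and r2 == "genls":
                    new = (a, "genls", d)
                elif r1 == "isa" and r2 == "genls":
                    new = (a, "isa", d)
                else:
                    continue
                if new not in derived:
                    derived.add(new)
                    changed = True
    return derived

closure = forward_chain(facts)
print(("Fido", "isa", "Animal") in closure)  # True
```

The catch, as replies below note, is the quality of the input triples: forward chaining faithfully amplifies whatever the LLM got wrong.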
replies(2): >>xpe+Wt >>creer+yu2
◧◩◪
10. nextos+en[view] [source] [discussion] 2023-09-02 00:24:28
>>mnemon+Ng
Some parts of PAIP might be outdated, but it still has really current material on e.g. embedding Prolog in Lisp or building a term-rewriting system. That's relevant for pursuing current neuro-symbolic research, e.g. https://arxiv.org/pdf/2006.08381.pdf.
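To give a flavor of the term-rewriting idea (PAIP does this in Lisp; this Python analogue is illustrative only, with terms as nested tuples and `?`-prefixed strings as pattern variables, conventions assumed here):

```python
# Minimal term-rewriting sketch in the spirit of PAIP's simplifier.
RULES = [
    (('+', '?x', 0), '?x'),   # x + 0 -> x
    (('*', '?x', 1), '?x'),   # x * 1 -> x
    (('*', '?x', 0), 0),      # x * 0 -> 0
]

def match(pattern, term, bindings):
    """Try to match pattern against term, extending bindings."""
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in bindings:
            return bindings if bindings[pattern] == term else None
        return {**bindings, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            bindings = match(p, t, bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == term else None

def substitute(template, bindings):
    if isinstance(template, str) and template.startswith('?'):
        return bindings[template]
    if isinstance(template, tuple):
        return tuple(substitute(t, bindings) for t in template)
    return template

def rewrite(term):
    """Rewrite bottom-up until no rule applies (fixpoint)."""
    if isinstance(term, tuple):
        term = tuple(rewrite(t) for t in term)
    for pattern, replacement in RULES:
        b = match(pattern, term, {})
        if b is not None:
            return rewrite(substitute(replacement, b))
    return term

print(rewrite(('+', ('*', 'x', 1), 0)))  # x
```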

Other parts like coding an Eliza chatbot are indeed outdated. I have read AIMA and followed a long course that used it, but I didn't really like it. I found it too broad and shallow.

◧◩◪
11. xpe+Wt[view] [source] [discussion] 2023-09-02 01:48:25
>>adastr+Og
> It seems the direction of flow would be the opposite: LLMs are a great source of logical data for Cyc-like things. Distill your LLM into logical statements, then run your Cyc algorithms on it.

This is hugely problematic. If you get the premises wrong, many fallacies will follow.

LLMs can play many roles around this area, but their output cannot be trusted with significant verification and validation.

replies(1): >>xpe+PE1
◧◩
12. xpe+nu[view] [source] [discussion] 2023-09-02 01:53:47
>>halfli+l2
>> If NN inference were so fast, we would compile C programs with it instead of using deductive logical inference that is executed efficiently by the compiler.

> This is the definition of a strawman.

(Actually, it is an example of a strawman.) Anyhow, rather than a strawman, I'd rather we get right into the fundamentals.

1. Feed-forward NN computation ('inference', which is an unfortunate word choice IMO) can provably provide universal function approximation under known conditions. And it can do so efficiently as well, with a lot of recent research getting into both the how and why. One "pays the cost" up-front with training in order to get fast prediction-time performance. The tradeoff is often worth it.

2. Function approximation is not as powerful as Turing completeness. FF NNs are not Turing complete.

3. Deductive chaining is a well-studied, well understood area of algorithms.

4. But... modeling of computational architectures (including processors, caches, busses, and RAM) with sufficient detail to optimize compilation is a hard problem. I wouldn't be surprised if this stretches these algorithms to the limit of what developers will tolerate in compile times. This is a strong incentive, so I'd expect there is at least some research that pushes outside the usual contours here.
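A toy illustration of point 1 (everything here is hand-constructed, not trained): a single hidden layer of two ReLU units with hand-picked weights computes |x| exactly, since |x| = relu(x) + relu(-x). Piecewise-linear approximations of arbitrary continuous functions on an interval are assembled from exactly such pieces, which is the intuition behind the universal approximation results.

```python
# A two-unit ReLU "network" with hand-picked weights computing |x|.
def relu(v):
    return max(0.0, v)

def tiny_net(x):
    # hidden layer: weights [1, -1], biases [0, 0]
    h1 = relu(1.0 * x)
    h2 = relu(-1.0 * x)
    # output layer: weights [1, 1], bias 0
    return 1.0 * h1 + 1.0 * h2

for x in (-3.0, -0.5, 0.0, 2.0):
    assert tiny_net(x) == abs(x)
```

Note this supports point 2 as well: the net computes a fixed piecewise-linear function, nothing like an unbounded-loop Turing machine.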

◧◩
13. xpe+ov[view] [source] [discussion] 2023-09-02 02:07:37
>>halfli+l2
> Instead of trying to bring down another technology (neural networks), how about you focus on making symbolic methods usable to solve real-world problems; e.g. how can I build a robust email spam detection system with symbolic methods?

I have two concerns. First, just after pointing out a logical fallacy from someone else, you added a fallacy: the either-or fallacy. (One can criticize a technology and do other things too.)

Second, you selected an example that illustrates a known and predictable weakness of symbolic systems. Still, there are plenty of real-world problems that symbolic systems address well. So your comment cherry-picks.

It appears as if you are trying to land a counter punch here. I'm wary of this kind of conversational pattern. Many of us know that it tends to escalate. I don't want HN to go that direction. We all have varying experience and points of view to contribute. Let's try to be charitable, clear, and logical.

replies(1): >>Neverm+Ux
◧◩◪
14. Neverm+Ux[view] [source] [discussion] 2023-09-02 02:45:44
>>xpe+ov
I am desperately vetting your comment for something I can criticize. An inadvertent, irrelevant, imagined infraction. Anything! But you have left me no opening.

Well done, sir, well done.

replies(1): >>xpe+Yy
◧◩◪◨
15. xpe+Yy[view] [source] [discussion] 2023-09-02 03:01:33
>>Neverm+Ux
Thanks, but if I didn't blunder here, I can assure you I have in many other places. I strive to be mindful. I try not to "blame" anyone for strong reactions. But when we see certain unhelpful behaviors directed at other people, I try to identify/name it without making it worse. Awareness helps.
replies(1): >>Neverm+IH
◧◩
16. keepam+Jz[view] [source] [discussion] 2023-09-02 03:11:15
>>nextos+ec
It would be cool if we could find the algorithmic neurological basis for this. The analogy for LLMs is the more obvious one (multi-layer brain circuits), but a neurological analogy with symbolic reasoning must exist too.

My hunch is it emerges naturally out of the hierarchical generalization capabilities of multiple layer circuits. But then you need something to coordinate the acquired labels: a tweak on attention perhaps?

Another characteristic is probably some (limited) form of recursion, so the generalized labels emitted at the small end can be fed back in as tokens to be further processed at the big end.

◧◩
17. xpe+yB[view] [source] [discussion] 2023-09-02 03:41:14
>>optima+v4
> The best thing Cycorp could do now is open source its accumulated database of logical relations...

This is unpersuasive without laying out your assumptions and reasoning.

Counterpoints:

(a) It would be unethical for such a knowledge base to be put out in the open without considerable guardrails and appropriate licensing. The details matter.

(b) Cycorp gets some funding from the U.S. Government; this changes both the set of options available and the calculus of weighing them.

(c) Not all nations have equivalent values. Unless one is a moral relativist, these differences should be deemed neither equivalent nor irrelevant. As such, despite the flaws of U.S. values and some horrific decision-making throughout history, there are known worse actors and states. Such parties would make worse use of an extensive human-curated knowledge base.

replies(1): >>skylyz+ZZ1
◧◩◪◨⬒
18. Neverm+IH[view] [source] [discussion] 2023-09-02 05:30:40
>>xpe+Yy
Without awareness we are just untagged data in a sea of uncompressed noise.
◧◩
19. zozbot+eL[view] [source] [discussion] 2023-09-02 06:21:03
>>optima+v4
OpenCyc is already a thing and there's been very little interest in it. These days we also have general-purpose semantic KBs like Wikidata, which are available for free and go way beyond what Cyc or OpenCyc was trying to do.
◧◩◪◨
20. xpe+PE1[view] [source] [discussion] 2023-09-02 15:35:27
>>xpe+Wt
*without
◧◩◪
21. skylyz+ZZ1[view] [source] [discussion] 2023-09-02 17:46:45
>>xpe+yB
An older version of the database is already available for download, but that's not the approach you want for common sense anyway; no one needs to remember that a "dog is not a cat".
replies(1): >>xpe+Jq6
◧◩◪
22. creer+yu2[view] [source] [discussion] 2023-09-02 21:24:22
>>adastr+Og
LLM statements (distilled into logical statements) would not be logically sound. That's (one of) the main issues of LLMs. And that would make logical inference on these logical statements impossible with current systems.

That's one of the principal features of Cyc: it's carefully built by humans to be (essentially) logically sound, so that inference can then be run through the fact base. Making that stuff logically sound made for a very detailed and fussy knowledge base. And that in turn made it difficult to expand or even understand for mere civilians. Cyc is NOT simple.

replies(1): >>varjag+nx2
◧◩◪◨
23. varjag+nx2[view] [source] [discussion] 2023-09-02 21:48:32
>>creer+yu2
Cyc is built to be locally consistent but global KB consistency is an impossible task. Lenat stressed that in his videos over and over.
replies(1): >>creer+8e3
◧◩◪◨⬒
24. creer+8e3[view] [source] [discussion] 2023-09-03 07:39:06
>>varjag+nx2
My "essentially" was doing some work there. It's been years, but I remember something like "within a context" as the general direction? Such as within an area of the ontology (because, by contrast to LLMs, there is one) or within a reasoning problem, that kind of thing.

By contrast, LLMs for now are embarrassing, with inconsistent nonsense provided within one answer, or an answer not recognizing the context of the problem. Say, the working domain being a food label and the system not recognizing that or not staying within it.

◧◩◪◨
25. xpe+Jq6[view] [source] [discussion] 2023-09-04 14:25:44
>>skylyz+ZZ1
You are probably referring to OpenCyc. It provides much more value than your comment suggests.

I'd recommend that more people take a look and compare its approach against others. https://en.wikipedia.org/wiki/CycL is compact and worth a read, especially the concept of "microtheories".
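The microtheory idea can be sketched roughly like this (a hypothetical Python toy, not CycL itself; the relation name genlMt is Cyc's, everything else here is invented): assertions live in named contexts, and a query is answered against one context plus the contexts it inherits from, so the KB can hold locally consistent but globally conflicting facts.

```python
# Toy sketch of microtheories: context-scoped assertions with
# inheritance between contexts (genlMt in Cyc-speak).
class KB:
    def __init__(self):
        self.assertions = {}   # microtheory -> set of facts
        self.genl_mt = {}      # microtheory -> parent microtheories

    def assert_in(self, mt, fact):
        self.assertions.setdefault(mt, set()).add(fact)

    def visible(self, mt):
        """All facts visible from mt, following genlMt links."""
        facts, stack, seen = set(), [mt], set()
        while stack:
            m = stack.pop()
            if m in seen:
                continue
            seen.add(m)
            facts |= self.assertions.get(m, set())
            stack.extend(self.genl_mt.get(m, []))
        return facts

kb = KB()
kb.assert_in("BaseMt", ("birds", "can", "fly"))
kb.genl_mt["CartoonMt"] = ["BaseMt"]
kb.assert_in("CartoonMt", ("pigs", "can", "fly"))

# Locally consistent, globally divergent:
print(("pigs", "can", "fly") in kb.visible("CartoonMt"))  # True
print(("pigs", "can", "fly") in kb.visible("BaseMt"))     # False
```

This is exactly the "locally consistent, globally inconsistent" design varjag describes upthread.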

[go to top]