zlacker

1. nextos+(OP)[view] [source] 2023-09-01 22:32:24
Exactly. When I hear that books such as Paradigms of AI Programming are outdated because of LLMs, I disagree. They are more current than ever, thanks to LLMs!

Neural and symbolic AI will eventually merge. Symbolic models bring much-needed efficiency and robustness via regularization.

replies(2): >>mnemon+z4 >>keepam+vn
2. mnemon+z4[view] [source] 2023-09-01 23:08:36
>>nextos+(OP)
If you want to learn about symbolic AI, there are plenty of sources more recent than PAIP (you could try the first half of AI: A Modern Approach by Russell and Norvig), and this has been true for a while.

If you read PAIP today, the most likely reason is that you want a master class in Lisp programming and/or want to learn a lot of tricks for getting good performance out of complex programs (which used to be part of AI and is in many ways being outsourced to hardware today).

None of this is to say you shouldn't read PAIP. You absolutely should. It's awesome. But its role is different now.

replies(1): >>nextos+0b
3. nextos+0b[view] [source] [discussion] 2023-09-02 00:24:28
>>mnemon+z4
Some parts of PAIP might be outdated, but it still has really current material on e.g. embedding Prolog in Lisp or building a term-rewriting system. That's relevant for pursuing current neuro-symbolic research, e.g. https://arxiv.org/pdf/2006.08381.pdf.
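
To give a flavor of the term-rewriting material: here is a minimal sketch of a PAIP-style pattern matcher and rewriter in Common Lisp. The names and rules are mine, for illustration only, not the book's actual code.

    ;; Minimal term-rewriting sketch in the spirit of PAIP's simplifier.
    ;; Rules are (pattern . replacement) pairs over s-expressions;
    ;; symbols starting with ? are pattern variables.

    (defun variable-p (x)
      "A pattern variable is a symbol whose name starts with ?."
      (and (symbolp x) (char= (char (symbol-name x) 0) #\?)))

    (defun match (pattern term &optional (bindings '()))
      "Return an alist of bindings if PATTERN matches TERM, or :fail."
      (cond ((eq bindings :fail) :fail)
            ((variable-p pattern)
             (let ((binding (assoc pattern bindings)))
               (cond ((null binding) (cons (cons pattern term) bindings))
                     ((equal (cdr binding) term) bindings)
                     (t :fail))))
            ((eql pattern term) bindings)
            ((and (consp pattern) (consp term))
             (match (cdr pattern) (cdr term)
                    (match (car pattern) (car term) bindings)))
            (t :fail)))

    (defun substitute-bindings (bindings term)
      "Replace pattern variables in TERM with their values in BINDINGS."
      (cond ((variable-p term) (cdr (assoc term bindings)))
            ((consp term) (cons (substitute-bindings bindings (car term))
                                (substitute-bindings bindings (cdr term))))
            (t term)))

    (defparameter *rules*
      '(((+ ?x 0) . ?x)
        ((* ?x 1) . ?x)
        ((* ?x 0) . 0)))

    (defun rewrite (term)
      "Rewrite TERM bottom-up, applying the first matching rule until none fire."
      (let ((term (if (consp term) (mapcar #'rewrite term) term)))
        (dolist (rule *rules* term)
          (let ((bindings (match (car rule) term)))
            (unless (eq bindings :fail)
              (return (rewrite (substitute-bindings bindings (cdr rule)))))))))

    ;; (rewrite '(+ (* x 1) 0))  =>  X

PAIP's real simplifier, Eliza, and the Prolog embedding all grow out of this same pattern-matching core, just with richer rules and, for Prolog, a proper unifier.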

Other parts, like coding an Eliza chatbot, are indeed outdated. I have read AIMA and followed a long course that used it, but I didn't really like it. I found it too broad and shallow.

4. keepam+vn[view] [source] 2023-09-02 03:11:15
>>nextos+(OP)
It would be cool if we could find the algorithmic, neurological basis for this. The analogy with LLMs is the more obvious one: multi-layer brain circuits. But a neurological analogue of symbolic reasoning must exist too.

My hunch is that it emerges naturally out of the hierarchical generalization capabilities of multi-layer circuits. But then you need something to coordinate the acquired labels: a tweak on attention, perhaps?

Another characteristic is probably some (limited) form of recursion, so the generalized labels emitted at the small end can be fed back in as tokens to be further processed at the big end.
