zlacker

[parent] [thread] 7 comments
1. halfli+(OP)[view] [source] 2023-09-01 21:11:06
> If NN inference were so fast, we would compile C programs with it instead of using deductive logical inference that is executed efficiently by the compiler.

This is the definition of a strawman. Who is claiming that NN inference is always the fastest way to run computation?

Instead of trying to bring down another technology (neural networks), how about you focus on making symbolic methods usable to solve real-world problems; e.g. how can I build a robust email spam detection system with symbolic methods?

replies(4): >>symbol+I3 >>detour+48 >>xpe+2s >>xpe+3t
2. symbol+I3[view] [source] 2023-09-01 21:38:16
>>halfli+(OP)
The point is that symbolic computation as practiced by Cycorp was held back by the need to build its knowledge base by hand, in a supervised manner. NNs, and LLMs in particular, became ascendant when unsupervised training was employed at scale.

Perhaps LLMs can automate in large part the manual operations of building a future symbolic knowledge base organized by a universal upper ontology. Considering the amazing emergent features of sufficiently-large LLMs, what could emerge from a sufficiently large, reflective symbolic knowledge base?

3. detour+48[view] [source] 2023-09-01 22:15:08
>>halfli+(OP)
That's what I have settled on: the need for a symbolic library of standard hardware circuits.

I’m making a sloppy version that will contain all the symbols needed to run a multi-unit building.

4. xpe+2s[view] [source] 2023-09-02 01:53:47
>>halfli+(OP)
>> If NN inference were so fast, we would compile C programs with it instead of using deductive logical inference that is executed efficiently by the compiler.

> This is the definition of a strawman.

(Actually, it is an example of a strawman.) Anyhow, rather than dwelling on the strawman, I'd rather we get right into the fundamentals.

1. Feed-forward NN computation ('inference', which is an unfortunate word choice IMO) can provably provide universal function approximation under known conditions. And it can do so efficiently as well, with a lot of recent research getting into both the how and why. One "pays the cost" up-front with training in order to get fast prediction-time performance. The tradeoff is often worth it.

2. Function approximation is not as powerful as Turing completeness. FF NNs are not Turing complete.

3. Deductive chaining is a well-studied, well understood area of algorithms.

4. But... modeling computational architectures (including processors, caches, buses, and RAM) in sufficient detail to optimize compilation is a hard problem. I wouldn't be surprised if it stretches these algorithms to the limit of what developers will tolerate in compile times. The incentive is strong, though, so I'd expect at least some research that pushes outside the usual contours here.
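To make point 1 concrete, here's a toy sketch: a two-unit ReLU layer whose weights exactly represent f(x) = |x|. (The weights are set by hand for illustration; training would normally find them. The "pay up front, predict fast" tradeoff is visible in that prediction is a single constant-time forward pass.)

```python
def relu(x):
    # rectified linear unit: the standard FF-NN nonlinearity
    return max(0.0, x)

def tiny_net(x):
    # hidden layer: two units with hand-picked weights +1 and -1, no bias
    h1 = relu(1.0 * x)
    h2 = relu(-1.0 * x)
    # output layer: sum the hidden activations with weight 1 each
    return 1.0 * h1 + 1.0 * h2

# tiny_net(x) == |x| for every real x: feed-forward nets compose
# piecewise-linear pieces, which is how they approximate functions.
```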
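And for point 3, a minimal sketch of forward chaining, the simplest flavor of deductive chaining: rules are (premises, conclusion) pairs fired repeatedly until no new facts can be derived. (The rule and fact names here are made up for illustration.)

```python
def forward_chain(facts, rules):
    """Fire any rule whose premises all hold, until a fixed point
    (no new facts can be derived)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# hypothetical rule base, just to show the mechanics
rules = [
    ({"human(socrates)"}, "mortal(socrates)"),
    ({"mortal(socrates)"}, "has_finite_lifespan(socrates)"),
]
derived = forward_chain({"human(socrates)"}, rules)
```

Naive chaining like this is exponential in the worst case, which is why the compile-time concern in point 4 bites.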

5. xpe+3t[view] [source] 2023-09-02 02:07:37
>>halfli+(OP)
> Instead of trying to bring down another technology (neural networks), how about you focus on making symbolic methods usable to solve real-world problems; e.g. how can I build a robust email spam detection system with symbolic methods?

I have two concerns. First, just after pointing out a logical fallacy in someone else's comment, you committed one yourself: the either-or fallacy. (One can criticize a technology and do other things too.)

Second, you selected an example that illustrates a known and predictable weakness of symbolic systems. Still, there are plenty of real-world problems that symbolic systems address well. So your comment cherry-picks.

It appears as if you are trying to land a counterpunch here. I'm wary of this kind of conversational pattern; many of us know it tends to escalate. I don't want HN to go that direction. We all have varying experience and points of view to contribute. Let's try to be charitable, clear, and logical.

replies(1): >>Neverm+zv
6. Neverm+zv[view] [source] [discussion] 2023-09-02 02:45:44
>>xpe+3t
I am desperately vetting your comment for something I can criticize. An inadvertent, irrelevant, imagined infraction. Anything! But you have left me no opening.

Well done, sir, well done.

replies(1): >>xpe+Dw
7. xpe+Dw[view] [source] [discussion] 2023-09-02 03:01:33
>>Neverm+zv
Thanks, but if I didn't blunder here, I can assure you I have in many other places. I strive to be mindful, and I try not to "blame" anyone for strong reactions. But when we see certain unhelpful behaviors directed at other people, I try to identify and name them without making things worse. Awareness helps.
replies(1): >>Neverm+nF
8. Neverm+nF[view] [source] [discussion] 2023-09-02 05:30:40
>>xpe+Dw
Without awareness we are just untagged data in a sea of uncompressed noise.