zlacker

[return to "Doug Lenat has died"]
1. symbol+nx[view] [source] 2023-09-01 20:53:57
>>snewma+(OP)
Doug Lenat, RIP. I worked at Cycorp in Austin from 2000-2006. Taken from us way too soon, Doug nonetheless had the opportunity to help our country advance military and intelligence community computer science research.

One day, the rapid advancement of AI via LLMs will slow down and attention will return to logical reasoning and knowledge representation, as championed by the Cyc Project, Cycorp, its cyclists, and Dr. Doug Lenat.

Why? If NN inference were so fast, we would compile C programs with it instead of using deductive logical inference that is executed efficiently by the compiler.

2. halfli+Iz[view] [source] 2023-09-01 21:11:06
>>symbol+nx
> If NN inference were so fast, we would compile C programs with it instead of using deductive logical inference that is executed efficiently by the compiler.

This is the definition of a strawman. Who is claiming that NN inference is always the fastest way to run computation?

Instead of trying to bring down another technology (neural networks), how about you focus on making symbolic methods usable for real-world problems? For example: how can I build a robust email spam detection system with symbolic methods?
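
(For concreteness: a purely symbolic filter is essentially a hand-maintained rule base plus a threshold. Below is a toy sketch of what that looks like; every rule, weight, and cutoff is invented for illustration, and a real system would need thousands of such rules kept up to date by hand.)

    # Toy rule-based spam filter: a hand-written rule base plus a score threshold.
    # Every rule, weight, and threshold here is invented for illustration only.
    RULES = [
        # (predicate over the message, score added when the rule fires)
        (lambda msg: "free money" in msg["body"].lower(), 3.0),
        (lambda msg: msg["subject"].isupper(), 1.5),
        (lambda msg: msg["body"].lower().count("click here") >= 2, 2.0),
        (lambda msg: msg["sender_domain"].endswith(".biz"), 0.5),
    ]
    SPAM_THRESHOLD = 3.0  # arbitrary cutoff

    def is_spam(msg: dict) -> bool:
        """Sum the weights of all rules that fire and compare to the threshold."""
        score = sum(weight for rule, weight in RULES if rule(msg))
        return score >= SPAM_THRESHOLD

    msg = {"sender_domain": "lottery-winner.biz",
           "subject": "YOU WON",
           "body": "Free money!! Click here, then click here again."}
    print(is_spam(msg))  # True under these toy rules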

3. xpe+K11[view] [source] 2023-09-02 01:53:47
>>halfli+Iz
>> If NN inference were so fast, we would compile C programs with it instead of using deductive logical inference that is executed efficiently by the compiler.

> This is the definition of a strawman.

(Actually, it is an example of a strawman, not the definition of one.) Anyhow, rather than argue about strawmen, I'd rather we get right into the fundamentals.

1. Feed-forward NN computation ('inference', which is an unfortunate word choice IMO) provably provides universal function approximation under known conditions. And it can do so efficiently, with a lot of recent research getting into both the how and the why. One "pays the cost" up front with training in order to get fast prediction-time performance. The tradeoff is often worth it (toy sketch after this list).

2. Function approximation is not as powerful as Turing completeness: a feed-forward pass is a fixed, bounded computation, so FF NNs on their own are not Turing complete.

3. Deductive chaining is a well-studied, well-understood area of algorithms (a toy forward-chaining sketch is after this list).

4. But... modeling of computational architectures (including processors, caches, buses, and RAM) in sufficient detail to optimize compilation is a hard problem. I wouldn't be surprised if it stretches these algorithms to the limit of what developers will tolerate in compile times. There is a strong incentive here, so I'd expect at least some research that pushes outside the usual contours.
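
To make (1) concrete, here is a toy sketch: fitting sin(x) with a one-hidden-layer network. The layer size, learning rate, and target function are all arbitrary choices of mine; the point is only that the cost is paid up front in training, and prediction afterwards is a couple of small matrix multiplies.

    import numpy as np

    # Toy illustration of (1): fit sin(x) with a one-hidden-layer tanh network.
    # All sizes and rates are arbitrary; training is the expensive part,
    # prediction afterwards is cheap and fixed-cost.
    rng = np.random.default_rng(0)
    X = rng.uniform(-np.pi, np.pi, size=(256, 1))
    Y = np.sin(X)

    H = 32
    W1 = rng.normal(0.0, 0.5, (1, H)); b1 = np.zeros(H)
    W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)
    lr = 0.05

    for _ in range(5000):                     # training loop: pay the cost up front
        hidden = np.tanh(X @ W1 + b1)         # forward pass
        pred = hidden @ W2 + b2
        err = pred - Y                        # gradient of 0.5*squared error w.r.t. pred

        gW2 = hidden.T @ err / len(X)         # backward pass, full-batch gradient descent
        gb2 = err.mean(axis=0)
        ghid = (err @ W2.T) * (1.0 - hidden ** 2)
        gW1 = X.T @ ghid / len(X)
        gb1 = ghid.mean(axis=0)

        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    x = np.array([[0.5]])                     # prediction time: two matrix multiplies
    print((np.tanh(x @ W1 + b1) @ W2 + b2)[0, 0], np.sin(0.5))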
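
And to make (3) concrete, forward chaining over propositional Horn rules is, at its core, a small fixed-point loop. The facts and rules below are invented purely for illustration.

    # Toy forward chaining over propositional Horn rules: keep firing any rule
    # whose body is already satisfied until no new facts appear (a fixed point).
    # Facts and rules are invented for illustration.
    facts = {"bird(tweety)", "small(tweety)"}
    rules = [
        ({"bird(tweety)"}, "has_wings(tweety)"),
        ({"has_wings(tweety)", "small(tweety)"}, "can_fly(tweety)"),
    ]

    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)          # rule fires: add its conclusion
                changed = True

    print(sorted(facts))
    # ['bird(tweety)', 'can_fly(tweety)', 'has_wings(tweety)', 'small(tweety)']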
