zlacker

Obituary for Cyc
1. vannev+14 2025-04-08 19:44:13
>>todsac+(OP)
I would argue that Lenat was at least directionally correct in understanding that sheer volume of data (in Cyc's case, rules and facts) was the key to eventually achieving useful intelligence. I have to confess that I once criticized the Cyc project for creating an ever-larger pile of sh*t and expecting a pony to emerge, but that's sort of what has happened with LLMs.
2. baq+3j 2025-04-08 21:29:24
>>vannev+14
https://ai-2027.com/ postulates that a good enough LLM will rewrite itself using rules and facts... sci-fi, but so is chatting with a matrix multiplication.
3. joseph+cm 2025-04-08 21:53:49
>>baq+3j
I doubt it. The human mind is a probabilistic computer, at every level. There’s no set definition for what a chair is. It’s fuzzy. Some things are obviously in the category, and some are at the periphery of it. (E.g., is a stool a chair? Is a log next to a campfire a chair? How about a tree stump in the woods? Etc.) This kind of fuzzy reasoning is the rule, not the exception, when it comes to human intuition.

There’s no way to use “rules and facts” to express concepts like “chair”, “grass”, “face”, or “justice”, or really anything else. Any project trying to use deterministic symbolic logic to represent the world fundamentally misunderstands cognition.
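
A minimal sketch of that contrast in Python (the feature names and the membership formula are invented for illustration; they are not from Cyc or any real system):

    # Crisp, rule-based membership vs. graded membership for "chair".
    CHAIR_RULES = {"has_legs", "has_seat", "has_back"}  # a crisp definition

    def is_chair_symbolic(features: set[str]) -> bool:
        # Deterministic: the object satisfies every rule, or it is not a chair.
        return CHAIR_RULES <= features

    def chair_membership(features: set[str]) -> float:
        # Graded: membership degrades smoothly as features go missing.
        return len(CHAIR_RULES & features) / len(CHAIR_RULES)

    stool = {"has_legs", "has_seat"}
    stump = {"has_seat"}

    print(is_chair_symbolic(stool))  # False: the stool falls out of the category
    print(chair_membership(stool))   # ~0.67: a peripheral member
    print(chair_membership(stump))   # ~0.33: barely a chair at all

The crisp predicate ejects the stool from the category entirely, while the graded score keeps it at the periphery, which is the fuzziness being described.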

4. woodru+Zb3 2025-04-09 20:45:47
>>joseph+cm
> Any project trying to use deterministic symbolic logic to represent the world fundamentally misunderstands cognition.

The counterposition to this is no more convincing: cognition is fuzzy, but it's not at all clear that it's probabilistic. I don't look at a stump and ascertain its chairness with a confidence of 85%, for example. The actual meta-cognition of "can I sit on this thing" is more like "it looks sittable, and I can try to sit on it, but if it feels unstable then I shouldn't sit on it." In other words, a defeasible inference.

(There's an entire branch of symbolic logic that models fuzziness without probability: non-monotonic logic[1], sketched below. I don't think it gets us to AGI either.)

[1]: https://en.wikipedia.org/wiki/Non-monotonic_logic
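
A minimal sketch of that defeasible inference in Python (the predicate names are invented for illustration; real non-monotonic logics are far richer than this):

    from typing import Optional

    def sittable(looks_sittable: bool, felt_unstable: Optional[bool] = None) -> bool:
        # Default rule: if it looks sittable, conclude that it is.
        conclusion = looks_sittable
        # Defeater: later evidence can retract the default conclusion.
        # This is the non-monotonic step: new information removes an inference.
        if felt_unstable:
            conclusion = False
        return conclusion

    print(sittable(looks_sittable=True))                      # True: default holds
    print(sittable(looks_sittable=True, felt_unstable=True))  # False: defeated

The point is the retraction: unlike classical logic, a conclusion drawn earlier can be withdrawn when new evidence arrives, with no probabilities attached anywhere.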

5. famous+415 2025-04-10 15:19:04
>>woodru+Zb3
>I don't look at a stump and ascertain its chairness with a confidence of 85%

But I think you did. Not consciously, but I think your brain definitely did.

https://www.nature.com/articles/415429a
https://pubmed.ncbi.nlm.nih.gov/8891655/
