zlacker

[return to "Obituary for Cyc"]
1. vannev+14 2025-04-08 19:44:13
>>todsac+(OP)
I would argue that Lenat was at least directionally correct in understanding that sheer volume of data (in Cyc's case, rules and facts) was the key to eventually achieving useful intelligence. I have to confess that I once criticized the Cyc project for creating an ever-larger pile of sh*t and expecting a pony to emerge, but that's sort of what has happened with LLMs.
2. baq+3j 2025-04-08 21:29:24
>>vannev+14
https://ai-2027.com/ postulates that a good enough LLM will rewrite itself using rules and facts... sci-fi, but so is chatting with a matrix multiplication.
3. joseph+cm 2025-04-08 21:53:49
>>baq+3j
I doubt it. The human mind is a probabilistic computer, at every level. There’s no set definition for what a chair is. It’s fuzzy. Some things are obviously in the category, and some are at the periphery of it. (E.g., is a stool a chair? Is a log next to a campfire a chair? How about a tree stump in the woods? Etc.) This kind of fuzzy reasoning is the rule, not the exception, when it comes to human intuition.

There’s no way to use “rules and facts” to express concepts like “chair”, “grass”, “face”, or “justice”, or really anything else. Any project trying to use deterministic symbolic logic to represent the world fundamentally misunderstands cognition.
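
To make the point concrete: category membership can be expressed as a graded score rather than a true/false predicate. A toy Python sketch (the features, weights, and thresholds are all invented here for illustration, not from any real model):

    def chairness(has_flat_seat, has_back, height_cm, fixed_in_place):
        """Return a graded score in [0, 1] instead of a yes/no answer."""
        score = 0.0
        if has_flat_seat:
            score += 0.5
        if has_back:
            score += 0.3
        if 35 <= height_cm <= 60:  # roughly sittable height
            score += 0.2
        if fixed_in_place:  # stumps and logs: sittable, but less chair-like
            score -= 0.2
        return max(0.0, min(1.0, score))

    print(chairness(True, True, 45, False))   # dining chair: 1.0
    print(chairness(True, False, 45, False))  # stool: ~0.7
    print(chairness(True, False, 40, True))   # tree stump: ~0.5

A rule engine has to answer is_chair(X) with true or false; a graded score captures “obviously in the category” versus “at the periphery” directly.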

4. woodru+Zb3 2025-04-09 20:45:47
>>joseph+cm
> Any project trying to use deterministic symbolic logic to represent the world fundamentally misunderstands cognition.

The counterposition to this is no more convincing: cognition is fuzzy, but it's not at all clear that it's probabilistic. I don't look at a stump and ascertain its chairness with 85% confidence, for example. The actual meta-cognition of "can I sit on this thing?" is more like "it looks sittable, and I can try to sit on it, but if it feels unstable then I shouldn't sit on it." In other words, a defeasible inference.

(There's an entire branch of symbolic logic that models fuzziness without probability: non-monotonic logic[1]. I don't think these logics get us to AGI either.)

[1]: https://en.wikipedia.org/wiki/Non-monotonic_logic

5. famous+415 2025-04-10 15:19:04
>>woodru+Zb3
>I don't look at a stump and ascertain its chairness with a confidence of 85%

But I think you did. Not consciously, but I think your brain definitely did.

https://www.nature.com/articles/415429a
https://pubmed.ncbi.nlm.nih.gov/8891655/

6. woodru+155 2025-04-10 15:38:08
>>famous+415
These papers don't appear to say that: the first one describes the behavior as statistically optimal, which is exactly what you'd expect for a sound set of defeasible relations.

Or, intuitively: my ability to determine whether a bird flies is definitely going to be statistically optimal, but my underlying cognitive process is not itself inherently statistical. I could be looking at a penguin and remembering that birds fly by default, except when they're penguins, and then only if the penguin isn't wearing a jetpack. That's a non-statistical set of relations, but its external observation is modeled statistically.
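
Spelled out as code, a minimal sketch of that defeasible chain (toy predicates, Python only for concreteness; the point is that no probabilities appear anywhere):

    def flies(is_bird, is_penguin, has_jetpack):
        # Most specific rule wins: the exception to the exception first.
        if is_penguin and has_jetpack:
            return True   # jetpack penguins "fly"
        if is_penguin:
            return False  # exception: penguins don't fly
        return is_bird    # default: birds fly

    print(flies(True, False, False))  # ordinary bird -> True
    print(flies(True, True, False))   # penguin -> False
    print(flies(True, True, True))    # penguin with jetpack -> True

Observed from outside, this rule set answers correctly almost every time, so its behavior looks "statistically optimal" even though nothing in it is statistical.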

7. famous+4p5 2025-04-10 17:37:03
>>woodru+155
>which is exactly what you'd expect for a sound set of defeasible relations.

This is a leap. While a complex system of rules might coincidentally produce behavior that looks statistically optimal in some scenarios, the paper (Ernst & Banks) argues that the mechanism itself operates according to statistical principles (maximum-likelihood estimation, MLE), not just that the outcome happens to look that way.
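
The model in the paper is concrete: each cue (visual, haptic) gives a noisy estimate, and the optimal combined estimate weights each cue by its reliability, i.e., inverse variance. A minimal sketch with made-up numbers:

    def mle_combine(cues):
        """Each cue is (estimate, variance); weight by inverse variance."""
        weights = [1.0 / var for _, var in cues]
        weighted = sum(w * est for w, (est, _) in zip(weights, cues))
        return weighted / sum(weights)

    visual = (50.0, 1.0)  # sharp visual estimate of a bar's height, in mm
    haptic = (56.0, 4.0)  # noisier haptic estimate
    print(mle_combine([visual, haptic]))  # 51.2: pulled toward the reliable cue

The claim isn't just that subjects get the right answer; it's that their cue weightings shift the way this formula predicts when one cue is experimentally degraded.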

Moreover, it's highly unlikely, bordering on impossible, to reduce even the situations the brain deals with daily to a set of defeasible statements.

Example: Recognizing a "Dog"

Defeasible attempt:

    is_dog(X) :- has_four_legs(X), has_tail(X), barks(X),
                 not is_cat(X), not is_fox(X), not is_robot_dog(X).

    is_dog(X) :- has_four_legs(X), wags_tail(X), is_friendly_to_humans(X),
                 not is_wolf(X).

How do you define barks(X)? (What about whimpers and growls? What about a dog that doesn't bark?) How do you handle breeds that look very different (Chihuahua vs. Great Dane)? How do you handle seeing only part of the animal? How do you represent the overall visual gestalt? The number of rules and exceptions quickly becomes vast and brittle.
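
Contrast with a probabilistic sketch, where evidence simply accumulates instead of being enumerated as exceptions (toy features and log-odds weights, invented for illustration):

    import math

    def p_dog(features):
        # Log-odds weights; anything unlisted contributes nothing.
        weights = {"four_legs": 1.0, "wags_tail": 1.5, "barks": 2.5,
                   "retractable_claws": -3.0, "metal_body": -5.0}
        # Negative prior: most things aren't dogs.
        log_odds = -1.0 + sum(weights.get(f, 0.0) for f in features)
        return 1 / (1 + math.exp(-log_odds))

    print(p_dog({"four_legs", "wags_tail", "barks"}))  # ~0.98
    print(p_dog({"four_legs", "retractable_claws"}))   # ~0.05, cat-like
    print(p_dog({"four_legs", "wags_tail"}))           # ~0.82: a silent dog is still probably a dog

A missing or odd feature just moves the probability; it doesn't demand a new rule for every exception.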

Ultimately, the proof, as they say, is in the pudding. By the way, the Cyc we are all talking about here is non-monotonic: https://www.cyc.com/wp-content/uploads/2019/07/First-Orderiz...

If you've tried something for decades and it's not working, and it doesn't even look like it's starting to work, while experiments on the brain suggest probabilistic inference and probabilistic-inference machines work much better than the alternatives ever did, you have to face the music.
