zlacker

[return to "Obituary for Cyc"]
1. vannev+14 2025-04-08 19:44:13
>>todsac+(OP)
I would argue that Lenat was at least directionally correct in understanding that sheer volume of data (in Cyc's case, rules and facts) was the key in eventually achieving useful intelligence. I have to confess that I once criticized the Cyc project for creating an ever-larger pile of sh*t and expecting a pony to emerge, but that's sort of what has happened with LLMs.
2. baq+3j 2025-04-08 21:29:24
>>vannev+14
https://ai-2027.com/ postulates that a good enough LLM will rewrite itself using rules and facts... sci-fi, but so is chatting with a matrix multiplication.
3. joseph+cm 2025-04-08 21:53:49
>>baq+3j
I doubt it. The human mind is a probabilistic computer, at every level. There’s no set definition for what a chair is. It’s fuzzy. Some things are obviously in the category, and some are at the periphery of it. (E.g., is a stool a chair? Is a log next to a campfire a chair? How about a tree stump in the woods? Etc.) This kind of fuzzy reasoning is the rule, not the exception, when it comes to human intuition.

There’s no way to use “rules and facts” to express concepts like “chair” or “grass”, or “face” or “justice” or really anything. Any project trying to use deterministic symbolic logic to represent the world fundamentally misunderstands cognition.
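
To make the contrast concrete, here's a toy sketch - the features and weights are completely made up, it's just to show the shape of the problem:

    # Toy illustration only: a crisp rule vs. a graded "chairness" score.
    # The features and weights are invented, not from Cyc or any real system.

    def is_chair_crisp(obj):
        # A strict symbolic definition: four legs, a seat and a back.
        return obj["legs"] == 4 and obj["has_seat"] and obj["has_back"]

    def chairness(obj):
        # A fuzzy score in [0, 1]: partial credit for partial evidence.
        score = 0.4 * obj["has_seat"] + 0.3 * obj["has_back"]
        score += 0.3 * min(obj["legs"], 4) / 4
        return score

    stool = {"legs": 3, "has_seat": True, "has_back": False}
    stump = {"legs": 0, "has_seat": True, "has_back": False}

    print(is_chair_crisp(stool), round(chairness(stool), 3))  # False 0.625
    print(is_chair_crisp(stump), round(chairness(stump), 3))  # False 0.4

The crisp rule can only say no to everything at the edge of the category (or be patched with endless exceptions); the graded score at least captures that a stool is more chair-like than a stump.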

4. woodru+Zb3 2025-04-09 20:45:47
>>joseph+cm
> Any project trying to use deterministic symbolic logic to represent the world fundamentally misunderstands cognition.

The counterposition to this is no more convincing: cognition is fuzzy, but it's not really clear at all that it's probabilistic. I don't look at a stump and ascertain its chairness with a confidence of 85%, for example. The actual meta-cognition of "can I sit on this thing" is more like "it looks sittable, and I can try to sit on it, but if it feels unstable then I shouldn't sit on it." In other words, a defeasible inference.

(There's an entire branch of symbolic logic that models fuzziness without probability: non-monotonic logic[1]. I don't think these get us to AGI either.)

[1]: https://en.wikipedia.org/wiki/Non-monotonic_logic
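
Something like this toy sketch, say - a hand-rolled default-plus-exception, not any real non-monotonic logic library:

    # Toy defeasible inference: a default conclusion that a more specific
    # observation can defeat. Invented representation, for illustration only.

    def can_sit_on(obj):
        # Default: if it looks sittable, conclude that I can sit on it...
        conclusion = obj.get("looks_sittable", False)
        # ...unless a defeater applies (it feels unstable when I test it).
        if obj.get("feels_unstable", False):
            conclusion = False
        return conclusion

    stump = {"looks_sittable": True, "feels_unstable": False}
    rotten_stump = {"looks_sittable": True, "feels_unstable": True}

    print(can_sit_on(stump))         # True  - the default holds
    print(can_sit_on(rotten_stump))  # False - the default is defeated

There's no number anywhere in that, which is the point: the unpredictability is carried by exceptions, not by a distribution.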

5. joseph+Tx3 2025-04-09 23:30:41
>>woodru+Zb3
Which word will I pick next in this sentence? Is it deterministic? I probably wouldn’t respond the same way if I wrote this comment in a different mood, or at a different time of day.

What I say is clearly not deterministic for you. You don’t know which word will come next. You have a probability distribution but that’s it. Banana.

I caught a plane yesterday. I knew there would be a plane (since I booked it) and I knew where it would go. Well, except it wasn’t certain. The flight could have been delayed or cancelled. I guess I knew there would be a plane with 90% certainty. I knew the plane would actually fly to my destination with a 98% certainty or something. (There could have been a malfunction midair). But the probability I made it home on time rose significantly when I saw the flight listed, on time, at the airport.

Who I sat next to was far less certain - I ended up sitting next to a 30-year-old electrician with a sore neck.

My point is that there is so much reasoning we do all the time that is probabilistic in nature. We don’t even think about it. Other people in this thread are even talking about chairs breaking when you sit on them - every time you sit on a chair there’s a probability calculation you do to decide if the chair is safe, and will support your weight. This is all automatic.

Simple “fuzzy logic” isn’t enough because so many probabilities change as a result of other events. (If the plane is listed on the departures board, the prediction goes up!). All this needs to be modelled by our brains to reason in the world. And we make these calculations constantly with our subconscious. When you walk down the street, you notice who looks dangerous, who is likely to try and interact with you, and all sorts of things.
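
A back-of-the-envelope version of that departures-board update - every number here is invented, just to make the arithmetic concrete:

    # Toy Bayesian update: P(flight departs on time), before and after seeing
    # it listed "on time" on the departures board. Every number is invented.

    p_on_time = 0.80              # prior belief the flight leaves on time
    p_listed_if_on_time = 0.98    # board shows "on time" when it really is
    p_listed_if_late = 0.30       # board still shows "on time" when it won't be

    p_listed = (p_listed_if_on_time * p_on_time
                + p_listed_if_late * (1 - p_on_time))

    # Bayes' rule: P(on time | board says "on time")
    posterior = p_listed_if_on_time * p_on_time / p_listed
    print(round(posterior, 3))  # 0.929 - the prediction goes up, as described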

I think that expert systems - even with some fuzzy logic - are a bad approach because systems never capture all of this reasoning. It’s everywhere all the time. I’m typing on my phone. What is the chance I miss a letter? What is the chance autocorrect fixes each mistake I make? And so on, constantly and forever. Examples are everywhere.

6. woodru+4G4 2025-04-10 13:15:44
>>joseph+Tx3
To be clear, I agree that this is why expert systems fail. My point was only that non-monotonic logics and probability have equal explanatory power when it comes to unpredictability: the latter models with probability, and the former models with relations and defeasible defaults.

This is why I say the meta-cognitive explanation is important: I don’t think most people assign actual probabilities to events in their lives, and certainly not rigorous ones in any case. Instead, when people use words like “likely” and “unlikely,” they’re typically expressing a defeasible statement (“typically, a stranger who approaches me on the street is going to ask me for money, but if they’re wearing a suit they’re typically a Jehovah’s Witness instead”).

7. joseph+XW4 2025-04-10 15:00:11
>>woodru+4G4
> I don’t think most people assign actual probabilities to events in their lives, and certainly not rigorous ones in any case.

Interesting. I don't think I agree.

I think people do assign actual probabilities to events. We just do it with a different part of our brain than the part that understands what numbers are. You can tell by thinking through potential bets. For example, if someone (with no special knowledge) offered a 50/50 bet that your dining chair will break the next time you sit on it, well, that sounds like a safe bet! Easy money! What if the odds changed - say, if it breaks you give them $60, and if it doesn't break they give you $40? I'd still take that bet. What about 100-1 odds? 1000-1? There's some point where you start to say "no, I don't want to take that bet", or even "I'd take that bet if we swapped sides".

Somewhere in our minds, we hold an intuition around the probability of different events. But I think it takes a bit of work to turn that intuition into a specific number. We use that intuition for a lot of things - like, to calibrate how much surprise we feel when our expectation is violated. And to intuitively decide how much we should think through all the alternatives. If we place a bet on a coin flip, I'll think through what happens if the coin comes up heads or if it comes up tails. But if I walk into the kitchen, I don't think about the case that I accidentally stub my toe. My intuition assigns that a low enough probability that I don't think about it.
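
To make that concrete: suppose (purely as an invented figure) my gut puts the chance of the chair breaking at 1 in 5000. Then the expected value of taking the "chair holds" side at k-to-1 odds looks like this:

    # Implied-probability sketch. p_break is an invented gut figure. Taking the
    # "chair holds" side at k-to-1 odds: win $1 if it holds, pay $k if it breaks.

    p_break = 1 / 5000

    def expected_value(k):
        return (1 - p_break) * 1 - p_break * k

    for k in (1, 100, 1000, 5000, 10000):
        print(k, round(expected_value(k), 4))
    # 1 0.9996, 100 0.9798, 1000 0.7998, 5000 -0.0002, 10000 -1.0002
    # The EV flips negative once k exceeds (1 - p_break) / p_break = 4999:
    # the odds at which you'd want to swap sides reveal your implicit probability.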

Talking about defeasible statements only scratches the surface of how complex our conditional probability reasoning is. In one sense, a transformer model is just that: an entire transformer-based LLM, with its however many billions of parameters, is one big conditional probability reasoning machine whose only task is to figure out the probability distribution over the next token in a stream. And 100bn-parameter models are clearly still too small to hit the sweet spot - they keep getting smarter as we scale them up. If you torture an LLM a little, you can even get it to spit out exact probability predictions. Just like our human minds.
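
For concreteness, the last step of that machinery is literally a softmax turning scores into a conditional distribution over the next token. The logits here are invented - in a real model they come out of the billions of parameters:

    # The last layer of an LLM emits a score (logit) per vocabulary item, and a
    # softmax turns those scores into P(next token | everything so far).
    # These logits are made up; in a real model they come from the network.
    import math

    logits = {"chair": 2.1, "stool": 1.3, "log": 0.2, "banana": -3.0}

    z = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / z for tok, v in logits.items()}

    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        print(f"{tok}: {p:.3f}")
    # chair: 0.623, stool: 0.280, log: 0.093, banana: 0.004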

I think these kinds of expert systems fail because they can't do the complex probability reasoning that transformer models do. (And even if they could, it would be impossible to manually write out the - perhaps billions of - rules they would need to reason about the world as accurately as ChatGPT can.)
