There’s no way to use “rules and facts” to express concepts like “chair”, “grass”, “face”, or “justice”, or really anything. Any project that tries to use deterministic symbolic logic to represent the world fundamentally misunderstands cognition.
The counterposition to this is no more convincing. Cognition is fuzzy, but it's not at all clear that it's probabilistic: I don't look at a stump and assess its chairness with 85% confidence, for example. The actual meta-cognition of "can I sit on this thing?" is more like "it looks sittable, and I can try to sit on it, but if it feels unstable then I shouldn't." In other words, a defeasible inference.
(There's an entire branch of symbolic logic that models fuzziness without probability: non-monotonic logics[1]. I don't think these get us to AGI either.)
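To make "defeasible inference" concrete, here is a minimal sketch of a default rule with a defeater, loosely in the spirit of default logic. The fact names and the function are my own illustrative inventions, not from any particular system:

```python
# A minimal sketch of defeasible (default) reasoning: conclude "sittable"
# by default, but retract the conclusion if a defeater is known.
# The fact names here are illustrative, not from any real library.

def can_sit(facts: set) -> bool:
    # Default rule: "looks sittable" licenses "can sit" --
    # unless the defeater "feels unstable" is also among the known facts.
    return "looks_sittable" in facts and "feels_unstable" not in facts

print(can_sit({"looks_sittable"}))                    # True
print(can_sit({"looks_sittable", "feels_unstable"}))  # False: conclusion withdrawn
```

The non-monotonic part is that adding a fact can retract a conclusion, something classical (monotonic) logic never allows.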
What I say is clearly not deterministic for you. You don’t know which word will come next; you have a probability distribution over words, but that’s it. Banana.
I caught a plane yesterday. I knew there would be a plane (since I booked it) and I knew where it would go. Well, except it wasn’t certain: the flight could have been delayed or cancelled. I guess I knew there would be a plane with 90% certainty, and I knew the plane would actually get me to my destination with maybe 98% certainty (there could have been a malfunction mid-air). But the probability that I’d make it home on time rose significantly when I saw the flight listed, on time, at the airport.
Who I sat next to was far less certain: I ended up next to a 30-year-old electrician with a sore neck.
My point is that so much of the reasoning we do, all the time, is probabilistic in nature. We don’t even think about it. Other people in this thread are even talking about chairs breaking when you sit on them: every time you sit on a chair, you run a quick calculation to decide whether the chair is safe and will support your weight. This is all automatic.
Simple “fuzzy logic” isn’t enough, because so many probabilities change as a result of other events (if the plane is listed on the departures board, the prediction goes up!). All of this needs to be modelled by our brains to reason in the world, and we make these calculations constantly, subconsciously. When you walk down the street, you notice who looks dangerous, who is likely to try to interact with you, and all sorts of other things.
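To make the departures-board point concrete, here is a toy Bayesian update. The prior and likelihoods are numbers I’ve invented purely for illustration:

```python
# Toy Bayesian update for the flight example. All probabilities below are
# made-up illustrative numbers, not real aviation statistics.

prior_on_time = 0.90          # belief before checking the departures board
p_listed_if_on_time = 0.99    # on-time flights are almost always shown as on time
p_listed_if_delayed = 0.20    # a delayed flight might still briefly show "on time"

# Bayes' rule: P(on_time | listed) = P(listed | on_time) * P(on_time) / P(listed)
p_listed = (p_listed_if_on_time * prior_on_time
            + p_listed_if_delayed * (1 - prior_on_time))
posterior_on_time = p_listed_if_on_time * prior_on_time / p_listed

print(f"{posterior_on_time:.3f}")  # ~0.978: the listing raises the estimate from 0.90
```

One observation shifts the whole distribution, and that kind of updating never stops.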
I think that expert systems, even with some fuzzy logic, are a bad approach because such systems never capture all of this reasoning. It’s everywhere, all the time. I’m typing on my phone: what is the chance I miss a letter? What is the chance autocorrect fixes each mistake I make? And so on, constantly and forever. The examples are everywhere.