There’s no way to use “rules and facts” to express concepts like “chair” or “grass” or “face” or “justice”, or really anything. Any project trying to use deterministic symbolic logic to represent the world fundamentally misunderstands cognition.
The counterposition to this is no more convincing: cognition is fuzzy, but it's not at all clear that it's probabilistic. I don't look at a stump and ascertain its chairness with a confidence of 85%, for example. The actual meta-cognition of "can I sit on this thing?" is more like "it looks sittable, and I can try to sit on it, but if it feels unstable then I shouldn't." In other words, a defeasible inference.
(There's an entire branch of symbolic logic that models fuzziness without probability: non-monotonic logics[1]. I don't think those get us to AGI either.)
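For concreteness, here's roughly what I mean by a defeasible inference, as a toy Python sketch (the function name and dict keys are mine, purely illustrative, not any real non-monotonic formalism): the default conclusion stands until a defeater shows up, and no step involves a probability.

    # Toy sketch of defeasible inference: a default conclusion that gets
    # retracted when a defeater is observed. No confidence values anywhere.
    def can_sit_on(thing):
        # Default rule: if it looks sittable, conclude that I can sit on it...
        if not thing.get("looks_sittable", False):
            return False
        # ...unless a defeater ("it feels unstable") is known, which withdraws
        # the conclusion outright rather than lowering a probability.
        if thing.get("feels_unstable", False):
            return False
        return True

    stump = {"looks_sittable": True}
    print(can_sit_on(stump))                              # True: default holds
    print(can_sit_on({**stump, "feels_unstable": True}))  # False: defeated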
But I think you did. Not consciously, but I think your brain definitely did.
https://www.nature.com/articles/415429a
https://pubmed.ncbi.nlm.nih.gov/8891655/
Or intuitively: my ability to determine whether a bird flies or not is no doubt statistically optimal when measured from the outside, but the underlying cognitive process is not itself inherently statistical. I could be looking at a penguin and recalling that birds fly by default, except when they're penguins, and that even that exception gives way if the penguin is wearing a jetpack. That's a non-statistical set of relations; it only looks statistical when its external behavior is observed and modeled.
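To make that concrete, here's the same thing as an illustrative Python sketch (again, the names are mine and this isn't a real default-logic engine): a chain of defaults and exceptions, with nothing numeric anywhere.

    # Sketch of the bird example as layered defaults and exceptions,
    # not probabilities.
    def flies(animal):
        if animal.get("species") == "penguin":
            # Exception to the default: penguins don't fly...
            # ...unless the exception is itself defeated by a jetpack.
            return animal.get("has_jetpack", False)
        # Default rule: birds fly.
        return True

    print(flies({"species": "sparrow"}))                       # True
    print(flies({"species": "penguin"}))                       # False
    print(flies({"species": "penguin", "has_jetpack": True}))  # True

The observable input/output behavior of something like that can still be fit perfectly well by a statistical model, which is all an external measurement of the behavior captures.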