1. cmrdpo (OP) 2023-11-21 05:11:05
Yep. Where you can see them really get tripped up is when there are multiple "levers" or points of potential contradiction: lots of dependent clauses, chains of predicates that all have to line up for something to make sense. When they get one item wrong, they don't "see" the consequences for the others. And if you get them to correct that one, they'll often turn around and mess up the others.

Because at no point is the "mind" involved doing a step-by-step reduction of the problem. It doesn't do formal reasoning.

Humans usually don't either, but almost all of them can do a form of it when required to, either with the assistance of a teacher or in extremis when they have to. We've all had the experience of being flustered, taking a deep breath, and then "working through" something. After spending time with GPT and the like, it becomes clear they're not doing that.

It's not that reasoning is intrinsic to all human thought -- we're far lazier than that -- but when we need to, we can usually do it.
