It's the same pattern you'd see in a pedagogical article about correcting reasoning errors, except that it can generate some share of the article's content on its own.
With more layers of post-processing behind the curtain, you might be able to build an assembly on top of this behavior that looked convincingly like it was correcting reasoning errors on its own.
So... yes and no.
Because at no point is the "mind" involved doing a step-by-step reduction of the problem. It doesn't do formal reasoning.
Humans usually don't either, but almost all of them can do a form of it when required: with the assistance of a teacher, or in extremis on their own. We've all had the experience of being flustered, taking a deep breath, and then "working through" something. After spending time with GPT and the like, it becomes clear they're not doing that.
It's not that reasoning is intrinsic to all human thought -- we're far lazier than that -- but when we need to, we can usually do it.