It can "correct" only because it goes out and finds and produces a pattern template that matches the problem better, or just differently (often just differently, and it fails in new ways, in my experience). It never used reasoning to find the answer in the first place, and it doesn't use reasoning to find the corrected answer either.
The papers referenced here get into this: https://cacm.acm.org/blogs/blog-cacm/276268-can-llms-really-...