~"When told where it's wrong, LLM can correct itself to improve accuracy."
This is similar to cheating in chess: a master only needs to be told the evaluation of a few key positions to gain an advantage.
> recent attempts to self-correct logical or reasoning errors often cause correct answers to become incorrect, resulting in worse performances overall (Huang et al., 2023)
Plus, the corrections themselves aren't always accurate. And of course, if you tell the model where it's wrong and give it a second chance, the error rate will drop: the improvement comes from the external hint, not from genuine self-correction.
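
As a sketch of why this is a confound, here is a minimal "second chance" loop in Python. The `query_llm` stub and `oracle_feedback` helper are hypothetical placeholders, not any real API; the point is that the retry prompt only fires when a checker that already knows the gold answer flags the error.

```python
import random
from typing import Optional

def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; stubbed so the sketch runs.
    return random.choice(["42", "41"])

def oracle_feedback(answer: str, gold: str) -> Optional[str]:
    # The "cheat": a checker that knows the gold answer and points out the error.
    if answer.strip() == gold:
        return None
    return f"Your answer {answer!r} is wrong; re-check your final step."

def answer_with_second_chance(question: str, gold: str) -> str:
    first = query_llm(question)
    hint = oracle_feedback(first, gold)
    if hint is None:
        return first  # already correct; no revision attempted
    # The retry prompt leaks ground-truth information: the model now knows
    # its first answer was wrong. Any accuracy gain measured this way is
    # partly the oracle's doing, not evidence of genuine self-correction.
    return query_llm(f"{question}\n{hint}\nTry again:")

if __name__ == "__main__":
    print(answer_with_second_chance("What is 6 * 7?", gold="42"))
```

Remove the oracle (i.e., ask the model to find its own mistakes) and, per Huang et al. (2023), the gains tend to disappear or reverse.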