~"When told where it's wrong, an LLM can correct itself and improve accuracy."
Similar to cheating in chess: a master only needs to be told the evaluation of a few positions to gain an advantage.
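The claim can be sketched as a feedback loop where an external verifier pinpoints the error and the model revises. A minimal sketch with hypothetical stubs (`model` stands in for an LLM call, `check` for an external verifier; both names and behaviors are assumptions, not a real API):

```python
# Sketch of verifier-guided self-correction. The stubs below are
# placeholders: `model` fakes an LLM that only fixes its answer once
# feedback pinpoints the faulty step, mirroring the claim that
# *localized* external feedback is what drives the accuracy gain.

def model(prompt: str, feedback: str = "") -> str:
    """Stub LLM: wrong on the first pass, correct once the
    feedback names the faulty step."""
    if "step 2" in feedback:
        return "answer: 42"
    return "answer: 41"

def check(answer: str) -> str:
    """Stub external verifier: returns localized feedback,
    or an empty string if the answer is accepted."""
    return "" if answer == "answer: 42" else "arithmetic error at step 2"

def solve(prompt: str, max_rounds: int = 3) -> str:
    answer = model(prompt)
    for _ in range(max_rounds):
        feedback = check(answer)
        if not feedback:
            break  # verifier accepts; stop revising
        answer = model(prompt, feedback)  # revise with localized feedback
    return answer

print(solve("what is 6 * 7?"))
```

Note the contrast with the quote below: here the correction signal comes from outside the model, not from the model re-judging its own reasoning.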
> recent attempts to self-correct logical or reasoning errors often cause correct answers to become incorrect, resulting in worse performances overall (Huang et al., 2023)