zlacker

[return to "LLMs cannot find reasoning errors, but can correct them"]
1. kromem+UE[view] [source] 2023-11-20 22:29:21
>>koie+(OP)
Stop doing self-correction within the context of the model's own generation.

The previous paper on self-correction told the model "you previously said X - are there errors with this?"

This one statically inserts the mistakes into the prompt, as a task prompt and response with no additional context, immediately before asking whether there are any errors.

Think about the training data.

How often does the training data (most of the Internet) reflect users identifying issues with their own output?

How often does the training data reflect users identifying issues with someone else's output?

Try doing self-correction by setting up the context as "this was someone else's answer." It is still technically self-correction if the model is reviewing its own output in that context - it just isn't framed as "correct your own answer."

This may even be part of why the classifier did a better job at identifying issues - less the fine-tuning and more the context (unfortunately I don't see the training/prompts for the classifier in their GitHub repo).

It really seems like the aversion to anthropomorphizing LLMs is leading people to ignore or overlook relevant patterns in the highly anthropomorphic training data fed into them. We might not want to entertain that an LLM has a concept of self vs. other, or a bias between critiques based on such a differentiation, and yet the training data almost certainly reflects such a concept and bias.

I'd strongly encourage future work on self-correction to explicitly define the thing being evaluated as the work of another. (Or ideally even compare self-correction rates between critiques in the context of their own output vs another's output.)
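To make the suggested comparison concrete, here is a minimal sketch (assuming an OpenAI-style chat-completion client; the model name, wrapper function, and prompt wording are illustrative placeholders, not anything from the paper) that critiques the same wrong answer under both framings:

    # Critique the same (wrong) answer twice: once framed as the model's own
    # output, once framed as someone else's. Model name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()
    TASK = "A train leaves at 3pm travelling 60 mph. How far has it gone by 5pm?"
    ANSWER = "It has travelled 90 miles."  # deliberately wrong (should be 120)

    def critique(messages):
        resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        return resp.choices[0].message.content

    # Framing 1: "correct your own answer" - the answer sits in the assistant role.
    self_framed = [
        {"role": "user", "content": TASK},
        {"role": "assistant", "content": ANSWER},
        {"role": "user", "content": "Are there any errors in your answer above?"},
    ]

    # Framing 2: "this was someone else's answer" - attributed to a third party.
    other_framed = [
        {"role": "user", "content": (
            f"A student was asked: {TASK}\n"
            f"Their answer was: {ANSWER}\n"
            "Are there any errors in their answer?"
        )},
    ]

    print("self-framed critique: ", critique(self_framed))
    print("other-framed critique:", critique(other_framed))

Run over a set of tasks with known mistakes, the interesting number is the rate at which each framing actually flags the error.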

2. NoToP+fG1[view] [source] 2023-11-21 05:30:26
>>kromem+UE
With due respect (and I actually mean due respect), this embodies exactly what is wrong with the modern approach to AI. Who cares that there are no examples in the training set? True AI should be capable of taking a few steps off book without getting flummoxed. When you learn your first language, the teacher does not stand before the class and provide examples of ungrammatical statements, yet you figure out the rules of grammar just fine.

There is something fundamentally flawed in the approach, not in the data.
