zlacker

[return to "LLMs cannot find reasoning errors, but can correct them"]
1. valine+ke 2023-11-20 20:28:09
>>koie+(OP)
I wonder if separate LLMs can find each other's logical mistakes. If I ask Llama to find the logical mistake in Yi's output, would that work better than Llama finding a mistake in its own output?

A logical mistake might stem from a blind spot inherent to the model, one that other models don't necessarily share.
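
A minimal sketch of what that cross-model check could look like. Here generate is a hypothetical stub for whatever inference API you run (llama.cpp, vLLM, Ollama, ...), and the model names are just placeholders:

    def generate(model: str, prompt: str) -> str:
        # Hypothetical helper: wraps whatever inference backend
        # you actually run; not a real library call.
        raise NotImplementedError

    def cross_check(question: str) -> str:
        # Model A produces the candidate reasoning.
        answer = generate("yi-34b", f"Question: {question}\nThink step by step.")
        # Model B, trained on different data and presumably with
        # different blind spots, critiques it instead of the
        # original model grading itself.
        return generate(
            "llama-2-70b",
            f"Question: {question}\n"
            f"Proposed answer:\n{answer}\n"
            "Point out the first logical mistake in the reasoning, "
            "or reply NO MISTAKE.",
        )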

2. EricMa+gk 2023-11-20 20:52:20
>>valine+ke
Wouldn't this effectively be using a "model" twice the size?

Would it be better to just double the size of one model rather than host both?

Genuine question

3. sevagh+043 2023-11-21 15:37:44
>>EricMa+gk
I believe another factor is that the model sometimes responds better to your prompt than at other times. This way you get two dice rolls at your prompt hitting "the good path."
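
One way to read that is as a small best-of-n loop over models. A rough sketch, reusing the hypothetical generate stub from above; score is a placeholder for however you'd pick the better sample (a verifier model, a heuristic, agreement across samples):

    def score(answer: str) -> float:
        # Placeholder scorer: verifier model, heuristic check,
        # or agreement with the other samples.
        raise NotImplementedError

    def best_of_n(question: str, models: list[str], n: int = 2) -> str:
        # Several dice rolls: sample each model n times and keep
        # the candidate the scorer likes best.
        candidates = [generate(m, question) for m in models for _ in range(n)]
        return max(candidates, key=score)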