zlacker

[return to "LLMs cannot find reasoning errors, but can correct them"]
1. valine+ke 2023-11-20 20:28:09
>>koie+(OP)
I wonder if separate LLMs can find each other's logical mistakes. If I ask Llama to find the logical mistake in Yi's output, would that work better than Llama finding a mistake in its own output?

A logical mistake might stem from a blind spot inherent to the model, and that blind spot might not be present in all models.
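
A minimal sketch of what that cross-checking could look like, assuming both models sit behind OpenAI-compatible chat endpoints (the URLs, ports, and model names below are placeholders, not a tested setup):

```python
import requests

# One model answers, a different model reviews the answer for logical mistakes.
# Assumes two locally hosted OpenAI-compatible servers (e.g. llama.cpp / vLLM style).
SOLVER_URL = "http://localhost:8001/v1/chat/completions"   # hypothetical Llama server
CRITIC_URL = "http://localhost:8002/v1/chat/completions"   # hypothetical Yi server

def chat(url, model, prompt):
    resp = requests.post(url, json={
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,
    })
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

question = "If all birds can fly and penguins are birds, can penguins fly?"
answer = chat(SOLVER_URL, "llama", question)

critique_prompt = (
    f"Question: {question}\n"
    f"Proposed answer: {answer}\n"
    "Point out the first logical mistake in the answer, if any. "
    "If there is no mistake, say 'no mistake'."
)
print(chat(CRITIC_URL, "yi", critique_prompt))
```

The interesting comparison would be running the same critique prompt against the solver model itself versus against a different model, and seeing which catches more of its blind spots.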

2. EricMa+gk 2023-11-20 20:52:20
>>valine+ke
Wouldn't this effectively be using a "model" twice the size?

Would it be better to just double the size of one of the models rather than host both?

Genuine question

3. rainco+Cy 2023-11-20 21:53:27
>>EricMa+gk
I think the relationship between model size and training cost isn't linear: a model twice as big takes more resources to train than two models of the original size.
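
Rough numbers, using the common ~6 × params × tokens estimate for training FLOPs and assuming a compute-optimal recipe scales the token count roughly with model size (the 7B / 2T figures are purely illustrative):

```python
# Back-of-the-envelope comparison: two 1x models vs one 2x model.
def train_flops(params, tokens):
    return 6 * params * tokens  # standard rough approximation of training FLOPs

N = 7e9    # 7B parameters (assumed)
D = 2e12   # 2T training tokens (assumed)

two_small  = 2 * train_flops(N, D)          # ~2x the single-model budget
# A compute-optimal 2x model is also trained on ~2x the tokens,
# so its training budget ends up ~4x the single-model budget.
one_double = train_flops(2 * N, 2 * D)

print(f"two 1x models: {two_small:.2e} FLOPs")
print(f"one 2x model:  {one_double:.2e} FLOPs")
```

So under those assumptions, hosting two original-size models is cheaper to get to than training one model of double the size, even before considering inference costs.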