[return to "LLMs cannot find reasoning errors, but can correct them"]
1. valine+ke 2023-11-20 20:28:09
>>koie+(OP)
I wonder if separate LLMs can find each other's logical mistakes. If I ask Llama to find the logical mistake in Yi's output, would that work better than Llama finding a mistake in Llama's own output?

A logical mistake might imply a blind spot inherent to the model, a blind spot that might not be present in all models.
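
A minimal sketch of that cross-review loop, assuming both models are served behind OpenAI-compatible endpoints (the URLs, model names, and prompt here are illustrative, not anything from the paper):

    # Cross-model critique: one model answers, a different model reviews.
    # Any OpenAI-compatible server works; endpoints and model names below
    # are assumptions for illustration.
    from openai import OpenAI

    # Two separate endpoints, one per locally served model (hypothetical).
    llama = OpenAI(base_url="http://localhost:8001/v1", api_key="unused")
    yi = OpenAI(base_url="http://localhost:8002/v1", api_key="unused")

    QUESTION = ("A bat and a ball cost $1.10 together. The bat costs "
                "$1.00 more than the ball. How much does the ball cost?")

    # Step 1: Yi produces an answer.
    answer = yi.chat.completions.create(
        model="yi-34b-chat",
        messages=[{"role": "user", "content": QUESTION}],
    ).choices[0].message.content

    # Step 2: Llama reviews Yi's answer rather than its own.
    critique = llama.chat.completions.create(
        model="llama-2-70b-chat",
        messages=[{
            "role": "user",
            "content": (f"Question: {QUESTION}\n\nProposed answer:\n"
                        f"{answer}\n\nIdentify the first logical mistake "
                        "in this answer, if any."),
        }],
    ).choices[0].message.content

    print(critique)

The point of the swap in step 2 is that the reviewer never sees its own reasoning, so a blind spot shared by model and answer can't hide the mistake.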

2. sevagh+bn 2023-11-20 21:03:37
>>valine+ke
I frequently share responses between ChatGPT (paid version with GPT-4) and Copilot-X to break an impasse when trying to generate or fix a tricky piece of code.