>>koie+(OP)
I wonder if separate LLMs can find each other's logical mistakes. If I ask llama to find the logical mistake in Yi's output, would that work better than llama finding a mistake in its own output?
A logical mistake might point to a blind spot inherent to the model, a blind spot that other models might not share.
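Easy enough to try with two local OpenAI-compatible endpoints (llama.cpp's server, for example). Rough sketch, the ports and model names are just placeholders for whatever you're actually running:

from openai import OpenAI

# One client per locally hosted model (hypothetical ports).
llama = OpenAI(base_url="http://localhost:8001/v1", api_key="none")
yi = OpenAI(base_url="http://localhost:8002/v1", api_key="none")

def answer(client: OpenAI, model: str, question: str) -> str:
    """Ask one model to reason through a question."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def critique(client: OpenAI, model: str, question: str, proposed: str) -> str:
    """Ask a (possibly different) model to hunt for logical mistakes."""
    prompt = (
        f"Question: {question}\n\n"
        f"Proposed answer: {proposed}\n\n"
        "Identify any logical mistake in the proposed answer. "
        "If the reasoning is sound, say so."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "If all bloops are razzies and some razzies are lazzies, are all bloops lazzies?"
yi_answer = answer(yi, "yi-34b-chat", question)

# Compare cross-model critique (llama on Yi) with self-critique (Yi on Yi).
print("llama on Yi:", critique(llama, "llama-2-70b-chat", question, yi_answer))
print("Yi on Yi:  ", critique(yi, "yi-34b-chat", question, yi_answer))

Run it over a batch of questions with known answers and you'd at least get a rough signal on whether the cross-model critic catches mistakes the self-critic misses.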