zlacker

[return to "Mistral 7B Fine-Tune Optimized"]
1. nickth+Cb[view] [source] 2023-12-20 20:55:18
>>tosh+(OP)
Anytime I see a claim that a 7B model is better than GPT-4, I basically stop reading. If you're going to make that claim, give me several easily digestible examples of it actually happening.
2. brucet+oi[view] [source] 2023-12-20 21:34:42
>>nickth+Cb
IDK about GPT-4 specifically, but I have recently witnessed a case where small fine-tuned 7Bs greatly outperformed larger models (Mixtral Instruct, Llama 70B fine-tunes) on a few very specific tasks.

There is nothing unreasonable about this. What I do dislike is when that information is presented in a fishy way, implying that a model "outperforms GPT-4" without any qualification.