zlacker

[return to "Mistral 7B Fine-Tune Optimized"]
1. nickth+Cb 2023-12-20 20:55:18
>>tosh+(OP)
Any time I see a claim that a 7B model is better than GPT-4, I basically stop reading. If you are going to make that claim, give me several easily digestible examples of it actually happening.
2. jug+9j 2023-12-20 21:40:03
>>nickth+Cb
What I think they're claiming is that this is a base model intended for further fine-tuning, and that a version fine-tuned for a specific task might perform better than GPT-4 on that task.

It's an argument they make at least as much to market fine-tuning itself as to market their own model.

This is not a general-purpose model that outperforms another general-purpose model (GPT-4) across the board.

That can of course be useful, because for certain business use cases the resource cost of running a fine-tuned 7B model is comparatively minuscule.
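For concreteness, a task-specific fine-tune of that kind might look roughly like the sketch below (assuming the Hugging Face transformers/peft/datasets stack; the dataset file, hyperparameters, and LoRA settings are illustrative placeholders, not anything Mistral has published):

    import torch
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    base = "mistralai/Mistral-7B-v0.1"   # base model id (placeholder)

    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token        # tokenizer has no pad token by default

    model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
    model = get_peft_model(model, LoraConfig(       # LoRA: train small adapters,
        r=16, lora_alpha=32, lora_dropout=0.05,     # keep the 7B base weights frozen
        target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

    # One JSONL file with a "text" field per example for the narrow task (placeholder).
    ds = load_dataset("json", data_files="task_examples.jsonl")["train"]
    ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
                remove_columns=ds.column_names)

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="mistral-7b-task-lora",
                               per_device_train_batch_size=1,
                               gradient_accumulation_steps=8,
                               num_train_epochs=3, learning_rate=2e-4,
                               bf16=True, logging_steps=10),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()

Only the small adapter weights get trained here, which is where the comparatively tiny resource/cost figure comes from relative to training or serving a frontier model.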
