Mistral 7B Fine-Tune Optimized
1. nickth+Cb 2023-12-20 20:55:18
>>tosh+(OP)
Anytime I see a claim that a 7B model is better than GPT-4, I basically stop reading. If you are going to make that claim, give me several easily digestible examples of it actually happening.
2. achill+Dc 2023-12-20 21:01:44
>>nickth+Cb
They can absolutely outperform GPT-4 for specific use cases.
3. nickth+1d 2023-12-20 21:03:55
>>achill+Dc
I am very open to believing that. I'd love to see some examples.
4. buggle+Kj 2023-12-20 21:43:55
>>nickth+1d
You can fine-tune a small model yourself and see. GPT-4 is an amazing general model, but it won’t perform best at every task you throw at it out of the box. I have a fine-tuned Mistral 7B model that outperforms GPT-4 on a specific type of structured data extraction. Maybe a fine-tuned GPT-4 could beat it, but that costs a lot of money for what I can now do locally for the cost of electricity.
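
A minimal sketch of what such a local fine-tune can look like, assuming a LoRA setup with Hugging Face transformers, peft, and datasets; the model checkpoint, training file name, data fields, and hyperparameters below are illustrative rather than the commenter's actual configuration:

```python
# Minimal LoRA fine-tuning sketch for a structured-extraction task.
# Library choices and the JSONL file/field names are assumptions for
# illustration, not the commenter's actual setup.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token

model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto")

# Freeze the 7B base weights and train only small LoRA adapter matrices.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]))

# Hypothetical training file: one JSON object per line with a "prompt"
# (the raw document) and a "completion" (the structured output we want).
def to_text(ex):
    return {"text": ex["prompt"] + "\n" + ex["completion"] + tokenizer.eos_token}

def tokenize(ex):
    return tokenizer(ex["text"], truncation=True, max_length=2048)

train_data = (load_dataset("json", data_files="extraction_examples.jsonl")["train"]
              .map(to_text)
              .map(tokenize, remove_columns=["prompt", "completion", "text"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="mistral-7b-extractor",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-4,
        logging_steps=10,
        bf16=True,
    ),
    train_dataset=train_data,
    # Causal-LM collation: pads each batch and derives labels from input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("mistral-7b-extractor")  # stores just the LoRA adapter weights
```

Because only the adapter matrices are trained while the base weights stay frozen, this kind of task-specific tuning can run on a single local GPU, which is what makes the "cost of electricity" comparison plausible.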