[return to "Mistral 7B Fine-Tune Optimized"]
1. nickth+Cb
2023-12-20 20:55:18
>>tosh+(OP)
Any time I see a claim that our 7B models are better than GPT-4, I basically stop reading. If you are going to make that claim, give me several easily digestible examples of it actually happening.
2. achill+Dc
2023-12-20 21:01:44
>>nickth+Cb
They can absolutely outperform GPT-4 for specific use cases.
3. nickth+1d
2023-12-20 21:03:55
>>achill+Dc
I am very open to believing that. I'd love to see some examples.
4. GaggiX+te
2023-12-20 21:11:03
>>nickth+1d
Well, it's pretty easy to find examples online. Here's one using Llama 2, not even Mistral or any fancy techniques:
https://www.anyscale.com/blog/fine-tuning-llama-2-a-comprehe...