zlacker
Thread: "Mistral 7B Fine-Tune Optimized"
1. m3kw9+zH
2023-12-21 00:17:10
>>tosh+(OP)
I'm really struggling to find a use case for these local models when even ChatGPT 3.5 can do the job as well as any of them so far.
2. coder5+KK
2023-12-21 00:40:06
>>m3kw9+zH
The article shows fine-tuned Mistral 7B outperforming GPT-4, never mind GPT-3.5.
3. m3kw9+kR
2023-12-21 01:43:12
>>coder5+KK
This model isn't even close to 3.5, at least from when I used it. First of all, it doesn't follow instructions properly, and it just runs on and on.