zlacker

Mistral 7B Fine-Tune Optimized
1. nickth+Cb 2023-12-20 20:55:18
>>tosh+(OP)
Anytime I see a claim that a 7B model is better than GPT-4, I basically stop reading. If you're going to make that claim, give me several easily digestible examples of it actually happening.
2. hospit+cd 2023-12-20 21:04:50
>>nickth+Cb
Some things to note about GPT-4:

- Sometimes it spits out terrible, horrid answers. I believe this might be due to the time of day / too many users; they limit tokens.

- Sometimes it will lie because of its alignment.

- Sometimes I feel like it tests things on me.

So yes, you are right, GPT-4 is better overall, but I find myself using local models because I stopped trusting GPT-4.

3. moffka+8f 2023-12-20 21:15:21
>>hospit+cd
How are local models better in terms of trust? GPT-4 is the only model I've seen that's actually tuned to say no when it doesn't have the information being asked for. Though I do agree it ran better earlier this year.

The best open source has to offer is Mixtral, which will confidently make up a biography of a person it's never heard of or write a script using nonexistent libraries.
