zlacker

[return to "Mistral 7B Fine-Tune Optimized"]
1. nickth+Cb[view] [source] 2023-12-20 20:55:18
>>tosh+(OP)
Anytime I see a claim that a 7B model is better than GPT-4, I basically stop reading. If you're going to make that claim, give me several easily digestible examples of it actually happening.
◧◩
2. hospit+cd[view] [source] 2023-12-20 21:04:50
>>nickth+Cb
Some things to note about gpt4:

>Sometimes it will spit out terrible, horrid answers. I believe this might be due to time of day / too many users. They limit tokens.

>Sometimes it will lie because of its alignment training

>Sometimes I feel like it tests things on me

So yes, you are right that GPT-4 is overall better, but I find myself using local models because I stopped trusting GPT-4.

◧◩◪
3. crooke+gf[view] [source] 2023-12-20 21:15:52
>>hospit+cd
Don't forget that GPT-4 also has seasonal depression [1].

[1]: https://twitter.com/RobLynch99/status/1734278713762549970

(Though with that said, the seasonal issue might be common to any LLM with training data annotated by time of year.)
