zlacker

[parent] [thread] 4 comments
1. Al-Khw+(OP)[view] [source] 2024-02-14 11:03:53
Less compute also means lower cost, though.

I see how most people would prefer a better but slower model when price is equal, but I'm sure many prefer a worse $2/mo model over a better $20/mo model.

replies(1): >>ein0p+xR
2. ein0p+xR[view] [source] 2024-02-14 16:37:23
>>Al-Khw+(OP)
That’s the thing I’m finding so hard to explain. Nobody would ever pay even $2 for a system that is worse at solving the problem. There is some baseline compute you need to deliver certain types of models. Going below that level for lower cost at the expense of accuracy and robustness is a fool’s errand.

In LLMs it’s even worse. To make it concrete: for how I use LLMs, not only will I not pay for anything less capable than GPT4, I won’t even use it for free. It could be that other LLMs perform well on narrow problems after fine-tuning, but even then I’d prefer the model with the highest metrics, not the lowest inference cost.

replies(1): >>sjwhev+WI1
◧◩
3. sjwhev+WI1[view] [source] [discussion] 2024-02-14 20:53:50
>>ein0p+xR
So I think that’s a “your problem isn’t right for the tool” issue, not a “Mistral isn’t capable” issue.
replies(1): >>ein0p+NN1
◧◩◪
4. ein0p+NN1[view] [source] [discussion] 2024-02-14 21:14:30
>>sjwhev+WI1
It isn’t capable unless you have a very specialized task and carefully fine-tune to solve just that task. GPT4 covers a lot of ground out of the box. The best model I’ve seen so far on the FOSS side, Mixtral MoE, is less capable than even GPT-3.5. I often submit my requests to both Mixtral and GPT4. If I’m problem solving (learning something, working with code, summarizing, working on my messaging), Mixtral is nearly always a waste of time in comparison.
replies(1): >>sjwhev+kh2
◧◩◪◨
5. sjwhev+kh2[view] [source] [discussion] 2024-02-14 23:59:51
>>ein0p+NN1
Again, that’s precisely what I’m saying. A bounded task is best executed against the smallest possible model at the greatest possible speed. This is true for business reasons ($$$) as well as environmental ones (smaller model -> less carbon).

LLMs are not AGI; they are tools with specific uses we are still discovering.

If you aren’t trying to optimize your accuracy to start with, and are just saying “I’ll run the most expensive thing and assume it is better” with zero evaluation, you’re wasting money and time, and hurting the environment.
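The evaluation step being argued for here can be sketched in a few lines: measure each candidate model on a small labeled set for your bounded task, then pick the cheapest one that clears an accuracy bar. This is a minimal illustration, not anyone's actual harness; `pick_cheapest_adequate` and the toy predictors are hypothetical stand-ins for real model clients (an OpenAI call, a local Mixtral, a BERT fine-tune).

```python
def accuracy(predict, labeled_examples):
    """Fraction of examples where the model's answer matches the label."""
    correct = sum(1 for text, label in labeled_examples if predict(text) == label)
    return correct / len(labeled_examples)

def pick_cheapest_adequate(candidates, labeled_examples, min_accuracy=0.9):
    """Among (name, cost_per_call, predict_fn) candidates, return the name
    of the cheapest one that clears the accuracy bar, or None if none do."""
    adequate = [
        (cost, name)
        for name, cost, predict in candidates
        if accuracy(predict, labeled_examples) >= min_accuracy
    ]
    return min(adequate)[1] if adequate else None

# Toy demo: fake predictors standing in for real model calls.
examples = [("good", "pos"), ("bad", "neg"), ("great", "pos"), ("awful", "neg")]
perfect = lambda text: "pos" if text in ("good", "great") else "neg"
always_pos = lambda text: "pos"  # a cheap but broken model

candidates = [
    ("big-expensive-model", 0.03, perfect),
    ("small-cheap-model", 0.001, perfect),
    ("broken-model", 0.0001, always_pos),
]
print(pick_cheapest_adequate(candidates, examples))  # small-cheap-model
```

The point of the sketch: with even a tiny labeled set, the expensive model only wins if it actually clears a bar the cheaper one misses.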

Also, I don’t even like running Mistral if I can avoid it - a lot of tasks can be done with a fine-tune of BERT or DistilBERT. It takes more work, but my custom BERT models way outperform GPT-4 on bounded tasks because I have highly curated training data.

Within specialized domains you just aren’t going to see GPT-4/5/6 performing on par with expert curated data.

[go to top]