zlacker

1. skepti+(OP)[view] [source] 2024-02-14 02:35:28
>>mfigui+M3
Frankly, OpenAI seems to be losing its luster, and fast.

Plugins were a failure. GPTs are a little better, but I still don't see the product-market fit. GPT-4 is still king, but not by that much any more. It's not even clear that they're doing great research, because they don't publish.

GPT-5 has to be incredibly good at this point, and I'm not sure that it will be.

2. mfigui+M3[view] [source] 2024-02-14 03:08:18
3. roody1+Sf[view] [source] 2024-02-14 04:59:52
>>skepti+(OP)
Running Ollama with an 80 GB Mistral model works as well as, if not better than, ChatGPT 3.5. This is a good thing for the world IMO, as the magic is no longer held by just OpenAI. The speed at which competitors have caught up in even the last 3 months is astounding.
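
For anyone who wants to try the same setup, here's a minimal sketch of querying a locally served model through Ollama's HTTP API (the model tag and prompt are just placeholders, swap in whatever model you actually pulled):

    # Minimal sketch: ask a locally served Mistral model a question via
    # Ollama's HTTP API (assumes `ollama pull mistral` has already run
    # and the Ollama server is listening on its default port).
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "mistral",  # placeholder tag; use the model you pulled
            "prompt": "Summarize the CAP theorem in one sentence.",
            "stream": False,     # return one JSON object instead of a stream
        },
    )
    print(resp.json()["response"])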
◧◩
4. huyter+5h[view] [source] 2024-02-14 05:16:30
>>roody1+Sf
But no one cares about 3.5. It’s an order of magnitude worse than 4. An order of magnitude is a lot harder to catch up with.
◧◩◪
5. sjwhev+il[view] [source] 2024-02-14 06:10:42
>>huyter+5h
What Mistral has though is speed, and with speed comes scale.
◧◩◪◨
6. spacem+Tm[view] [source] 2024-02-14 06:31:21
>>sjwhev+il
Who cares about speed if you’re wrong?

This isn’t a race to write the most lines of code or the most lines of text. It’s a race to write the most correct lines of code.

I’ll wait half an hour for a response if I know I’m getting at least staff-engineer-level code for every question.

◧◩◪◨⬒
7. ein0p+lx[view] [source] 2024-02-14 08:33:18
>>spacem+Tm
That’s the correct answer. Years ago I worked on inference efficiency on edge hardware at a startup. Time after time I saw that users vastly prefer slower but more accurate and robust systems. Put succinctly: nobody cares how quick a model is if it doesn’t do a good job. Another thing I discovered is that it can be very difficult to convince software engineers of this obvious fact.
◧◩◪◨⬒⬓
8. Al-Khw+WJ[view] [source] 2024-02-14 11:03:53
>>ein0p+lx
Less compute also means lower cost, though.

I see how most people would prefer a better but slower model when prices are equal, but I'm sure many would prefer a worse $2/mo model over a better $20/mo model.

◧◩◪◨⬒⬓⬔
9. ein0p+tB1[view] [source] 2024-02-14 16:37:23
>>Al-Khw+WJ
That’s the thing I’m finding so hard to explain. Nobody would ever pay even $2 for a system that is worse at solving the problem. There is some baseline compute you need to deliver certain types of models. Going below that level for lower cost at the expense of accuracy and robustness is a fool’s errand.

In LLMs it’s even worse. To make it concrete: for how I use LLMs, not only will I not pay for anything less capable than GPT-4, I won’t even use it for free. Other LLMs might perform well on narrow problems after fine-tuning, but even then I’d prefer the model with the highest metrics, not the lowest inference cost.

◧◩◪◨⬒⬓⬔⧯
10. sjwhev+Ss2[view] [source] 2024-02-14 20:53:50
>>ein0p+tB1
So I think that’s a “your problem isn’t right for the tool” issue, not a “Mistral isn’t capable” issue.