zlacker

1. skepti+(OP)[view] [source] 2024-02-14 02:35:28
>>mfigui+M3
Frankly, OpenAI seems to be losing its luster, and fast.

Plugins were a failure. GPTs are a little better, but I still don't see the product market fit. GPT-4 is still king, but not by that much any more. It's not even clear that they're doing great research, because they don't publish.

GPT-5 has to be incredibly good at this point, and I'm not sure that it will be.

2. mfigui+M3[view] [source] 2024-02-14 03:08:18
3. roody1+Sf[view] [source] 2024-02-14 04:59:52
>>skepti+(OP)
Running Ollama with an 80GB Mistral model works as well as, if not better than, ChatGPT 3.5. This is a good thing for the world IMO, as the magic is no longer held by OpenAI alone. The speed at which competitors have caught up in even the last 3 months is astounding. For anyone curious, here's a minimal sketch of querying a local Ollama server from Python over its REST API (this assumes `ollama serve` is running on the default port 11434 and you've already done an `ollama pull mistral`; the model tag is whatever you pulled):
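    # Sketch: one-shot completion against a local Ollama server.
    # Assumes Ollama's default REST endpoint; stdlib only, no extra deps.
    import json
    import urllib.request

    def ask(prompt: str, model: str = "mistral") -> str:
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # one complete response instead of a token stream
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask("Explain what a context window is, in two sentences."))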
◧◩
4. huyter+5h[view] [source] 2024-02-14 05:16:30
>>roody1+Sf
But no one cares about 3.5. It’s an order of magnitude worse than 4. An order of magnitude is a lot harder to catch up with.
◧◩◪
5. sjwhev+il[view] [source] 2024-02-14 06:10:42
>>huyter+5h
What Mistral has though is speed, and with speed comes scale.
◧◩◪◨
6. spacem+Tm[view] [source] 2024-02-14 06:31:21
>>sjwhev+il
Who cares about speed if you’re wrong?

This isn’t a race to write the most lines of code or the most lines of text. It’s a race to write the most correct lines of code.

I’ll wait half an hour for a response if I know I’m getting at least staff-engineer-tier code for every question.

◧◩◪◨⬒
7. ein0p+lx[view] [source] 2024-02-14 08:33:18
>>spacem+Tm
That’s the correct answer. Years ago I worked on inference efficiency on edge hardware at a startup. Time after time I saw that users vastly prefer slower but more accurate and robust systems. Put succinctly: nobody cares how quick a model is if it doesn’t do a good job. Another thing I discovered is that it can be very difficult to convince software engineers of this obvious fact.
◧◩◪◨⬒⬓
8. spacec+9C[view] [source] 2024-02-14 09:18:44
>>ein0p+lx
Having spent time on edge compute projects myself: this.

Also, all the evidence is in this thread. People are clearly unhappy about wasting time on LLMs, and the time wasted was the result of obviously bad output.
