zlacker

1. skepti+(OP)[view] [source] 2024-02-14 02:35:28
>>mfigui+M3
Frankly, OpenAI seems to be losing its luster, and fast.

Plugins were a failure. GPTs are a little better, but I still don't see the product-market fit. GPT-4 is still king, but not by that much anymore. It's not even clear that they're doing great research, because they don't publish.

GPT-5 has to be incredibly good at this point, and I'm not sure that it will be.

2. mfigui+M3[view] [source] 2024-02-14 03:08:18
3. sho+Wi[view] [source] 2024-02-14 05:39:03
>>skepti+(OP)
> GPT-4 is still king, but not by that much any more

Idk, I just tried Gemini Ultra and it's so much worse than GPT-4 that I'm actually quite shocked. Asking it any kind of coding question turns into a frustrating, honestly bizarre waste of time: it hallucinates a whole new language syntax every time, then asks whether you want to continue with non-working (in fact non-existent) option A or the equally non-existent option B. After an hour of trying to get it to output anything that's even in the requested language, you realise it's completely useless.

I'm actually pretty astonished at how far behind Google is, and that they released such worthless junk at all. And they have the chutzpah to ask people to pay for it!

Of course I'm looking forward to GPT-5, but even if it's only a minor step up, they're still way ahead.

4. pb7+mj[view] [source] 2024-02-14 05:41:52
>>sho+Wi
Do you have example links?
5. sho+Gj[view] [source] 2024-02-14 05:47:20
>>pb7+mj
Here was one of them: https://gemini.google.com/share/fde31202b221?hl=en

edit: as pointed out, this was indeed a pretty esoteric example. But the rest of my attempts were hardly better, if they had a response at all.

6. peddli+yk[view] [source] 2024-02-14 05:58:56
>>sho+Gj
That’s an awfully specific and esoteric question. Would you expect gpt4 to be significantly better at that level of depth? That’s not been my experience.
7. sho+pl[view] [source] 2024-02-14 06:12:47
>>peddli+yk
OK, I have to admit that one was a little odd; I was beginning to give up and was trying new angles. I can't really share my other sessions. But I was trying to get a handle on the language and thought it would be an easily understood situation (multiple-token auth). I would have at least expected the response to be slightly valid.

The language in question was only open-sourced after GPT-4's training cutoff, so I couldn't compare. That's actually why I tried it in the first place. And yes, I do expect it to be better - GPT-4 isn't perfect, but I don't recall it ever hallucinating quite that hard. In fact, its answer was basically that it didn't know.

And when I asked it questions about other, much less esoteric code, like "how would you refactor this to be more idiomatic?", I'd get either "I couldn't complete your request. Rephrase your prompt and try again." or "Sorry, I can't help with that because there's too much data. Try again with less data." GPT-4 was helpful in both cases.
