zlacker

1. skepti+(OP)[view] [source] 2024-02-14 02:35:28
>>mfigui+M3
Frankly, OpenAI seems to be losing its luster, and fast.

Plugins were a failure. GPTs are a little better, but I still don't see the product-market fit. GPT-4 is still king, but not by that much anymore. It's not even clear that they're doing great research, because they don't publish.

GPT-5 has to be incredibly good at this point, and I'm not sure that it will be.

2. mfigui+M3[view] [source] 2024-02-14 03:08:18
3. al_bor+Po[view] [source] 2024-02-14 06:56:06
>>skepti+(OP)
I know things keep moving faster and faster, especially in this space, but GPT-4 is less than a year old. Claiming they are losing their luster because they aren't shaking the earth with new models every quarter seems a little ridiculous.

As its popularity has exploded, and ethical questions have become increasingly relevant, it is probably worth taking some time to nail certain aspects down before releasing everything to the public for the sake of being first.

4. optymi+d41[view] [source] 2024-02-14 13:59:41
>>al_bor+Po
I never bought into the ethical questions. It's trained on publicly available data, as far as I understand. What's the most unethical thing it can do?

My experience is limited. I got it to berate me with a jailbreak. I asked it to do so, so the onus is on me to be able to handle the response.

I'm trying to think of unethical things it can do that are not in the realm of "you asked it for that information, just as you would have searched on Google", but I can only think of things like "how to make a bomb", suicide-related instructions, etc., which I would place in the "sharp knife" category. One has to be able to handle it before using it.

Meanwhile, it's been increasingly giving the canned "As an AI language model ..." response even for stuff that's not unethical, just dicey.

5. al_bor+j91[view] [source] 2024-02-14 14:27:53
>>optymi+d41
One recent example in the news was the AI generated p*rn of Taylor Swift. From what I read, the people who made it used Bing, which is based on OpenAI’s tech.
6. loboci+Aa1[view] [source] 2024-02-14 14:35:27
>>al_bor+j91
This is more sensationalism than an ethical issue. Whatever they did, they could have done, and probably done better, using publicly available tools like Stable Diffusion.
7. majora+Gf1[view] [source] 2024-02-14 15:00:41
>>loboci+Aa1
Or just Photoshop. The only thing these tools did was make it easier. I don't think the AI aspect adds anything to this comparison.
8. Anon84+Th1[view] [source] 2024-02-14 15:11:45
>>majora+Gf1
An argument can be made that "more is different." By making it easier to do something, you're increasing the supply, possibly even taking something that used to be a rare edge case and making it a common occurrence, which can pose problems in and of itself.