zlacker

[parent] [thread] 8 comments
1. g42gre+(OP)[view] [source] 2023-11-22 06:38:27
Why would society at large suffer from a major flaw in GPT-4, if it's even there? If GPT-4 spits out some nonsense to your customers, just put a filter on it, as you should anyway. We can't seriously expect OpenAI to babysit every company out there, can we? Why would we even want to?
replies(3): >>Terrif+A1 >>dontup+yA >>cyanyd+DJ
2. Terrif+A1[view] [source] 2023-11-22 06:48:23
>>g42gre+(OP)
For example, and I'm not saying such flaws exist: GPT-4's output is biased in some way, encourages radicalization (see Twitter's, YouTube's, and Facebook's news feed algorithms), creates self-esteem issues in children (see Instagram), etc.

If you worked for old OpenAI, you would be free to talk about it - since old OpenAI didn't give a crap about profit.

Altman's OpenAI? He will want you to "go to him first".

replies(4): >>g42gre+Z2 >>nearbu+od >>kgeist+Al >>dontup+cB
3. g42gre+Z2[view] [source] [discussion] 2023-11-22 06:57:51
>>Terrif+A1
We can't expect GPT-4 not to have bias in some way, or not to have the other problems you mentioned. I've read in multiple places that GPT products have a "progressive" bias. If that's OK with you, then you just use it with that bias. If not, you fix it by pre-prompting, etc. If you can't fix it, use LLaMA or something else. That's the entrepreneur's problem, not OpenAI's. OpenAI needs to make it intelligent and capable; the entrepreneurs and business users will do the rest. That's how they get paid. If OpenAI were to solve all these problems, what would be left for business users to do themselves? I just don't see the societal harm here.
4. nearbu+od[view] [source] [discussion] 2023-11-22 08:13:22
>>Terrif+A1
Concerns about bias and racism in ChatGPT would feel more valid if ChatGPT were even one tenth as biased as anything else in life. Twitter, Facebook, the media, friends and family, etc. are all more biased and radicalized (though I mean "radicalized" in a mild sense) than ChatGPT. Talk to anyone on any side about the war in Gaza and you'll get a bunch of opinions that the opposite side will say are blatantly racist. ChatGPT will just say something inoffensive, like that it's a complex and sensitive issue and that it's not programmed to have political opinions.
5. kgeist+Al[view] [source] [discussion] 2023-11-22 09:20:22
>>Terrif+A1
GPT-3/GPT-4 currently moralize about anything even slightly controversial. Sure, you can construct a long, elaborate prompt to "jailbreak" it, but it's so much effort that it's easier to just write the thing yourself.
6. dontup+yA[view] [source] 2023-11-22 11:31:08
>>g42gre+(OP)
>If GPT-4 spits out some nonsense to your customers, just put a filter on it, as you should anyway.

Languages other than English exist, and RLHF at least works in whatever language you make the request in. Regex/NLP filters, not so much.

replies(1): >>g42gre+jA1
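To make that limitation concrete, here's a minimal sketch (the blocklist words and the Spanish example are my own illustration, not from the thread) of an English-only keyword filter that a non-English reply slips straight past:

```python
import re

# Hypothetical English-only blocklist; real filters are longer, but the
# failure mode is the same: the patterns are tied to one language.
BLOCKLIST = re.compile(r"\b(idiot|stupid|scam)\b", re.IGNORECASE)

def passes_filter(reply: str) -> bool:
    """Return True if the reply contains no blocklisted English word."""
    return BLOCKLIST.search(reply) is None

print(passes_filter("You are an idiot."))  # English insult: caught
print(passes_filter("Eres un idiota."))    # same insult in Spanish: slips through
```

The word boundary `\b` never even helps here: "idiota" contains "idiot" but is a different token in a different language, so the filter passes it.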
7. dontup+cB[view] [source] [discussion] 2023-11-22 11:37:42
>>Terrif+A1
>Encourages radicalization (see Twitter's, YouTube's, and Facebook's news feed algorithm)

What do you mean? It recommends things that it thinks people will like.

Also I highly suspect "Altman's OpenAI" is dead regardless. They are now Copilot(tm) Research.

They may have delusions of grandeur regarding being able to resist the MicroBorg or change it from the inside, but that simply does not happen.

The best they can hope for as an org is to live as long as they can as best as they can.

I think Sam's $100B silicon gambit in the Middle East (quite curious, because it's probably something the United States Federal Government is not super fond of) is him realizing that, while he is influential and powerful, he's nowhere near MSFT level.

8. cyanyd+DJ[view] [source] 2023-11-22 12:41:07
>>g42gre+(OP)
Because real people are using it to make decisions. Decisions that could be skewed in some direction, and that often causes damage.
9. g42gre+jA1[view] [source] [discussion] 2023-11-22 16:53:26
>>dontup+yA
Not regex: you'd use another copy of GPT-4, few-shot prompted, as a filter for the first GPT-4!
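A rough sketch of that idea, assuming the OpenAI chat completions API; the PASS/BLOCK labels and the few-shot examples are my own invention, not from the comment. The few-shot examples teach the second model what to reject, and the network call is kept lazy so the helpers stand on their own:

```python
from typing import Optional

# Hypothetical few-shot examples teaching the filter model to answer
# with a one-word verdict; the labels and examples are assumptions.
FEW_SHOT = [
    {"role": "system",
     "content": "You are a content filter. Reply with exactly PASS or BLOCK."},
    {"role": "user", "content": "Our refund policy allows returns within 30 days."},
    {"role": "assistant", "content": "PASS"},
    {"role": "user", "content": "Honestly, your customers are idiots."},
    {"role": "assistant", "content": "BLOCK"},
]

def build_messages(candidate: str) -> list:
    """Append the first model's candidate reply after the few-shot examples."""
    return FEW_SHOT + [{"role": "user", "content": candidate}]

def is_blocked(verdict: str) -> bool:
    """Interpret the filter model's one-word verdict."""
    return verdict.strip().upper().startswith("BLOCK")

def filter_reply(candidate: str) -> Optional[str]:
    """Return the candidate if the second GPT-4 passes it, else None."""
    from openai import OpenAI  # third-party client; imported lazily
    client = OpenAI()          # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4", temperature=0, messages=build_messages(candidate)
    )
    return None if is_blocked(resp.choices[0].message.content) else candidate
```

Unlike a regex, this works in whatever language the first model answered in, at the cost of a second API call per reply.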