zlacker

[parent] [thread] 13 comments
1. jojoba+(OP)[view] [source] 2023-11-18 05:50:45
The moment they lobotomized their flagship AI chatbot into a particular set of political positions, the "benefits of all humanity" were out the window.
replies(2): >>lijok+w2 >>emoden+I6
2. lijok+w2[view] [source] 2023-11-18 06:12:11
>>jojoba+(OP)
If they hadn't done that, would they have been able to get to where they are? Goal-oriented teams don't tend to care about something as inconsequential as this.
replies(1): >>Booris+58
3. emoden+I6[view] [source] 2023-11-18 06:56:16
>>jojoba+(OP)
One could quite reasonably dispute the notion that being allowed to generate hate speech or whatever furthers the benefits of all humanity.
replies(2): >>jojoba+L9 >>oska+Qw
4. Booris+58[view] [source] [discussion] 2023-11-18 07:08:18
>>lijok+w2
I don't agree with the "noble lie" hypothesis of current AI. That being said, I'm not sure why you're couching it that way: they got where they are because they spent less time than their competitors trying to inject safety at a time when capabilities didn't make it unsafe.

Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient, and now we see OpenAI can't seem to escape that same poison.

replies(1): >>lmm+2k
5. jojoba+L9[view] [source] [discussion] 2023-11-18 07:23:39
>>emoden+I6
It happily answers what good Obama did during his presidency but refuses to answer about Trump's, for one. It doesn't say "nothing", just gives you boilerplate about being an LLM and not taking political positions. How much of that would be hate speech?
replies(2): >>Arisak+mb >>jakder+el
6. Arisak+mb[view] [source] [discussion] 2023-11-18 07:40:09
>>jojoba+L9
I just asked it, and oddly enough it answered both questions, listing items and adding "It's important to note that opinions on the success and impact of these actions may vary".

I wouldn't say "refuses to answer" for that.

7. lmm+2k[view] [source] [discussion] 2023-11-18 08:59:25
>>Booris+58
> Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3 level model was sentient,

Doubt. When was the last time Google showed they had the ability to execute on anything?

replies(1): >>Booris+qA
8. jakder+el[view] [source] [discussion] 2023-11-18 09:07:55
>>jojoba+L9
>It happily answers what good Obama did

"happily"? wtf?

9. oska+Qw[view] [source] [discussion] 2023-11-18 10:47:14
>>emoden+I6
'Hate speech' is not an objective category, nor can a machine feel hate.
10. Booris+qA[view] [source] [discussion] 2023-11-18 11:18:04
>>lmm+2k
My comment: "Google could execute if not for <insert thing they're doing wrong>"

How is your comment doubting that? Do you have an alternative reason, or do you think they're executing and I mistyped?

replies(1): >>lmm+Io3
11. lmm+Io3[view] [source] [discussion] 2023-11-19 04:29:22
>>Booris+qA
Your comment was "Google could execute if not for <thing extremely specific to this particular field>". Given Google's recent track record I think any kind of specific problem like that is at most a symptom; their dysfunction runs a lot deeper.
replies(1): >>Booris+lH3
12. Booris+lH3[view] [source] [discussion] 2023-11-19 07:34:33
>>lmm+Io3
If you think a power structure that allows people to impose their will in a way that doesn't align with delivering value to your end user is "extremely specific to this particular field", I don't think you've reached the table stakes for examining Google's track record.

There's nothing "specific" about being crippled by people pushing an agenda; you'd think the fact that this post is about Sam Altman of OpenAI being fired would make that clear enough.

replies(1): >>lmm+7Y5
13. lmm+7Y5[view] [source] [discussion] 2023-11-19 22:10:51
>>Booris+lH3
If you were trying to express "a power structure that allows people to impose their will in a way that doesn't align with delivering value to your end user", writing "tearing themselves asunder with people convinced a GPT-3 level model was sentient" was a very poor way to communicate that.
replies(1): >>Booris+TTa
14. Booris+TTa[view] [source] [discussion] 2023-11-21 01:50:52
>>lmm+7Y5
It's a great way since I'm writing for people who have context. Not everything should be written for the lowest common denominator, and if you lack context you can ask for it instead of going "Doubt. <insert comment making it clear you should have just asked for context>"