
[return to "OpenAI's board has fired Sam Altman"]
1. convex+C01 2023-11-18 01:11:18
>>davidb+(OP)
Kara Swisher: a “misalignment” of the profit versus nonprofit adherents at the company https://twitter.com/karaswisher/status/1725678074333635028

She also says that there will be many more top employees leaving.

2. convex+ch1 2023-11-18 03:08:44
>>convex+C01
Sutskever: "You can call it (a coup), and I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity." Scoop: theinformation.com

https://twitter.com/GaryMarcus/status/1725707548106580255

3. jojoba+vD1 2023-11-18 05:50:45
>>convex+ch1
The moment they lobotomized their flagship AI chatbot into a particular set of political positions, the "benefits of all humanity" were out the window.

4. lijok+1G1 2023-11-18 06:12:11
>>jojoba+vD1
If they hadn’t done that, would they have been able to get to where they are? Goal-oriented teams don’t tend to care about something as inconsequential as this.

5. Booris+AL1 2023-11-18 07:08:18
>>lijok+1G1
I don't agree with the "noble lie" hypothesis of current AI. That being said, I'm not sure why you're couching it that way: they got where they are because they spent less time than their competitors trying to inject safety at a stage when capabilities didn't yet make it unsafe.

Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3-level model was sentient, and now we see OpenAI can't seem to escape that same poison.

6. lmm+xX1 2023-11-18 08:59:25
>>Booris+AL1
> Google could have given us GPT-4 if they weren't busy tearing themselves asunder with people convinced a GPT-3-level model was sentient,

Doubt. When was the last time Google showed they had the ability to execute on anything?

7. Booris+Vd2 2023-11-18 11:18:04
>>lmm+xX1
My comment: "Google could execute if not for <insert thing they're doing wrong>"

How is your comment doubting that? Do you have an alternative reason, or do you think they're executing and I mistyped?

8. lmm+d25 2023-11-19 04:29:22
>>Booris+Vd2
Your comment was "Google could execute if not for <thing extremely specific to this particular field>". Given Google's recent track record, I think any kind of specific problem like that is at most a symptom; their dysfunction runs a lot deeper.

9. Booris+Qk5 2023-11-19 07:34:33
>>lmm+d25
If you think a power structure that allows people to impose their will in a way that doesn't align with delivering value to your end user is "extremely specific to this particular field", I don't think you've met the table stakes for examining Google's track record.

There's nothing "specific" about being crippled by people pushing an agenda; you'd think the fact that this post was about Sam Altman of OpenAI being fired would make that clear enough.

10. lmm+CB7 2023-11-19 22:10:51
>>Booris+Qk5
If you were trying to express "a power structure that allows people to impose their will in a way that doesn't align with delivering value to your end user", writing "tearing themselves asunder with people convinced a GPT-3-level model was sentient" was a very poor way to communicate that.