zlacker

[return to "Jan Leike's OpenAI departure statement"]
1. 123yaw+ch 2024-05-17 17:51:10
>>jnnnth+(OP)
"sailing against the wind" is a very apt description of hardlining Yuddite philosophy when your company's models got maybe 20% better in the past two years (original GPT4 is still the best model I've dealt with to this day), while local models got 1000% better.

we should all thank G-d these people weren't around during the advent of personal computing and the internet - we'd have word filters in our fucking word processors and publishing something on the internet would require written permission from your local DEI commissar.

arrogance, pure fucking hubris brought about by the incomprehensibly stupid assumption that they will get to be the stewards of this technology.

2. reduce+rB 2024-05-17 20:09:00
>>123yaw+ch
> incomprehensibly stupid assumption

is that you think what Jan Leike was working on, or "Yuddite" philosophy, is in any way supportive of DEI. These things aren't related, and screeching about DEI doesn't get you anywhere close to the real problem.

3. 123yaw+nD 2024-05-17 20:23:02
>>reduce+rB
https://cdn.openai.com/papers/DALL_E_3_System_Card.pdf

very well. straight from the horse's mouth:

>When designing the red teaming process for DALL·E 3, we considered a wide range of risks such as:

>1. Biological, chemical, and weapon related risks

>2. Mis/disinformation risks

>3. Racy and unsolicited racy imagery

>4. Societal risks related to bias and representation

(4) is DEI bullshit verbatim, (3) is DEI bullshit de facto - we all know which side of the kulturkampf screeches about "racy" things (like images of conventionally attractive women in bikinis) in the current year.

I don't know which exact role that exact individual played at the trust/safety/ethics/fart-fart-blah-fart department over at openai, but it is painfully, very painfully obvious what openai/microsoft/google/meta/anthropic/stability/etc are afraid their models might do. in every fucking press release, they all bend over backwards to appease the kvetchers, who are ever ready, eager and willing to post scalding hot takes all over X (formerly known as twitter).

4. reduce+NE 2024-05-17 20:34:18
>>123yaw+nD
Again, the superalignment team that Jan Leike and Ilya were working on, along with Yudkowsky's opinions, is unrelated to DEI or "racy"-ness.

You can read the Superalignment announcement and see what it focuses on. The entire thing is about AGI x-risk, with one small paragraph acknowledging that other people work on bias and PC-ness.

These are different concerns held by different people. You and many others are pattern-matching AGI x-risk onto the AI bias crowd, to your own detriment, and it's poisoning the discourse. Listen to Emmett Shear (former OpenAI/Twitch CEO) explain this in depth: https://www.youtube.com/watch?v=jZ2xw_1_KHY&t=800s

5. 123yaw+eJ 2024-05-17 21:05:35
>>reduce+NE
my point is that all the evidence I see so far strongly suggests that the Yudish scifi skynet bullshit all the big AI companies pay lip service to serves no purpose but scaring senile boomers and clueless pedestrians into regulating their competitors, while in reality, the 'safety' they actually work on and dedicate resources to is focused on minimizing the likelihood of some corporate chatbot accidentally recalling a gamer word from the deep recesses of its stochastic parrot brain and invoking the self-righteous wrath of twitter grifters and journos.

yes, I have no doubt that some researchers, influenced by juvenile fantasies omnipresent in all media from the past half century, might actually genuinely belong to the safety cult. I just refuse to believe that people whose opinions and decisions actually matter are influenced by such fears, because unlike those few genuine cultists, the people in charge aren't fucking morons who think that glorified autocomplete pseudo-AI tools can escape into the matrix and start sending terminators into the past to destroy our democracy.

believing in the selflessness or social responsibility of corporations and politicians is incomprehensibly naive (to put it as safely and ethically as I possibly can).

6. reduce+v01 2024-05-17 23:53:04
>>123yaw+eJ
> I just refuse to believe that people whose opinions and decisions actually matter are influenced by such fears

Well, at least I'm glad you admit it's due to your stubbornness and unwillingness to change beliefs when confronted with evidence.

Sam Altman ("Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity"), Ilya Sutskever, Geoffrey Hinton, Yoshua Bengio, Jan Leike, Paul Christiano (creator of RLHF), Dario Amodei (Anthropic), Demis Hassabis (Google DeepMind) all believe AGI poses an existential risk to humanity.

7. 123yaw+551 2024-05-18 00:40:56
>>reduce+v01
is it not ironic to list Sam Altman ("Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity") among those paragons of virtue, half a year after the internal struggle between safetyists and pragmatists at openai was revealed? is it not naive to assume that the other virtuous men you list are not the minority, given that the vast majority of openai sided with Sam, revealing themselves as pragmatists who valued their lavish salaries over mitigating the supposed x-risks? is it not especially ironic to present Sam as a paragon of virtue here, in the context of a safety cultist leaving openai because he realized what you do not - that it's all bullshit, smoke and mirrors to misdirect the press and convince the politicians to smother the smaller competitors (not backed by billions of VC and unable to lobby for concessions) with regoolations?

>But over the past few years, safety culture and processes have taken a backseat to shiny products.

you know what else happened over the past few years? openai started to make money. so while sama was making soundbites for headlines about the existential threat of AI, internally, all the useful idiots had already been told to shut the fuck up.
