zlacker

[return to "OpenAI departures: Why can’t former employees talk?"]
1. thorum+Bu 2024-05-17 23:10:57
>>fnbr+(OP)
Extra respect is due to Jan Leike, then:

https://x.com/janleike/status/1791498174659715494

2. a_wild+Xv 2024-05-17 23:24:41
>>thorum+Bu
I think superalignment is absurd, and model "safety" is the modern AI company's "think of the children" pearl-clutching pretext to justify digging moats. All this after sucking up everyone's copyrighted material as fair use, then not releasing the result, and profiting off it.

All due respect to Jan here, though. He's being (perhaps dangerously) honest, genuinely believes in AI safety, and is an actual research expert, unlike me.

3. xpe+kQ 2024-05-18 04:02:13
>>a_wild+Xv
> I think superalignment is absurd, and model "safety" is the modern AI company's "think of the children" pearl-clutching pretext to justify digging moats. All this after sucking up everyone's copyrighted material as fair use, then not releasing the result, and profiting off it.

How can I be confident you aren't committing the fallacy of collecting a bunch of events and treating that collection as if it were a cohesive explanation? No offense intended, but the comment above has many of the qualities of a classic rant.

If I'm wrong, perhaps you could elaborate? If I'm not wrong, maybe you could reconsider?

Don't forget that alignment research predates OpenAI. It would be a stretch to claim that the original AI safety researchers were operating under the pretexts you describe -- I think it is fair to say they got involved out of genuine concern, not because it was trendy or self-serving.

Some of those researchers and people they influenced ended up at OpenAI. So it would be a mistake or at least an oversimplification to claim that AI safety is some kind of pretext at OpenAI. Could it be a pretext for some people in the organization, to some degree? Sure, it could. But is it a significant effect? One that fits your complex narrative, above? I find that unlikely.

Making sense of an organization's intentions requires a lot of analysis and care, because many actors are involved, each with a different degree of influence.

There are simpler, more likely explanations, such as: AI safety wasn't a profit center, and over time other departments at OpenAI gained more staff, more influence, and so on. That is a problem, for sure, but it requires no "pearl-clutching pretext" to explain.
