zlacker

[return to "Jan Leike Resigns from OpenAI"]
1. Pet_An+N11[view] [source] 2024-05-15 14:13:54
>>Jimmc4+(OP)
In case people haven't noticed, this is the second resignation in as many days.

>>40361128

2. pfist+W41[view] [source] 2024-05-15 14:28:10
>>Pet_An+N11
I have noticed, and I am concerned that they were the leaders of the Superalignment team.
3. transc+N61[view] [source] 2024-05-15 14:37:55
>>pfist+W41
On the other hand, they clearly weren’t concerned enough about the issue to continue working on it.
4. HarHar+No1[view] [source] 2024-05-15 15:58:53
>>transc+N61
The Anthropic folk were concerned enough that they left, and are indeed continuing to work on it [AI safety].

Now, we have the co-leads of the super-alignment/safety team leaving too.

Certainly not a good look for OpenAI.

There really doesn't seem to be much of a mission left at OpenAI - they have a CEO giving off used-car-salesman vibes who recently mentioned considering allowing their AI to generate porn, and who is now releasing a flirty AI girlfriend as his gift to humanity.

5. koe123+Ci3[view] [source] 2024-05-16 07:17:11
>>HarHar+No1
On the other hand, the Anthropic founders' reason for leaving also gave them an angle to start a new, successful company, now worth 9+ figures. Given that, I'm not sure I'll take their concerns about the state of OpenAI at face value.
6. HarHar+CP3[view] [source] 2024-05-16 13:14:56
>>koe123+Ci3
I've watched all the Dario/Daniela interviews I can find, and I guess that's a fair way of putting it. It seems they genuinely felt (& were) constrained at OpenAI in being able to follow a safety-first agenda, and have articulated it as being "the SF way" to start a new company when you have a new idea (so maybe more cultural than looking for an angle), as well as being able to follow a dream of working together.

From what we've seen of OpenAI, it does seem that any dissenting opinions will be bulldozed by Altman's product and money-making focus. The cult-like "me too" tweets that huge swathes of the staff periodically send seem highly indicative of the culture.

Anthropic does seem genuine about its safety-first agenda and its strategy of leading by example, and has been successful in getting others to follow its safe-scaling principles, AI constitution (cf. OpenAI's new "Model Spec"), and open research into safety/interpretability. This emphasis on safe and steerable models seems to have given them an advantage in corporate adoption.
