zlacker

[parent] [thread] 6 comments
1. transc+(OP)[view] [source] 2024-05-15 14:37:55
On the other hand, they clearly weren’t concerned enough about the issue to continue working on it.
replies(4): >>dontup+k2 >>llamai+U3 >>scarmi+V3 >>HarHar+0i
2. dontup+k2[view] [source] 2024-05-15 14:48:36
>>transc+(OP)
One could argue that at this point OpenAI is being Embraced and Extended by Microsoft and is unlikely to have much autonomy or groundbreaking impact one way or another.
3. llamai+U3[view] [source] 2024-05-15 14:55:24
>>transc+(OP)
Ah yes, a scientist refusing to work on the hydrogen bomb couldn't have been all that concerned about it.
4. scarmi+V3[view] [source] 2024-05-15 14:55:26
>>transc+(OP)
If your ostensible purpose is being sidelined by decision makers, trying to fight back is often a good option, but sometimes you fail. Admitting failure and focusing on other approaches is the right choice at that point.
5. HarHar+0i[view] [source] 2024-05-15 15:58:53
>>transc+(OP)
The Anthropic folk were concerned enough that they left, and are indeed continuing to work on it [AI safety].

Now we have the co-leads of the superalignment/safety team leaving too.

Certainly not a good look for OpenAI.

There really doesn't seem to be much of a mission left at OpenAI - they have a CEO giving off used-car-salesman vibes who recently mentioned considering allowing their AI to generate porn, and who is now releasing a flirty AI girlfriend as his gift to humanity.

replies(1): >>koe123+Pb2
6. koe123+Pb2[view] [source] [discussion] 2024-05-16 07:17:11
>>HarHar+0i
On the other hand, the Anthropic founders' reason for leaving also gave them an angle to start a new, successful company, now worth 9+ figures. Given that, I'm not sure I'll take their concerns about the state of OpenAI at face value.
replies(1): >>HarHar+PI2
7. HarHar+PI2[view] [source] [discussion] 2024-05-16 13:14:56
>>koe123+Pb2
I've watched all the Dario/Daniela interviews I can find, and I guess that's a fair way of putting it. It seems they genuinely felt (& were) constrained at OpenAI in following a safety-first agenda, and they've articulated starting a new company when you have a new idea as "the SF way" (so maybe more cultural than looking for an angle), as well as a chance to follow a dream of working together.

From what we've seen of OpenAI, it does seem that any dissenting opinions will be bulldozed by Altman's product and money-making focus. The cult-like "me too" tweets that huge swathes of the staff periodically send seem highly indicative of the culture.

Anthropic does seem genuine about its safety-first agenda and its strategy of leading by example, and it has been successful in getting others to follow its safe-scaling principles, AI constitution (cf. OpenAI's new "Model Spec"), and open research into safety/interpretability. This emphasis on safe and steerable models seems to have given them an advantage in corporate adoption.
