zlacker

[return to "Jan Leike's OpenAI departure statement"]
1. ldayle+uf[view] [source] 2024-05-17 17:39:20
>>jnnnth+(OP)
Thank you Jan for your work and your courage to act and speak candidly about these important topics.

Open question to HN: in your knowledge/experience, which AGI-building companies or projects have a culture most closely aligned with keeping safety, security, privacy, etc. as high a priority as “winning the race” in this new frontier land-grab? I’d love to find and support those teams over the ones that spend more time focused on getting investment and market share.

2. Atotal+xh[view] [source] 2024-05-17 17:53:37
>>ldayle+uf
Anthropic would probably be the closest to this.

AFAIK they are still for-profit, but they split from OpenAI because they disagreed with its lack of a safety culture.

This is also reflected in their LLMs, which are less capable due to their safety limitations.

3. static+ql1[view] [source] 2024-05-18 05:12:38
>>Atotal+xh
Claude Opus doesn't appear to have more safety limitations than ChatGPT.

The older Claude 2.1, on the other hand, was so ridiculously incapable of functioning due to its safety-first design that I'm guessing it inspired the Goody-2 parody AI. https://www.goody2.ai/
