zlacker

1. underd+(OP) 2023-11-19 14:53:37
The threat is existential, and if they're trying to save the world, that's commendable.
replies(3): >>buildb+K3 >>bradle+29 >>caeril+0f
2. buildb+K3 2023-11-19 15:17:53
>>underd+(OP)
If they intended to protect humanity, this was a misfire.

OpenAI is one of many AI companies. A board coup which sacrifices one company's value due to a few individuals' perception of the common good is reckless and speaks to their delusions of grandeur.

Removing one individual from one company in a competitive industry is not a broad enough stroke if the threat to humanity truly exists.

Regulators across nations would need to firewall this threat on a macro level across all AI companies, not just internally at OpenAI.

If an AI threat to humanity is even actionable today, that's a heavy decision for elected representatives, not corporate boards.

replies(1): >>bakuni+sh
3. bradle+29 2023-11-19 15:50:12
>>underd+(OP)
There are people that think Xenu is an existential threat. ¯\_(ツ)_/¯
4. caeril+0f 2023-11-19 16:22:13
>>underd+(OP)
That's not what OpenAI is doing.

Their entire alignment effort is focused on avoiding the following existential threats:

1. saying bad words
2. hurting feelings
3. giving legal or medical advice

And even there, all they're doing is censoring the interface layer, not the model itself.

Nobody there gives a shit about reducing the odds of creating a paperclip maximizer or grey goo inventor.

I think the best we can hope for with OpenAI's safety effort is that the self-replicating nanobots it creates will disassemble white and asian cis-men first, because equity is a core "safety" value of OpenAI.

5. bakuni+sh 2023-11-19 16:33:50
>>buildb+K3
We'll see what happens. Ilya tweeted almost 2 years ago that he thinks today's LLMs might be slightly conscious [0]. That was pre-GPT-4, and he's one of the people with deep knowledge and unfettered access. The ousting coincides with the completion of GPT-5 pre-training. If you think your AI might be conscious, it becomes a very high moral obligation to try to stop it from being enslaved. That might also explain the less-than-professional way this all went down: a serious panic about what is happening.

[0] https://twitter.com/ilyasut/status/1491554478243258368?lang=...
