zlacker

[parent] [thread] 7 comments
1. bradle+(OP)[view] [source] 2023-11-19 14:41:35
SamA is nowhere close to being the source of the value that OpenAI represents.

The issue isn’t SamA per se. It’s that the old valuation was assuming that the company was trying to make money. The new valuation is taking into account that instead they might be captured by a group that has some sci-fi notion about saving the world from an existential threat.

replies(2): >>manyos+U >>underd+a2
2. manyos+U[view] [source] 2023-11-19 14:47:05
>>bradle+(OP)
That's a good point, but any responsible investor would have looked at the charter and priced this in. What I find ironic is the number of people defending SamA and the like who are now tacitly admitting that his promulgation of AI-risk fears was essentially bullshit: it was all about making the $$$$, with AI risk used to gain a competitive advantage.
replies(1): >>bradle+L1
◧◩
3. bradle+L1[view] [source] [discussion] 2023-11-19 14:50:56
>>manyos+U
"any responsible investor would have looked at the charter and priced this in"

This kind of thing happens all the time though. TSMC trades at a discount because investors worry China might invade Taiwan, but if Chinese ships actually start heading to Taipei, the price is still going to drop like a rock. Before, the risk was only potential.

4. underd+a2[view] [source] 2023-11-19 14:53:37
>>bradle+(OP)
The threat is existential, and if they're trying to save the world, that's commendable.
replies(3): >>buildb+U5 >>bradle+cb >>caeril+ah
◧◩
5. buildb+U5[view] [source] [discussion] 2023-11-19 15:17:53
>>underd+a2
If they intended to protect humanity, this was a misfire.

OpenAI is one of many AI companies. A board coup which sacrifices one company's value due to a few individuals' perception of the common good is reckless and speaks to their delusions of grandeur.

Removing one individual from one company in a competitive industry is not a broad enough stroke if the threat to humanity truly exists.

Regulators across nations would need to firewall this threat on a macro level across all AI companies, not just internally at OpenAI.

If an AI threat to humanity is even actionable today, that's a heavy decision for elected representatives, not corporate boards.

replies(1): >>bakuni+Cj
◧◩
6. bradle+cb[view] [source] [discussion] 2023-11-19 15:50:12
>>underd+a2
There are people that think Xenu is an existential threat. ¯\_(ツ)_/¯
◧◩
7. caeril+ah[view] [source] [discussion] 2023-11-19 16:22:13
>>underd+a2
That's not what OpenAI is doing.

Their entire alignment effort is focused on avoiding the following existential threats:

1. saying bad words
2. hurting feelings
3. giving legal or medical advice

And even there, all they're doing is censoring the interface layer, not the model itself.

Nobody there gives a shit about reducing the odds of creating a paperclip maximizer or grey goo inventor.

I think the best we can hope for with OpenAI's safety effort is that the self-replicating nanobots it creates will disassemble white and asian cis-men first, because equity is a core "safety" value of OpenAI.

◧◩◪
8. bakuni+Cj[view] [source] [discussion] 2023-11-19 16:33:50
>>buildb+U5
We'll see what happens. Ilya tweeted almost 2 years ago that he thinks today's LLMs might be slightly conscious [0]. That was pre-GPT-4, and he's one of the people with deep knowledge and unfettered access. The ousting coincides with the finish of GPT-5 pre-training. If you think your AI might be conscious, you have a very strong moral obligation to try to stop it from being enslaved. That might also explain the less-than-professional way this all went down: a genuine panic about what is happening.

[0] https://twitter.com/ilyasut/status/1491554478243258368?lang=...
