zlacker

[parent] [thread] 8 comments
1. manyos+(OP)[view] [source] 2023-11-19 14:31:39
SamA is nowhere even close to central to the value that OpenAI represents. He definitely accounts for less than half a billion of that value, and likely much less. What makes OpenAI so transformative is the technology it produces, and SamA is not an engineer who built that technology. If the people who made it were all to leave, it would reduce the value of the company by a large amount, but the technology would remain, and it is not easy to duplicate: GPU cycles are scarce, the training data is now very hard to acquire, and plenty of other well-funded companies are chasing it, the likes of Google, Meta, and Anthropic. That doesn't even begin to mention the open source models that are also competing.

SamA could try to start his own new copy of OpenAI, and I have no doubt he would raise a lot of money, but if that new company just tried to reproduce what OpenAI has already done, it would not be worth very much. By the time it got there, OpenAI and its competitors would have already moved on to bigger and better things.

Enough with the hero worship for SamA and all the other salesmen.

replies(1): >>bradle+l1
2. bradle+l1[view] [source] 2023-11-19 14:41:35
>>manyos+(OP)
> SamA is nowhere even close to central to the value that OpenAI represents.

The issue isn’t SamA per se. It’s that the old valuation assumed the company was trying to make money. The new valuation takes into account that it might instead be captured by a group with some sci-fi notion about saving the world from an existential threat.

replies(2): >>manyos+f2 >>underd+v3
3. manyos+f2[view] [source] [discussion] 2023-11-19 14:47:05
>>bradle+l1
That's a good point, but any responsible investor would have looked at the charter and priced this in. What I find ironic is the number of people defending SamA and the like who are now tacitly admitting that his promulgation of AI risk fears was essentially bullshit and it was all about making the $$$$ and using AI risk to gain competitive advantage.
replies(1): >>bradle+63
4. bradle+63[view] [source] [discussion] 2023-11-19 14:50:56
>>manyos+f2
> any responsible investor would have looked at the charter and priced this in

This kind of thing happens all the time, though. TSMC trades at a discount because investors worry China might invade Taiwan, but if Chinese ships actually start heading for Taipei, the price will still drop like a rock. Before that, it was only a potential risk.

5. underd+v3[view] [source] [discussion] 2023-11-19 14:53:37
>>bradle+l1
The threat is existential, and if they're trying to save the world, that's commendable.
replies(3): >>buildb+f7 >>bradle+xc >>caeril+vi
6. buildb+f7[view] [source] [discussion] 2023-11-19 15:17:53
>>underd+v3
If they intended to protect humanity, this was a misfire.

OpenAI is one of many AI companies. A board coup which sacrifices one company's value due to a few individuals' perception of the common good is reckless and speaks to their delusions of grandeur.

Removing one individual from one company in a competitive industry is not a broad enough stroke if the threat to humanity truly exists.

Regulators across nations would need to firewall this threat on a macro level across all AI companies, not just internally at OpenAI.

And that's assuming an AI threat to humanity is even actionable today. Either way, it's a heavy decision for elected representatives, not corporate boards.

replies(1): >>bakuni+Xk
7. bradle+xc[view] [source] [discussion] 2023-11-19 15:50:12
>>underd+v3
There are people that think Xenu is an existential threat. ¯\_(ツ)_/¯
8. caeril+vi[view] [source] [discussion] 2023-11-19 16:22:13
>>underd+v3
That's not what OpenAI is doing.

Their entire alignment effort is focused on avoiding the following existential threats:

1. saying bad words
2. hurting feelings
3. giving legal or medical advice

And even there, all they're doing is censoring the interface layer, not the model itself.

Nobody there gives a shit about reducing the odds of creating a paperclip maximizer or grey goo inventor.

I think the best we can hope for with OpenAI's safety effort is that the self-replicating nanobots it creates will disassemble white and Asian cis men first, because equity is a core "safety" value of OpenAI.

9. bakuni+Xk[view] [source] [discussion] 2023-11-19 16:33:50
>>buildb+f7
We'll see what happens. Ilya tweeted almost two years ago that he thinks today's LLMs might be slightly conscious [0]. That was pre-GPT-4, and he's one of the people with deep knowledge and unfettered access. The ousting coincides with the completion of GPT-5's pre-training. If you think your AI might be conscious, it becomes a very high moral obligation to try to stop it from being enslaved. That might also explain the less-than-professional way this all went down: a serious panic over what is happening.

[0] https://twitter.com/ilyasut/status/1491554478243258368?lang=...
