zlacker

[return to "OpenAI board in discussions with Sam Altman to return as CEO"]
1. crop_r+w3 2023-11-18 23:08:00
>>medler+(OP)
The board seems truly incompetent here, and looking at the member list that isn't very surprising. A competent board would have sought legal and professional advice before taking a drastic step like this. Instead the board treated it like a boxing match and tried to land a knockout punch before the market closed, with blunt language to match. This might be the most incompetent board for an organisation of this size.
2. adharm+pf 2023-11-19 00:05:52
>>crop_r+w3
It was complete amateur hour for the board.

But that aside, how did so many clueless folks, who understand neither the technology nor the legalese, nor have the intelligence/acumen to foresee the immediate impact of their actions, end up on the board of one of the most important tech companies?

3. leoh+Bx 2023-11-19 02:06:29
>>adharm+pf
Not many, and even fewer if you limit it to folks who have a good grasp of themselves, their psychology, their emotions and how those can mislead them, and their heart.

IME, most folks at Anthropic, OpenAI, or wherever who are freaking out about these things never defined the problem well; they were typically engaging with highly theoretical models rather than with the real capabilities of a well-defined, accomplished (or clearly accomplishable) system. It was too triggering for me to consider roles there in the past, given that these were typically the folks I knew working there.

Sam may have added a lot of groundedness, but idk ofc bc I wasn’t there.

4. jprete+Ty 2023-11-19 02:14:47
>>leoh+Bx
Is this a way of saying that AI safety is unnecessary?
5. margal+MK 2023-11-19 03:30:28
>>jprete+Ty
It's a way of saying that what has historically been considered "studying AI safety" in fact bears little relation to real-life AIs and to what may or may not make them more or less "safe".