zlacker

1. ilikeh+(OP)[view] [source] 2023-11-22 06:15:42
OAI looks stronger than ever. The untrustworthy bits that caused all this instability over the last 5 days have been ditched into the sea. Care to expand on your claim?
replies(2): >>neta13+W >>6gvONx+H1
2. neta13+W[view] [source] 2023-11-22 06:21:38
>>ilikeh+(OP)
Please explain your claim as well. I don't see how this company looks stronger than ever; it looks more like a clown company.
replies(3): >>TapWat+F1 >>ilikeh+p2 >>GreedC+Z7
3. TapWat+F1[view] [source] [discussion] 2023-11-22 06:26:33
>>neta13+W
They got rid of the clowns though. They went from a board of lightweights and insiders to what, at least initially, looks like a strong three.
4. 6gvONx+H1[view] [source] 2023-11-22 06:26:46
>>ilikeh+(OP)
> The untrustworthy bits that caused all this instability over the last 5 days have been ditched into the sea

This whole thing started with Altman pushing a safety oriented non-profit into a tense contradiction (edit: I mean the 2019-2022 gpt3/chatgpt for-profit stuff that led to all the Anthropic people leaving). The most recent timeline was

- Altman tries to push out another board member

- That board member escalates by pushing Altman out (and Brockman off the board)

- Altman's side escalates by saying they'll nuke the company

Altman's side won, but how can we say that his side didn't cause any of this instability?

replies(2): >>ilikeh+K2 >>WendyT+M3
5. ilikeh+p2[view] [source] [discussion] 2023-11-22 06:31:39
>>neta13+W
I may have been overly eager in my comment, because the big downside of the new board is that none of the founders are on it. I hope the current membership sees reason and fixes this issue.

But I said this because: They've retained the entire company, reinstated its founder as CEO, and replaced an activist clown board with a professional, experienced, and possibly unified one. It still remains to be seen how the board membership and overall org structure change, but I have much more trust in the current 3 members steering OpenAI toward long-term success.

replies(1): >>MVisse+67
6. ilikeh+K2[view] [source] [discussion] 2023-11-22 06:33:36
>>6gvONx+H1
> Altman tries to push out another board member

That event wasn't some unprovoked start of this history.

> That board member escalates by pushing Altman out (and Brockman off the board)

and the entire company retaliated. Then this board member tried to sell the company to a competitor, who refused. Meanwhile the board went through two interim CEOs who refused to play along with the scheme, and one of the people who voted to fire the CEO publicly regretted it within 24 hours. That's a clown car of a board. It reflects the quality of most non-profit boards, but not of organizations that actually execute well.

replies(1): >>emptys+V7
7. WendyT+M3[view] [source] [discussion] 2023-11-22 06:39:55
>>6gvONx+H1
By recognizing that it didn't "start" with Altman trying to push out another board member: it started when that board member published a paper trashing the company she's on the board of, without first speaking to the CEO of that company, or trying in any other way to effect change.
replies(2): >>6gvONx+G5 >>croes+M8
8. 6gvONx+G5[view] [source] [discussion] 2023-11-22 06:52:16
>>WendyT+M3
I edited my comment to clarify what I meant. The start was him pushing to move fast and break things in the classic YC kind of way. And it's BS to say that she didn't speak to the CEO or try to effect change first. The safety camp inside OpenAI has been unsuccessfully trying to push him to slow down for years.

See this article for all that context (>>38341399), because it sure didn't start with the paper you referred to, either.

replies(1): >>WendyT+d7
9. MVisse+67[view] [source] [discussion] 2023-11-22 07:00:54
>>ilikeh+p2
If by “long-term success” you mean a capitalistic lapdog of Microsoft, I’ll agree.

It seems that the safety team within OpenAI lost. My biggest fear with this whole AI thing is hostile takeover, and OpenAI was best positioned to at least make an effort to prevent that. Now, I’m not so sure anymore.

10. WendyT+d7[view] [source] [discussion] 2023-11-22 07:01:47
>>6gvONx+G5
Your "most recent" timeline is still wrong, and while yes the entire history of OpenAI did not begin with the paper I'm referencing, it is what started this specific fracas, the one where the board voted to oust Sam Altman.

It was a classic antisocial academic move; all she needed to do was talk to Altman, both before and after writing the paper. It's incredibly easy to do that, and her not doing it is what began the insanity.

She's gone now, and Altman remains, substantially because she didn't know how to pick up a phone and interact with another human being. Who knows, she might have even been successful at her stated goal, of protecting AI, had she done even the most basic amount of problem solving first. She should not have been on this board, and I hope she's learned literally anything from this about interacting with people, though frankly I doubt it.

replies(1): >>6gvONx+l8
11. emptys+V7[view] [source] [discussion] 2023-11-22 07:05:56
>>ilikeh+K2
Something that's been fairly consistent here on HN throughout the debacle has been an almost fanatical defense of the board's actions as justified.

The board was incompetent. It will go down in the history books as one of the biggest blunders of a board in history.

If you want to take drastic action, you consult with your biggest partner keeping the lights on before you do so. Helen Toner and Tasha McCauley had no business being on this board. Even if you had safety concerns in mind, you don't bypass everyone else with a stake in the future of your business because you're feeling petulant.

12. GreedC+Z7[view] [source] [discussion] 2023-11-22 07:07:14
>>neta13+W
It was a clown board running an awesome company.

They fixed the glitch.

13. 6gvONx+l8[view] [source] [discussion] 2023-11-22 07:09:53
>>WendyT+d7
Honestly, I just don't believe that she didn't talk to Altman about her concerns. I'd believe that she didn't say "I'm publishing a paper about it now," but I can't believe she didn't talk to him about her concerns during the last 4+ years that this has been a core tension at the company.
replies(1): >>WendyT+e9
14. croes+M8[view] [source] [discussion] 2023-11-22 07:12:16
>>WendyT+M3
>trashing the company

So pointing out risks is trashing the company?

15. WendyT+e9[view] [source] [discussion] 2023-11-22 07:14:48
>>6gvONx+l8
That's what I mean; she should have discussed the paper and its contents specifically with Altman, and easily could have. It's a hugely damaging thing to have your own board member come out critically against your company. It's doubly so when it blindsides the CEO.

She had many, many other options available to her that she did not take. That was a grave mistake and she paid for it.

"But what about academic integrity?" Yes! That's why this whole idea was problematic from the beginning. She can't be objective and fulfill her role as board member. Her role at Georgetown was in direct conflict with her role on the OpenAI board.
