zlacker

[parent] [thread] 39 comments
1. hn_thr+(OP)[view] [source] 2023-11-18 23:24:32
Dude, everyone already thinks the board did a crazy ill-advised thing. They're about to be the board of like a 5 person or so company if they double down and commit.

To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness. A bigger sign of weakness in my opinion is people who commit to a shitty idea just because they said it first, despite all evidence to the contrary.

replies(11): >>johnfn+41 >>skygaz+R2 >>Aeolun+b3 >>LewisV+M4 >>015a+w9 >>balls1+Ec >>rfrey+Wc >>Pheoni+Tn >>moogly+Lp >>wwtrv+Pr >>roguas+JZ1
2. johnfn+41[view] [source] 2023-11-18 23:30:15
>>hn_thr+(OP)
> To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness.

The weakness was the first decision; it’s already past the point of deciding if the board is a good steward of OpenAI or not. Sometimes backtracking can be a point of strength, yes, but in this case waffling just makes them look even dumber.

3. skygaz+R2[view] [source] 2023-11-18 23:39:37
>>hn_thr+(OP)
I’m not advocating people double down on stupid, or that correcting your mistakes is bad optics. I’m simply saying they’re “increasingly revealing” pre-existing unfitness at each ham-fisted step. I think our increase in knowledge of their foolishness is a good thing. And often correcting a situation isn’t the same as undoing it, because undoing is often not possible or has its own consequences. I do appreciate your willingness to let them grow into their responsibilities despite it all — that’s a rare charity extended to an incompetent board.
replies(1): >>hn_thr+IG
4. Aeolun+b3[view] [source] 2023-11-18 23:41:37
>>hn_thr+(OP)
Depends entirely on how you do it. You can do something and backtrack in a shitty way too.

If they wanted to show they’re committed to backtracking they could resign themselves.

Now it sounds more like they want to have their cake and eat it.

5. LewisV+M4[view] [source] 2023-11-18 23:50:32
>>hn_thr+(OP)
> To be honest I hate takes like yours, where people think that acknowledging a mistake (even a giant mistake) is a sign of weakness. A bigger sign of weakness in my opinion is people who commit to a shitty idea just because they said it first, despite all evidence to the contrary.

Lmfao you're joking if you think they "realized their mistake" and are now atoning.

This is 99% from Microsoft & OpenAI's other investors.

replies(1): >>jacque+zb
6. 015a+w9[view] [source] 2023-11-19 00:11:46
>>hn_thr+(OP)
Bad take. Not "everyone" feels that what they did was wrong. We don't have insight into what's going on internally. Optics matter; the division over their decision means that it's definitionally non-obvious what the correct path forward is; or, that there isn't one correct path, but multiple reasonable paths. To admit a mistake of this magnitude is to admit that you're either so unprincipled that your mind can be changed at a whim, or that you didn't think through the decision enough preemptively. These are absolutely signs of weakness in leadership.
replies(4): >>peyton+hc >>vikram+Ed >>tick_t+We >>hn_thr+UF
7. jacque+zb[view] [source] [discussion] 2023-11-19 00:22:45
>>LewisV+M4
> This is 99% from Microsoft & OpenAI's other investors.

Exactly. You can bet there have been some very pointed exchanges about this.

replies(1): >>merpnd+Hc
8. peyton+hc[view] [source] [discussion] 2023-11-19 00:27:22
>>015a+w9
Satya is “furious.” What’s reasonable about pissing off a guy who can pull the plug? I don’t think it’s definitionally non-obvious whether to take that risk.
replies(2): >>option+Jd >>no_wiz+5j
9. balls1+Ec[view] [source] 2023-11-19 00:30:16
>>hn_thr+(OP)
This isn’t a shitty idea. The board fired its CEO and the next day is apparently asking him to come back.

At this point, I don’t care how it resolves—the people who made that decision should be removed for sheer incompetence.

10. merpnd+Hc[view] [source] [discussion] 2023-11-19 00:30:25
>>jacque+zb
Yeah, Satya likely just hired a thousand new lawyers just to sue OpenAI for being idiots.
replies(1): >>jacque+se
11. rfrey+Wc[view] [source] 2023-11-19 00:31:40
>>hn_thr+(OP)
You're not wrong, but in this case not enough time has passed for the situation to change or for new facts to emerge. It's been a bit over a day. All a flip-flop in that short timeframe does is indicate that the board did not fully think through their actions. And taking a step like this without careful consideration is a sign of incompetence.
12. vikram+Ed[view] [source] [discussion] 2023-11-19 00:35:59
>>015a+w9
Whether or not you agree with the decision they obviously screwed up the execution something awful. This is humiliating for them and honestly setting altman free like they did was probably the permanent end of AI safety. Just take someone with all the connections and the ability to raise billions of dollars overnight and set them free without any of the shackles of AI ethics people in a way that makes all the people with money want to support him? That's how you get skynet
replies(1): >>015a+dS
13. option+Jd[view] [source] [discussion] 2023-11-19 00:36:17
>>peyton+hc
Yeah, he can be furious all he wants but he is not getting the OpenAI he used to have back. It’s either Sam + Greg now or Ilya. All 3 are irreplaceable.
14. jacque+se[view] [source] [discussion] 2023-11-19 00:41:55
>>merpnd+Hc
I so wish I could be a fly on the wall in all this. There's got to be some very interesting moves and countermoves. This isn't over yet.
15. tick_t+We[view] [source] [discussion] 2023-11-19 00:44:53
>>015a+w9
> Bad take. Not "everyone" feels that what they did was wrong.

But everyone important does, so who cares about the rest?

replies(1): >>no_wiz+tf
16. no_wiz+tf[view] [source] [discussion] 2023-11-19 00:48:42
>>tick_t+We
You mean the “the rest” as in the people who execute on the company vision?

It’s really dismissive toward the rank and file to think that they don’t matter at all.

replies(2): >>threes+Sh >>hn_thr+cE
17. threes+Sh[view] [source] [discussion] 2023-11-19 01:03:00
>>no_wiz+tf
a) The company vision up until this point included commercial products.

b) Altman personally hired many of the rank and file.

c) OpenAI doesn't exist without customers, investors or partners. And in this one move the board has alienated all three.

replies(1): >>no_wiz+si
18. no_wiz+si[view] [source] [discussion] 2023-11-19 01:08:30
>>threes+Sh
I seriously doubt customers or (most) partners care about this. I have yet to hear of a single customer or partner leaving the service, and I do not believe it to be likely. Put simply, unless they shut down their offerings on Monday, they will keep their customers.

Investors care, but if new management can keep the gravy track, they ultimately won’t care either.

Companies pivot all the time. Who is to say the new vision isn’t favored by the majority of the company?

replies(3): >>threes+Uo >>wwtrv+Ks >>kcb+Ew
19. no_wiz+5j[view] [source] [discussion] 2023-11-19 01:12:32
>>peyton+hc
Last I checked he only had 49% of the company.

I also feel that they can patch relationships. Satya may be upset now, but will he continue to be upset on Monday?

It needs to play out more before we know, I think. They need to pitch their plan to outside stakeholders now.

replies(1): >>discor+2o
20. Pheoni+Tn[view] [source] 2023-11-19 01:49:54
>>hn_thr+(OP)
> Dude, everyone already thinks the board did a crazy ill-advised thing.

I've honestly never had more hope for this industry than when it was apparent that Altman was pushed out by engineering for forgoing the mission to create world changing products in favor of the usual mindless cash grab.

The idea that people with a passion for technical excellence and true innovation might be able to steer OpenAI to do something amazing was almost unbelievable.

That's why I'm not too surprised to see that it probably won't really play out that way, and will likely end with OpenAI turning even faster into yet another tech company worried exclusively about next quarter's revenue.

21. discor+2o[view] [source] [discussion] 2023-11-19 01:51:12
>>no_wiz+5j
Which other company will give them the infra/compute they need when 49% of the profitable part has been eaten up?
replies(2): >>threes+op >>no_wiz+Kr
22. threes+Uo[view] [source] [discussion] 2023-11-19 01:57:19
>>no_wiz+si
The fact that this happened so soon after Developer Day is a clear signal that the board wasn't happy with that direction.

Which is why every developer/partner including Microsoft is going to be watching this situation unfold with trepidation.

And I don't know how you can "keep the gravy track" when you want the company to move away from commercialisation.

23. threes+op[view] [source] [discussion] 2023-11-19 01:59:55
>>discor+2o
And how will they survive if Microsoft/SamAi ends up building a competitor?

Microsoft could run the entire business as a loss just to attract developers to Azure.

replies(1): >>no_wiz+gr
24. moogly+Lp[view] [source] 2023-11-19 02:01:31
>>hn_thr+(OP)
"When faced with multiple options, the most important thing is to just pick one and stick with it."

"Disagree and commit."

- says every CEO these days

25. no_wiz+gr[view] [source] [discussion] 2023-11-19 02:10:40
>>threes+op
That assumes Altman's competitor can outpace and outclass OpenAI, and maybe it can. I know Anthropic came about from earlier disagreements, and that certainly didn't slow OpenAI's innovation pace.

Everything just assumes that without Sam they’re worse off.

But what if, my gosh, they aren’t? What if innovation accelerates?

My point is that it's useless to assume a new Altman venture competing with OpenAI will inherently be successful. There's more to it than that.

replies(3): >>qwytw+1v >>palebl+8F >>int_19+IW
26. no_wiz+Kr[view] [source] [discussion] 2023-11-19 02:13:38
>>discor+2o
First it remains to be seen if Microsoft is going to do something drastic.

I also suspect they could very well secure this kind of agreement from another company that would be happy to play ball for access to OpenAI tech. Perhaps Amazon, for instance, whose AI attempts since Alexa have been lackluster.

27. wwtrv+Pr[view] [source] 2023-11-19 02:14:22
>>hn_thr+(OP)
> is a sign of weakness

It's often a sign of incompetence though. Or rather a confirmation of it.

28. wwtrv+Ks[view] [source] [discussion] 2023-11-19 02:18:23
>>no_wiz+si
> I have yet to hear of a single customer or partner leave the service

Which doesn't mean a lot. Of course they'd wait for this to play out before committing to anything.

> but if new management can keep the gravy track

I got the vague impression that this whole thing was partially about stopping the gravy train? In any case Microsoft won't be too happy about being entirely blindsided (if that was the case) and probably won't really trust the new management.

29. qwytw+1v[view] [source] [discussion] 2023-11-19 02:32:10
>>no_wiz+gr
> Everything just assumes that without Sam they’re worse off.

But it's not just him is it?

replies(1): >>no_wiz+Pw
30. kcb+Ew[view] [source] [discussion] 2023-11-19 02:42:01
>>no_wiz+si
The new management has declared that their primary goal in all this was to stop the gravy track.
replies(1): >>no_wiz+IB
31. no_wiz+Pw[view] [source] [discussion] 2023-11-19 02:42:55
>>qwytw+1v
Sure, I suppose not, but they aren’t losing everyone en masse. Simply Altman supporters so far.

I think a wait-and-see approach is better. If I were speculating, I'd say some inner politics spilled public because Altman needs the public pressure to get his job back.

32. no_wiz+IB[view] [source] [discussion] 2023-11-19 03:14:01
>>kcb+Ew
I don’t think there has been a formal announcement on the new direction yet
33. hn_thr+cE[view] [source] [discussion] 2023-11-19 03:34:10
>>no_wiz+tf
> It’s really dismissive toward the rank and file to think that they don’t matter at all.

I had the exact opposite take. If I were rank and file I'd be totally pissed at how this all went down, and at the fact that there are really only 2 possible outcomes:

1. Altman and Brockman announce another company (which has kind of already happened), so basically every "rank and file" person is going to have to decide which "War of the Roses" team they want to be on.

2. Altman comes back to OpenAI, which in any case will result in tons of turmoil and distraction (obviously already has), when most rank and file people just want to do their jobs.

34. palebl+8F[view] [source] [discussion] 2023-11-19 03:40:03
>>no_wiz+gr
> Everything just assumes that without Sam they’re worse off.
>
> But what if, my gosh, they aren’t? What if innovation accelerates?

It reads like they ousted him because they wanted to slow the pace down, so by design and intent it would seem unlikely innovation would accelerate. Which seems doubly bad if they effectively spawned a competitor made up of all the other people who wanted to move faster.

35. hn_thr+UF[view] [source] [discussion] 2023-11-19 03:44:25
>>015a+w9
> These are absolutely signs of weakness in leadership.

The signs of "weakness in leadership" by the board already happened. There is no turning back from that. The only remaining decision is how much fuck-uppery they want to continue with.

Like others have said, regardless of what is the "right" direction for OpenAI, the board executed this so spectacularly poorly that even if you believe everything that has been reported about their intentions (i.e. that Altman was more concerned with commercializing and productizing AI, while Sutskever was worried about developing AI responsibly with more safeguards), all they've done is fucked over OpenAI.

I mean, given the reports about who has already resigned (not just Altman and Brockman but also many other folks in top engineering leadership), it's pretty clear that plenty of other people would follow Altman to whatever AI venture he wants to build. If another competitor leapfrogs OpenAI, their concerns about "moving too fast" will be irrelevant.

36. hn_thr+IG[view] [source] [discussion] 2023-11-19 03:49:10
>>skygaz+R2
Yeah, I agree with that. I think the board has to have been genuinely surprised by the sheer blowback they're getting, i.e. not just Brockman quitting but lots of their other top engineering leaders.

Regarding your last sentence, it's pretty obvious that if Altman comes back, the current board will effectively be neutered (it says as much in the article). So my guess is that they're more in "what do we do to save OpenAI as an organization" than saving their own roles.

37. 015a+dS[view] [source] [discussion] 2023-11-19 05:21:39
>>vikram+Ed
I tend to think: We, the armchair commentators, do not know what happened internally. I don't know enough to know that the board's execution wasn't the best case scenario to achieve their goal of aligning the entire organization with the non-profit's mission. All I feel comfortable saying with certainty is that it's messy. Anything like this would inevitably be messy.
replies(1): >>vikram+N11
38. int_19+IW[view] [source] [discussion] 2023-11-19 06:07:06
>>no_wiz+gr
The thing I really want to know is how many of the people who have already quit or have threatened to quit are actual researchers working on the base model, like Sutskever.
39. vikram+N11[view] [source] [discussion] 2023-11-19 06:58:30
>>015a+dS
Right, and that's what I'm saying. It's messy. They screwed up. Messy is bad. If they needed to get rid of him this last-minute and make a statement 30 minutes before market close, then the failure happened earlier.
40. roguas+JZ1[view] [source] 2023-11-19 15:42:30
>>hn_thr+(OP)
Acknowledging a mistake so early seems like a sign of weakness to me. Hold the hot rod for at least a minute, see if the initial pain goes away. After that, the acknowledgement may begin to look like part of learning and get more acceptance, rather than: oopsie doodle, revert now!!!