zlacker

[parent] [thread] 8 comments
1. chatma+(OP)[view] [source] 2023-11-19 04:17:49
There's a distinction between what's technically allowed and what's politically allowed. The board has every right to vote Sam and Greg off the island with 4/6 voting in favor. That doesn't mean they won't see resistance to their decision on other fronts, especially those where Sam and Greg have enough soft power that the rest of the board would be obviously ill-advised to contradict them. If the entire media apparatus is on their side, for example (soft power), then the rest of the board needs to consider that before making a decision that they're technically empowered to make (hard power).

IMO, there are basically two justifiably rational moves here: (1) ignore the noise; accept that Sam and Greg have the soft power, but they don't have the votes so they can fuck off; (2) lean into the noise; accept that you made a mistake in firing Sam and Greg and bring them back in a show of magnanimity.

Anything in between these two options is hedging their bets and will lead to them getting eaten alive.

replies(1): >>tsunam+93
2. tsunam+93[view] [source] 2023-11-19 04:44:44
>>chatma+(OP)
Except you're discounting the major player with all the hard power, who can literally call any shot with money.
replies(2): >>chatma+84 >>Random+y5
3. chatma+84[view] [source] [discussion] 2023-11-19 04:52:24
>>tsunam+93
You mean Microsoft, who hasn't actually paid them all the money they've promised, and who can change their Azure billing arrangement at any time?

Sure, I guess I didn't consider them, but you can lump them into the same "media campaign" (while accepting that they're applying some additional, non-media related leverage) and you'll come to the same conclusion: the board is incompetent. Really the only argument I see against this is that the legal structure of OpenAI is such that it's actually in the board's best interest to sabotage the development of the underlying technology (i.e. the "contain the AGI" hypothesis, which I don't personally subscribe to - IMO the structure makes such decisions more difficult for purely egotistical reasons; a profit motive would be morally clarifying).

4. Random+y5[view] [source] [discussion] 2023-11-19 05:02:30
>>tsunam+93
The objective functions might be different enough that there is nothing the hard power can do to get what it wants from OpenAI. The non-profit might consider a wind-down more in line with its mission than anything else, for example.
replies(1): >>chatma+c7
5. chatma+c7[view] [source] [discussion] 2023-11-19 05:19:20
>>Random+y5
The threat to the hard power is that a new company emerges to compete with them, and it's led by the same people they just fired.

If your objective is to suppress the technology, the emergence of an equally empowered competitor is not a development that helps your cause. In fact there's this weird moral ambiguity where your best move is to pretend to advance the tech while actually sabotaging it. Whereas by attempting to simply excise it from your own organization's roadmap, you push its development outside your control (since Sam's Newco won't be beholden to any of your sanctimonious moral constraints). And the unresolvability of this problem, IMO, is evidence of why the non-profit motive can't work.

As a side-note: it's hilarious that six months ago OpenAI (and thus Sam) was the poster child for the nanny AI that knows what's best for the user, but this controversy has inverted that perception to the point that most people now see Sam as a warrior for user-aligned AGI... the only way he could fuck this up is by framing the creation of Newco as a pursuit of safety.

replies(1): >>Random+M7
6. Random+M7[view] [source] [discussion] 2023-11-19 05:24:17
>>chatma+c7
If they cannot fulfill their mission one way or another (because it isn't resolvable within the structure), then dissolution isn't a bad option, I'd say.
replies(1): >>chatma+k8
7. chatma+k8[view] [source] [discussion] 2023-11-19 05:28:59
>>Random+M7
That's certainly a purist way of looking at it, and I don't disagree that it's the most aligned with their charter. But it also seems practically ineffective, even - no, especially - when considered within the context of that charter. Because by shutting it down (or sabotaging it), they're not just making a decision about their own technology; they're also yielding control of it to groups that are not beholden to the same constraints.
replies(1): >>Random+G8
8. Random+G8[view] [source] [discussion] 2023-11-19 05:31:40
>>chatma+k8
Given that their control over the technology at large is limited anyway, they are already (somewhat?) ineffective, I would think. Not sure what a really good and attainable position for them would look like in that respect.
replies(1): >>chatma+Q8
9. chatma+Q8[view] [source] [discussion] 2023-11-19 05:33:07
>>Random+G8
Yeah, agreed. But that's also why I feel the whole moral sanctimony is a pointless pursuit in the first place. The tech is coming, from somewhere, whether you like it or not. Never in history has a technological revolution been stopped.