zlacker

[parent] [thread] 7 comments
1. tsunam+(OP)[view] [source] 2023-11-19 04:44:44
Except you are discounting the major player with all the hard power, who can literally call any shot with money
replies(2): >>chatma+Z >>Random+p2
2. chatma+Z[view] [source] 2023-11-19 04:52:24
>>tsunam+(OP)
You mean Microsoft, who hasn't actually paid them all the money they said they eventually would, and who can change their Azure billing arrangement at any time?

Sure, I guess I didn't consider them, but you can lump them into the same "media campaign" (while accepting that they're applying some additional, non-media related leverage) and you'll come to the same conclusion: the board is incompetent. Really the only argument I see against this is that the legal structure of OpenAI is such that it's actually in the board's best interest to sabotage the development of the underlying technology (i.e. the "contain the AGI" hypothesis, which I don't personally subscribe to - IMO the structure makes such decisions more difficult for purely egotistical reasons; a profit motive would be morally clarifying).

3. Random+p2[view] [source] 2023-11-19 05:02:30
>>tsunam+(OP)
The objective functions might be different enough that there is nothing the hard power can do to get what it wants from OpenAI. The non-profit might consider a wind-down more in line with its mission than the alternatives, for example.
replies(1): >>chatma+34
4. chatma+34[view] [source] [discussion] 2023-11-19 05:19:20
>>Random+p2
The threat to the hard power is that a new company emerges to compete with them, and it's led by the same people they just fired.

If your objective is to suppress the technology, the emergence of an equally empowered competitor is not a development that helps your cause. In fact there's this weird moral ambiguity where your best move is to pretend to advance the tech while actually sabotaging it. Whereas by attempting to simply excise it from your own organization's roadmap, you push its development outside your control (since Sam's Newco won't be beholden to any of your sanctimonious moral constraints). And the unresolvability of this problem, IMO, is evidence of why the non-profit motive can't work.

As a side-note: it's hilarious that six months ago OpenAI (and thus Sam) was the poster child for the nanny AI that knows what's best for the user, but this controversy has inverted that perception to the point that most people now see Sam as a warrior for user-aligned AGI... the only way he could fuck this up is by framing the creation of Newco as a pursuit of safety.

replies(1): >>Random+D4
5. Random+D4[view] [source] [discussion] 2023-11-19 05:24:17
>>chatma+34
If they cannot fulfill their mission one way or another (because it isn't resolvable within the structure), then dissolution isn't a bad option, I'd say.
replies(1): >>chatma+b5
6. chatma+b5[view] [source] [discussion] 2023-11-19 05:28:59
>>Random+D4
That's certainly a purist way of looking at it, and I don't disagree that it's the most aligned with their charter. But it also seems practically ineffective, even - no, especially - when considered within the context of that charter. Because by shutting it down (or sabotaging it), they're not just making a decision about their own technology; they're also yielding control of it to groups that are not beholden to the same constraints.
replies(1): >>Random+x5
7. Random+x5[view] [source] [discussion] 2023-11-19 05:31:40
>>chatma+b5
Given that their control over the technology at large is limited anyway, they are already (somewhat?) ineffective, I would think. Not sure what a really good and attainable position for them would look like in that respect.
replies(1): >>chatma+H5
8. chatma+H5[view] [source] [discussion] 2023-11-19 05:33:07
>>Random+x5
Yeah, agreed. But that's also why I feel the whole moral sanctimony is a pointless pursuit in the first place. The tech is coming, from somewhere, whether you like it or not. Never in history has a technological revolution been stopped.