zlacker

[parent] [thread] 8 comments
1. ben_w+(OP)[view] [source] 2023-11-22 08:40:34
He says nice things about his team (and even about his critics) when in public.

But my reading of this drama is that the board was seen as literally insane, not that Altman was seen as spectacularly heroic or an underdog.

replies(2): >>stingr+73 >>bnralt+tP
2. stingr+73[view] [source] 2023-11-22 09:07:00
>>ben_w+(OP)
My reading of all this is that the board is both incompetent and saddled with a number of massive conflicts of interest.

What I don’t understand is why they were allowed to stay on the board with all these conflicts of interest while having no (financial) stake in OpenAI. One of the board members even openly admitted that she considered destroying OpenAI a successful outcome of her duty as a board member.

replies(2): >>Sebb76+b9 >>serial+D9
3. Sebb76+b9[view] [source] [discussion] 2023-11-22 10:00:25
>>stingr+73
> One of the board members even openly admitting that she considered destroying OpenAI a successful outcome of her duty as board member.

I don't see how this particular statement underscores your point. OpenAI is a non-profit with the declared goal of making AI safe and useful for everyone; if it fails to reach that or even actively subverts that goal, destroying the company does seem like the ethical action.

replies(2): >>DebtDe+ck >>smegge+4W1
4. serial+D9[view] [source] [discussion] 2023-11-22 10:03:33
>>stingr+73
It's probably not easy (practically impossible if you ask me) to find people who are both capable of leading an AI company at the scale of OpenAI and have zero conflicts of interest. Former colleagues, friends, investments, advisory roles, personal beefs with people in the industry, pitches they have heard, insider knowledge they had access to, previous academic research pushing an agenda, etc.

If having both isn't possible, I'd also rather compromise on the "conflicts of interest" part than on the members' competence.

replies(1): >>cables+mk
5. DebtDe+ck[view] [source] [discussion] 2023-11-22 11:38:43
>>Sebb76+b9
This just underscores the absurdity of their corporate structure. AI research requires expensive researchers and expensive GPUs. Investors funding the research program don't want to be beholden to some non-profit parent organization run by a small board of nobodies who think their position gives them the power to destroy the whole thing if they believe it's straying from its utopian mission.
replies(1): >>ethanb+qm
6. cables+mk[view] [source] [discussion] 2023-11-22 11:40:16
>>serial+D9
I volunteer as tribute.

I don't have much in the way of credentials (I took one class on A.I. in college and have only dabbled in it since, I work on systems that don't need to scale anywhere near as much as ChatGPT does, and while I've been an early startup employee a couple of times I've never run a company), but based on the past week I think I'd do a better job, and I can fill in the gaps as best I can after the fact.

And I don't have any conflicts of interest. I'm a total outsider, I don't have any of that shit you mentioned.

So yeah, vote for me, or whatever.

Anyway, my point is I'm sure there are actually quite a few people who could likely do a better job and don't have a conflict of interest (at least not one as obvious as investing in a direct competitor); they're just not already part of the elite circles that would pretty much be necessary to even get on these people's radar in order to be considered in the first place. I don't really mean me; I'm sure there are other, better candidates.

But then they wouldn't have the cachet of 'Oh, that guy co-founded Twitch. That for-profit company is successful, that must mean he'd do a good job! (at running a non-profit company that's actively trying to bring about AGI that will probably simultaneously benefit and hurt the lives of millions of people)'.

7. ethanb+qm[view] [source] [discussion] 2023-11-22 11:57:02
>>DebtDe+ck
They don’t “think” that. It does do that, and it does so by design, exactly because as you approach a technology as powerful as AI there will be strong commercial incentives to capture its value creation.

Gee whiz, almost… exactly like what is happening?

8. bnralt+tP[view] [source] 2023-11-22 14:42:35
>>ben_w+(OP)
Right. At least some of the board members took issue with ChatGPT being released at all, and wanted more to be kept from the public. For the people who use these tools every day, it shouldn't be surprising that Altman was viewed as the better choice.
9. smegge+4W1[view] [source] [discussion] 2023-11-22 19:42:16
>>Sebb76+b9
Because destroying OpenAI wouldn't make AI safe; it would just remove anyone working on alignment from having an influence on it. Microsoft and others are interested in making it benevolent, but go along with it because OpenAI is the market leader.