zlacker

1. Hasu+(OP) 2024-05-17 15:52:09
The purpose of the risk team at OpenAI was to prevent the destruction of humanity.

I think you definitely want people who have that responsibility to "wield their authority like a whip without regard for business value".

Now, whether you buy OpenAI's hype about the potential danger (and value) of their products is up to you, but when the company says, "We're getting rid of the team that makes sure we don't kill everyone", a message is being sent. Whether that message is "We don't really think our technology is that dangerous (and therefore valuable)" or "We don't really care if we accidentally kill everyone", it's not a good one.

replies(2): >>square+c2 >>mgdev+S3
2. square+c2 2024-05-17 16:04:56
>>Hasu+(OP)
> but when the company says, "We're getting rid of the team that makes sure we don't kill everyone", there is a message being sent

Hard not to see a pattern when you consider what they did a few months ago:

https://www.cnbc.com/2024/01/16/openai-quietly-removes-ban-o...

replies(1): >>mnky98+Qj1
3. mgdev+S3 2024-05-17 16:14:52
>>Hasu+(OP)
> The purpose of the risk team at OpenAI was to prevent the destruction of humanity.

Yeah, the problem (in this outsider's opinion) is that that charter is so ill-defined that it's practically useless, which in turn means that any sufficiently loud voice can apply it to anything. It's practically begging to be used as a whip.

> I think you definitely want people who have that responsibility to "wield their authority like a whip without regard for business value".

No, because without any regard for business value there is no enterprise, and then the mission is impossible anyway. Essentially, it creates an incentive where the best outcome is to destroy the company. And hey, that's kinda what almost happened.

> Whether it's "We don't really think our technology is that dangerous (and therefore valuable)" or "We don't really care if we accidentally kill everyone", it's not a good message.

I don't think it has to be as black-and-white as that. Meta, Microsoft, and Google did the same thing; instead, those functions have been integrated more closely into the value teams. And I can't imagine Amazon or Oracle ever having such teams in the first place. They likely all realized the same thing: those teams add huge drag without adding measurable business value.

And yes, there are ways to measure the business value of risk management and weigh it against upside value to decide the correct course of action; it's just that most of those teams in big tech don't actually take a formal "risk management" approach. Instead they pontificate, or copy and enforce.

4. mnky98+Qj1 2024-05-18 03:27:04
>>square+c2
Maybe the message is: these AIs ain't going to take over the world.