zlacker

[return to "OpenAI's Long-Term AI Risk Team Has Disbanded"]
1. mgdev+B4[view] [source] 2024-05-17 15:44:40
>>robbie+(OP)
Yes, it's valuable to have a small research team who focuses on R&D outside the production loop.

But when you give them a larger remit, and structure teams so that some own "value" and others essentially own "risk", the risk teams tend to attract navel-gazers and/or coasters. They wield their authority like a whip without regard for business value.

The problem is that the incentives tend to be totally misaligned. Instead, the team that ships the "value" also needs to own its own risk management - metrics and counter-metrics - with management holding it accountable for striking the balance.

2. Hasu+U5[view] [source] 2024-05-17 15:52:09
>>mgdev+B4
The purpose of the risk team at OpenAI was to prevent the destruction of humanity.

I think you definitely want people who have that responsibility to "wield their authority like a whip without regard for business value".

Now, whether you buy OpenAI's hype about the potential danger (and value) of their products is up to you, but when the company says, "We're getting rid of the team that makes sure we don't kill everyone", there is a message being sent. Whether it's "We don't really think our technology is that dangerous (and therefore valuable)" or "We don't really care if we accidentally kill everyone", it's not a good message.

[go to top]