zlacker

[return to "OpenAI's Long-Term AI Risk Team Has Disbanded"]
1. mgdev+B4 2024-05-17 15:44:40
>>robbie+(OP)
Yes, it's valuable to have a small research team that focuses on R&D outside the production loop.

But when you give them a larger remit, and structure teams so that some own "value" and others essentially own "risk", the risk teams tend to attract navel-gazers and/or coasters. They wield their authority like a whip without regard for business value.

The problem is that the incentives tend to be totally misaligned. Instead, the team that ships the "value" also needs to own its own risk management - metrics and counter-metrics - with management holding it accountable for striking the balance.

2. 23B1+16 2024-05-17 15:52:46
>>mgdev+B4
No, this just puts the fox in the henhouse. Systems of checks and balances between independent entities exist for a reason.

Without them internally, it'll just fall to regulators, which of course is what shareholders want: to privatize upside and socialize downside.

3. mgdev+i7 2024-05-17 15:59:48
>>23B1+16
Agree that you need checks and balances, but there are better and worse systems.

As someone who has scaled orgs from tens to thousands of engineers, I can tell you: you need value teams to own their own risk.

A small, central R&D team may work with management to set the bar, but they can't be responsible for mitigating the risk on the ground - and they shouldn't be led to believe that that's their job. It never works and creates bad team dynamics. Either the central team goes too far, or they feel ignored. (See: security, compliance.)
