zlacker

[return to "OpenAI's Long-Term AI Risk Team Has Disbanded"]
1. mgdev+B4 2024-05-17 15:44:40
>>robbie+(OP)
Yes, it's valuable to have a small research team that focuses on R&D outside the production loop.

But when you give them a larger remit, and structure teams with some owning "value" and others essentially owning "risk", the risk teams tend to attract navel-gazers and/or coasters. They wield their authority like a whip without regard for business value.

The problem is that the incentives tend to be totally misaligned. Instead, the team that ships the "value" also needs to own its own risk management, metrics and counter-metrics alike, with management holding it accountable for striking the balance.

2. germin+o7 2024-05-17 16:00:25
>>mgdev+B4
In the general case, I mostly agree. But it cracks me up that this is the prevailing attitude when it comes to our industry; when we see police departments or government agencies trying to follow the same playbook, we immediately point out how laughable that is and how it doesn't result in real accountability.

In this specific case, though, Sam Altman's narrative is that they created an existential risk to humanity and that access to it needs to be restricted for everyone else. So which is it?

3. nprate+d9 2024-05-17 16:12:02
>>germin+o7
Neither. It was pure hype to get free column inches.

Anyone who's used their AI and discovered how it ignores instructions and makes things up isn't going to honestly believe it poses an existential threat any time soon. Now that they're big enough, they can end the charade.

4. saulpw+ea 2024-05-17 16:17:37
>>nprate+d9
Ignoring instructions and making things up may very well be an existential threat. Just not of the SkyNet variety.