zlacker

[return to "OpenAI's Long-Term AI Risk Team Has Disbanded"]
1. mgdev+B4[view] [source] 2024-05-17 15:44:40
>>robbie+(OP)
Yes, it's valuable to have a small research team who focuses on R&D outside the production loop.

But when you give them a larger remit, and structure teams with some owning "value" and and others essentially owning "risk", the risk teams tend to attract navel-gazers and/or coasters. They wield their authority like a whip without regard for business value.

The problem is the incentives tend to be totally misaligned. Instead the team that ships the "value" also needs to own their own risk management - metrics and counter metrics - with management holding them accountable for striking the balance.

◧◩
2. seanhu+K5[view] [source] 2024-05-17 15:51:06
>>mgdev+B4
Completely agree with this. Everyone doing AI has to do AI that is both valuable and responsible. You can't have an org structure where the valuable AI and responsible AI teams are in some kind of war against each other. It will just never work well.[1]

Imagine if vehicle manufacturers[2] split their design and R&D teams into a "make the thing go" team and a "don't kill the passengers" team. Literally noone would think that arrangement made sense.

I can totally see when we are at a state of significant maturity of both AI and AI regulation that you have a special part of your legal team that are specialised in legal/regulatory compliance issues around AI just like companies tend to have specialised data privacy compliance experts. But we're not there yet.

[1] If you're serious about long-term AI risk and alignment research, sponsor some independent academic research that gets published. That way it's arms-length and genuinely credible.

[2] If you like you can maybe mentally exclude Boeing in this.

◧◩◪
3. doktri+Lb[view] [source] 2024-05-17 16:26:06
>>seanhu+K5
1. Audit / evaluation / quality assurance teams exist across multiple verticals from multinationals to government, and cannot reliably function when overly subservient to the production or “value creating” side

2. Boeing is a good and timely example of the consequences of said internal checks and balances collapsing under “value creation” pressure. That was a catastrophic failure which still can’t reasonably be compared to the downside of misaligned AI.

◧◩◪◨
4. seanhu+Ac[view] [source] 2024-05-17 16:30:22
>>doktri+Lb
I agree with you on both points, but they have QA which is 1. The long-term risk team was more of a research/futurology/navel-gazing entity rather than a qa/audit function. I would say if you have any possible safety/alignment test that you can feasibly run it should be part of the CI/CD pipline and be run during training also. That's not what that group was doing.
◧◩◪◨⬒
5. doktri+xS1[view] [source] 2024-05-18 11:22:33
>>seanhu+Ac
That’s quite the narrow goalpost you’ve set up. What happens if a problem can’t be expressed as a Jenkins pipeline operation?
[go to top]