zlacker

[parent] [thread] 4 comments
1. seanhu+(OP)[view] [source] 2024-05-17 15:51:06
Completely agree with this. Everyone doing AI has to do AI that is both valuable and responsible. You can't have an org structure where the valuable AI and responsible AI teams are in some kind of war against each other. It will just never work well.[1]

Imagine if vehicle manufacturers[2] split their design and R&D teams into a "make the thing go" team and a "don't kill the passengers" team. Literally noone would think that arrangement made sense.

I can totally see when we are at a state of significant maturity of both AI and AI regulation that you have a special part of your legal team that are specialised in legal/regulatory compliance issues around AI just like companies tend to have specialised data privacy compliance experts. But we're not there yet.

[1] If you're serious about long-term AI risk and alignment research, sponsor some independent academic research that gets published. That way it's arms-length and genuinely credible.

[2] If you like you can maybe mentally exclude Boeing in this.

replies(2): >>mgdev+b2 >>doktri+16
2. mgdev+b2[view] [source] 2024-05-17 16:04:20
>>seanhu+(OP)
> Imagine if vehicle manufacturers[2] split their design and R&D teams into a "make the thing go" team and a "don't kill the passengers" team.

Rocket caskets. Can't kill someone who is already dead!

3. doktri+16[view] [source] 2024-05-17 16:26:06
>>seanhu+(OP)
1. Audit / evaluation / quality assurance teams exist across multiple verticals from multinationals to government, and cannot reliably function when overly subservient to the production or “value creating” side

2. Boeing is a good and timely example of the consequences of said internal checks and balances collapsing under “value creation” pressure. That was a catastrophic failure which still can’t reasonably be compared to the downside of misaligned AI.

replies(1): >>seanhu+Q6
◧◩
4. seanhu+Q6[view] [source] [discussion] 2024-05-17 16:30:22
>>doktri+16
I agree with you on both points, but they have QA which is 1. The long-term risk team was more of a research/futurology/navel-gazing entity rather than a qa/audit function. I would say if you have any possible safety/alignment test that you can feasibly run it should be part of the CI/CD pipline and be run during training also. That's not what that group was doing.
replies(1): >>doktri+NM1
◧◩◪
5. doktri+NM1[view] [source] [discussion] 2024-05-18 11:22:33
>>seanhu+Q6
That’s quite the narrow goalpost you’ve set up. What happens if a problem can’t be expressed as a Jenkins pipeline operation?
[go to top]