zlacker

[return to "OpenAI's Long-Term AI Risk Team Has Disbanded"]
1. mgdev+B4 2024-05-17 15:44:40
>>robbie+(OP)
Yes, it's valuable to have a small research team that focuses on R&D outside the production loop.

But when you give them a larger remit, and structure teams with some owning "value" and others essentially owning "risk", the risk teams tend to attract navel-gazers and/or coasters. They wield their authority like a whip, without regard for business value.

The problem is that the incentives tend to be totally misaligned. Instead, the team that ships the "value" also needs to own its own risk management - metrics and counter-metrics - with management holding it accountable for striking the balance.

2. davidi+Hd 2024-05-17 16:35:58
>>mgdev+B4
So what you're saying is OpenAI can't align two teams internally, but they want to align a superintelligence.
3. mgdev+Rh 2024-05-17 17:01:15
>>davidi+Hd
I think "aligning super intelligence" is a nothingburger of a goal, for exactly that reason. It's not a problem that's unique to OpenAI.

The reason you can't "align" AI is because we, as humans on the planet, aren't universally aligned on what "aligned" means.

At best you can align to a particular group of people (a company, a town, a state, a country). But "global alignment" in almost any context just devolves into war or authoritarianism (virtual or actual).
