zlacker

[parent] [thread] 4 comments
1. arkety+(OP)[view] [source] 2023-11-22 07:02:03
For all the talk about responsible progress, the irony of their inability to align even their own incentives in this enterprise deserves ridicule. It's a big blow to their credibility and calls into question whatever ethical concerns they claim to hold.
replies(2): >>dmix+n1 >>concor+ed
2. dmix+n1[view] [source] 2023-11-22 07:11:28
>>arkety+(OP)
It's fear-driven as much as moral, which in an emotional human's brain tends to trigger personal ambition to solve it ASAP. A more rational one would realize you need more than just a couple of board members to win a major ideological battle.

At a minimum, you need a plan that doesn't immediately result in a backlash where 90% of the engineers most responsible for recent AI dev want you gone, when your whole plan is to control what those people do.

3. concor+ed[view] [source] 2023-11-22 08:39:39
>>arkety+(OP)
Alignment is considered an extremely hard problem for a reason. It's already nigh impossible when you're dealing with humans.

Btw: do you think ridicule would be helpful here?

replies(1): >>arkety+4e
4. arkety+4e[view] [source] [discussion] 2023-11-22 08:45:58
>>concor+ed
I can see how ridicule of this specific instance could be the best medicine for an optimal outcome, even by a utilitarian argument (which, by the way, I generally don't like to make). It is indeed nigh impossible, which is kind of my point. They could have shown more humility. If anything, this whole debacle has been a moral victory for e/acc, seeing how the brightest of minds are at a loss dealing with alignment anyway.
replies(1): >>Feepin+Kh
5. Feepin+Kh[view] [source] [discussion] 2023-11-22 09:19:46
>>arkety+4e
I don't understand how the conclusion of this is "so we should proceed with AI" rather than "so we should immediately outlaw all foundation model training". Clearly corporate self-governance has failed completely.