zlacker

[return to "How does misalignment scale with model intelligence and task complexity?"]
1. gnarlo+AF[view] [source] 2026-02-03 05:58:20
>>salkah+(OP)
I feel vindicated when I say that the superintelligence control problem is a total farce: we won't get to superintelligence, and believing otherwise is tantamount to a religious belief. The real problem is the billionaire control problem. The human-race-on-earth control problem.
2. MrOrel+fG[view] [source] 2026-02-03 06:04:20
>>gnarlo+AF
I don’t believe the article makes any claims about the infeasibility of a future ASI. It just explores likely failure modes.

It is fine to be worried about both alignment risks and economic inequality. The world is complex and there are many problems at once; we don’t have to address one at the expense of the other.
