zlacker

[return to "How does misalignment scale with model intelligence and task complexity?"]
1. gnarlo+AF 2026-02-03 05:58:20
>>salkah+(OP)
I feel vindicated when I say that the superintelligence control problem is a total farce: we won't get to superintelligence, and believing we will is tantamount to a religious belief. The real problem is the billionaire control problem. The human-race-on-earth control problem.
2. HNisCI+pI 2026-02-03 06:22:33
>>gnarlo+AF
Yeah, article aside, looking back on all the AGI stuff from the last year or so really puts our current moment in perspective.

This whole paradigm of AI research is cool and all, but it's ultimately a simple machine that probabilistically forms text. It's really good at making stuff that sounds smart, but like an AI-generated picture, it falls apart the harder you look at it. It's good at producing stuff that looks like code and often kinda works, but based on the other comments in this thread, I don't think people really grasp how these models work.
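To spell out what I mean by "probabilistically forms text", here's a toy sketch of the sampling loop. The five-word vocabulary and the logit table are completely made up for illustration; in a real model a neural net produces the logits over tens of thousands of tokens, but the loop is the same shape:

    import random
    import math

    # Toy stand-in for a language model: a lookup table of made-up
    # "logits" (unnormalized scores) for the next token given the
    # previous one. A real LLM computes these with a neural net
    # conditioned on the whole context, not just one token.
    VOCAB = ["the", "model", "predicts", "text", "."]
    LOGITS = {
        "<s>":      [2.0, 0.5, 0.1, 0.3, 0.0],
        "the":      [0.1, 2.5, 0.2, 1.0, 0.0],
        "model":    [0.1, 0.2, 2.5, 0.3, 0.5],
        "predicts": [1.5, 0.3, 0.1, 2.0, 0.5],
        "text":     [0.2, 0.1, 0.1, 0.3, 2.5],
    }

    def sample_next(prev: str) -> str:
        logits = LOGITS[prev]
        # softmax: turn scores into a probability distribution
        exps = [math.exp(x) for x in logits]
        probs = [e / sum(exps) for e in exps]
        # then sample one token according to those probabilities
        return random.choices(VOCAB, weights=probs, k=1)[0]

    prev, out = "<s>", []
    while True:
        tok = sample_next(prev)
        out.append(tok)
        if tok == "." or len(out) >= 10:
            break
        prev = tok
    print(" ".join(out))

That's the whole trick, repeated token by token; everything impressive about the output comes down to how good those scores are.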
