zlacker

[return to "How does misalignment scale with model intelligence and task complexity?"]
1. jmtull+P8 2026-02-03 01:26:43
>>salkah+(OP)
The comments so far seem focused on taking a cheap shot, but as somebody working on using AI to help people with hard, long-term tasks, I find it a valuable piece of writing.

- It's short and to the point

- It's actionable in the short term (make sure the tasks per session aren't too difficult) and useful for researchers in the long term

- It's informative about how these models work, and it's informed by some of the best in the business

- It gives us a specific vector to look at, clearly defined ("coherence", or, more fun, "hot mess")

2. nth21+Hw2 2026-02-03 17:39:41
>>jmtull+P8
There’s not a useful argument here. The article uses current AI to extrapolate future AI failure modes. If future models solve the ‘incoherence’ problem, that leaves bias as the primary source of failure (the author apparently treats these as the only two possible failure modes).
3. toroid+am3 2026-02-03 21:15:48
>>nth21+Hw2
That doesn't seem like a useful argument either.

If future AI only manages to solve the variance problem, then it will have problems related to bias.

If future AI only manages to solve the bias problem, then it will have problems related to variance.

If problem X is solved, then the system that solved it won't have problem X. That's not very informative without some estimate of how likely it is that X can or will be solved, and the behavior of current AI is a better prior for that than "something will happen".
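
To make that concrete, here's a minimal sketch of the standard bias/variance split the thread is leaning on (my own toy illustration, not something from the article): a biased agent is wrong in a consistent direction, a "hot mess" agent is right on average but erratic, and both show up as error, just decomposed differently.

    # Toy illustration of bias vs. variance as failure modes (not from the article).
    import numpy as np

    rng = np.random.default_rng(0)
    true_value = 10.0
    n_trials = 100_000

    # "Biased but coherent": always answers 12, no spread.
    biased = np.full(n_trials, 12.0)

    # "Hot mess": centered on the right answer, but with a wide spread.
    hot_mess = rng.normal(loc=true_value, scale=4.0, size=n_trials)

    def decompose(estimates, target):
        """Return (bias^2, variance, mean squared error) for a set of estimates."""
        bias_sq = (estimates.mean() - target) ** 2
        variance = estimates.var()
        mse = ((estimates - target) ** 2).mean()
        return bias_sq, variance, mse

    for name, est in [("biased", biased), ("hot mess", hot_mess)]:
        b2, var, mse = decompose(est, true_value)
        # In both cases mse ~= bias^2 + variance; only the split differs.
        print(f"{name:8s}  bias^2={b2:5.2f}  variance={var:5.2f}  mse={mse:5.2f}")

Driving one term to zero leaves the other untouched, which is why the informative question is which term future systems actually reduce, and how likely that is.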
