zlacker

[return to "How does misalignment scale with model intelligence and task complexity?"]
1. jmtull+P8 2026-02-03 01:26:43
>>salkah+(OP)
The comments so far seem focused on taking cheap shots, but as somebody working on using AI to help people with hard, long-term tasks, I find it a valuable piece of writing.

- It's short and to the point

- It's actionable in the short term (make sure the tasks per session aren't too difficult) and useful for researchers in the long term

- It's informative on how these models work, informed by some of the best in the business

- It gives us a specific, clearly defined vector to look at ("coherence", or, more fun, "hot mess")

2. nth21+Hw2 2026-02-03 17:39:41
>>jmtull+P8
There’s no useful argument here. The article extrapolates future AI failure modes from current AI. If future models solve the ‘incoherence’ problem, that leaves bias as the primary source of failure (the author apparently treats these as the only two possible failure modes).