zlacker

Thread: "How does misalignment scale with model intelligence and task complexity?"
1. Curiou+g4 2026-02-03 00:56:41
>>salkah+(OP)
This is a good line: "It found that smarter entities are subjectively judged to behave less coherently"

I think there are two reasons for this:

1. Advanced intelligence requires the ability to traverse between domain valleys in the cognitive manifold. Be it via temperature or some fancy tunneling technique, the crossing itself incurs higher error (looks less coherent) than naive gradient following into a local minimum (toy sketch below).

2. It's hard to "punch up" when evaluating intelligence. When someone is a certain amount smarter than you, distinguishing their plausible bullshit from their deep insights is really, really hard.
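To make point 1 concrete, here's a toy sketch of my own (the double-well loss, learning rate, and temperature are all made up for illustration): plain gradient descent settles into whichever valley it starts in, while a temperature-driven Metropolis walker can cross the ridge between valleys, at the cost of taking higher-loss steps along the way.

```python
# Toy illustration (not from the article): a 1-D "loss landscape" with
# two valleys. Gradient descent stays in its starting valley; a
# temperature-driven Metropolis walker can cross the ridge, but its
# intermediate states score worse, i.e. look "less coherent".
import math
import random

def loss(x):
    # Double well: local minimum near x = -1, global minimum near x = +1.
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    return 4 * x * (x**2 - 1) + 0.3

def gradient_descent(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def metropolis(x, temperature=0.5, step=0.3, steps=2000):
    for _ in range(steps):
        candidate = x + random.gauss(0, step)
        delta = loss(candidate) - loss(x)
        # Accept uphill moves with probability exp(-delta / T):
        # this is what lets the walker cross ridges between valleys.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
    return x

random.seed(0)
start = -1.0  # start in the worse valley
print("gradient descent:", round(gradient_descent(start), 3))  # stays near -1
print("metropolis:      ", round(metropolis(start), 3))        # often ends near +1
```

The tradeoff is the point: the walker's willingness to take loss-increasing steps is exactly what lets it reach the better valley, and exactly what makes its trajectory look incoherent in the meantime.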

2. p-e-w+Te 2026-02-03 02:05:09
>>Curiou+g4
> When someone is a certain amount smarter than you, distinguishing their plausible bullshit from their deep insights is really, really hard.

Insights are “deep” not on their own merit, but because they reveal something profound about reality. Such a revelation is either testable or not. If it’s testable, distinguishing it from bullshit is relatively easy, and if it’s not testable even in principle, a good heuristic is to put it in the bullshit category by default.

3. skydha+Zf 2026-02-03 02:13:01
>>p-e-w+Te
The issue is the revelation itself: it's always individual at some level, and don't forget our senses are crude. The best approach is to store "insights" as information until we've collected enough data to test them (hopefully without much bias). But that can take more than a lifetime's work, so sometimes you have to take insights at face value based on heuristics (parents, teachers, elders, authorities, ...).