zlacker

[return to "How does misalignment scale with model intelligence and task complexity?"]
1. Curiou+g4 2026-02-03 00:56:41
>>salkah+(OP)
This is a good line: "It found that smarter entities are subjectively judged to behave less coherently"

I think this is twofold:

1. Advanced intelligence requires the ability to traverse between domain valleys in the cognitive manifold. Be it via temperature or some fancy tunneling technique, that traversal is going to incur higher error (look less coherent) while crossing between valleys than naive gradient following down to the local minimum would. (Toy sketch below.)

2. It's hard to "punch up" when evaluating intelligence. When someone is a certain amount smarter than you, distinguishing their plausible bullshit from their deep insights is really, really hard.
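
To make point 1 concrete, here's a toy sketch (purely my own illustration, nothing from the article; the landscape and all the numbers are made up): plain gradient descent slides into the nearest shallow valley and stays there, while a temperature-driven Metropolis-style search has to accept visibly worse states in order to cross the barrier into the deeper valley.

    import math
    import random

    random.seed(0)

    def loss(x):
        # Double-well "landscape": shallow local minimum near x = -1,
        # deeper global minimum near x = +1, barrier in between near x = 0.
        return (x * x - 1) ** 2 - 0.3 * x

    def grad(x):
        return 4 * x * (x * x - 1) - 0.3

    # Naive gradient following: slides into the nearest (shallow) valley and stays.
    x = -1.2
    for _ in range(500):
        x -= 0.01 * grad(x)
    print(f"gradient descent: x = {x:.2f}, loss = {loss(x):.3f}")

    # Temperature-driven search (Metropolis acceptance with slow cooling):
    # it sometimes accepts *worse* states, so its trajectory passes through
    # high-loss territory while working its way toward the deeper valley.
    x, temperature = -1.2, 1.0
    worst_visited = loss(x)
    for _ in range(5000):
        proposal = x + random.gauss(0.0, 0.2)
        delta = loss(proposal) - loss(x)
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = proposal
            worst_visited = max(worst_visited, loss(x))
        temperature *= 0.999  # cool down so it eventually settles somewhere
    print(f"annealed search: x = {x:.2f}, loss = {loss(x):.3f}, "
          f"worst loss visited = {worst_visited:.3f}")

The "worst loss visited" number is the point: judged state by state, the search that can end up somewhere better looks worse along the way.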

2. boolea+0O 2026-02-03 07:12:52
>>Curiou+g4
> the ability to traverse between domain valleys in the cognitive manifold.

Couldn't you have just said "know about a lot of different fields"? Was your comment sarcastic or do you actually talk like that?

3. reveri+T91 2026-02-03 10:03:18
>>boolea+0O
I think they mean both "know about a lot of different fields" and also "be able to connect them together to draw inferences", the latter perhaps being tricky?

4. boolea+7B1 2026-02-03 13:21:24
>>reveri+T91
Maybe? They should speak more clearly regardless, so we don't have to speculate about it. The way you worded it is much more understandable.

5. pixl97+7C2 2026-02-03 18:00:05
>>boolea+7B1
There wasn't much room to speculate, really, but it does require some knowledge of problem spaces, topology, and things like minima and maxima.

6. reveri+jz4 2026-02-04 06:07:57
>>pixl97+7C2
"inaccessible" rather than "ambiguous" -- but to the uninitiated they are hard to tell apart.