zlacker

[return to "How does misalignment scale with model intelligence and task complexity?"]
1. smy200+m7[view] [source] 2026-02-03 01:16:07
>>salkah+(OP)
I think it's not that the AI is working toward "misaligned" goals. The user never specifies the goal clearly enough for the AI system to work from.

However, I think producing a detailed enough specification takes the same amount of work as writing the code, or even more. In practice we write a rough specification and clarify it during the process of coding. There's a minimum amount of effort required to produce that specification, and AI won't help you speed it up.

2. crabmu+p8[view] [source] 2026-02-03 01:23:41
>>smy200+m7
That makes me wonder about the "higher and higher-level language" escalator. When you're writing in assembly, is it more work to write the code than the spec? And is the reverse true once you can code up your system in Ruby? If so, does that imply anything about the "spec-driven" workflow people are using with AIs? Are we right on the cusp where writing natural-language specs and writing high-level code are comparably productive?
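To make the escalator concrete, here's a toy example of my own (not from the thread): for a task like "sum the even numbers in a list", the Ruby code is barely longer than the English spec, whereas an assembly version would take dozens of lines.

```ruby
# Spec, in natural language: "return the sum of the even numbers in a list"
# Ruby implementation -- about as long as the spec itself:
def sum_evens(nums)
  nums.select(&:even?).sum
end

# In assembly the same task needs explicit loop, test, and accumulate
# instructions, so the code is clearly more work than the spec.
# In Ruby the two are nearly the same length, which is the crossover
# the comment above is asking about.
```

At that crossover point, "spec-driven" development stops saving effort unless the spec can stay rougher than the code would have to be.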
3. charci+ra[view] [source] 2026-02-03 01:37:14
>>crabmu+p8
If you are on the same wavelength as someone, you don't need to produce a full spec. You can trust that the other person shares your vision and will pick reasonable ways to implement things. This is one reason why personalized AI agents are important.