1. energy+(OP) 2025-05-07 06:44:44
I'll hand it to you that only part of the problem is easily captured by automatic verification. It's not easy to design a good reward model for softer things like architectural choices, asking for feedback before starting a project, etc. The LLM will be trained to make the tests pass and to turn given inputs into the desired outputs, and it will do that better than any human, but that objective is going to be slightly misaligned with what we actually want.

So, it doesn't map cleanly onto previously solved problems, even though there's a decent amount of overlap. But I'd like to add a question to this discussion:

- Can we design clever reward models that punish bad architectural choices, executing on unclear intent, etc.? I'm sure there's scope beyond the naive "make code that maps input -> output", even if it requires heuristics or the like (rough sketch below).
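
For concreteness, here's a very rough sketch of the kind of heuristic layering I have in mind (toy Python; every name, weight, and threshold is made up, and a line-count check is obviously a crude stand-in for "bad architecture"). The bulk of the reward still comes from tests passing, and structural smells just subtract from it:

    import ast
    from dataclasses import dataclass

    @dataclass
    class RewardConfig:
        # All of these numbers are arbitrary; tuning them is the hard part.
        test_weight: float = 1.0
        long_function_penalty: float = 0.05
        max_function_lines: int = 50

    def heuristic_reward(code: str, tests_passed: int, tests_total: int,
                         cfg: RewardConfig = None) -> float:
        """Combine the test pass rate with crude structural heuristics."""
        if cfg is None:
            cfg = RewardConfig()
        reward = cfg.test_weight * (tests_passed / max(tests_total, 1))

        # Penalize overly long functions as a stand-in for architectural smells.
        tree = ast.parse(code)
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                length = (node.end_lineno or node.lineno) - node.lineno + 1
                if length > cfg.max_function_lines:
                    reward -= cfg.long_function_penalty

        return reward

Obviously this says nothing about executing on unclear intent, and a model will learn to game any fixed set of heuristics, but it suggests there's room for a richer signal than pure input -> output matching.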

replies(1): >>tomato+aR
2. tomato+aR 2025-05-07 14:28:32
>>energy+(OP)
the promo process :P no noise there!