zlacker

[parent] [thread] 1 comment
1. Joshua+(OP)[view] [source] 2025-05-06 17:43:29
The training loop asked the model to one-shot working code for the given problems without being able to iterate. If you had to write code that had to work on the first try, and where a partially correct answer was better than complete failure, I bet your code would look like that too.

In any case, it knows what good code looks like. You can say "take this code and remove spurious comments, and prefer narrow exception handling over catch-alls", and it'll do just fine. It wouldn't have done as well if your prompt had told it to write the code that way the first time: writing new code and editing existing code are different tasks.
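To make the catch-all vs. narrow distinction concrete, here's a hypothetical sketch (the function names and the JSON-config scenario are my own illustration, not from the thread):

```python
import json

# Typical generated style: the catch-all swallows every error,
# including genuine bugs inside the try block (e.g. a misspelled
# variable raising NameError would silently become "empty config").
def load_config_catchall(path):
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        return {}

# Narrower version: handle only the failures we actually expect
# (missing file, malformed JSON) and let everything else propagate.
def load_config_narrow(path):
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return {}
```

The narrow version is what you get when you ask the model to edit; the catch-all is what it tends to emit when asked to one-shot.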

replies(1): >>Neutra+zv
2. Neutra+zv[view] [source] 2025-05-06 21:11:53
>>Joshua+(OP)
It's only an example; there's plenty of irrelevant stuff that LLMs default to which is pretty bad Python. I'm not saying the output is always bad, but there's a ton of not-so-nice or subtly wrong code generated (file and path manipulation, for example).
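One hypothetical instance of the subtly wrong path handling being described (the scenario is my own illustration): stripping a file extension by splitting on the first dot, which mangles names containing multiple dots.

```python
from pathlib import Path

# Subtly wrong: splitting on the first "." truncates names that
# contain dots ("archive.tar.gz" becomes "archive", losing ".tar").
def stem_naive(filename):
    return filename.split(".")[0]

# Correct: Path.stem removes only the final suffix.
def stem(filename):
    return Path(filename).stem
```

Both versions pass a quick test on `"notes.txt"`, which is exactly why this class of bug slips through.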