zlacker

1. al_bor+(OP)[view] [source] 2026-01-27 00:19:03
> assuming enough training data

This is a big assumption. I write a lot of Ansible, and it can't even format the code properly, which is a pretty big deal in YAML. It's totally brain-dead.

replies(1): >>simonw+z8
2. simonw+z8[view] [source] 2026-01-27 01:25:06
>>al_bor+(OP)
Have you tried telling it to run a script to verify that the YAML is valid? I imagine it could do that with Python.
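Roughly what I mean, as a minimal sketch (assuming PyYAML is installed; the filename and messages are just examples):

    # validate_yaml.py - exit non-zero if the file fails to parse (assumes PyYAML)
    import sys
    import yaml

    try:
        with open(sys.argv[1]) as f:
            yaml.safe_load(f)
    except yaml.YAMLError as e:
        print(f"Invalid YAML: {e}")
        sys.exit(1)
    print("OK")

Then it can re-run the script after each edit until it exits 0.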
replies(1): >>al_bor+Kl
3. al_bor+Kl[view] [source] [discussion] 2026-01-27 03:19:31
>>simonw+z8
It gets it wrong 100% of the time. A script to validate would send it into an infinite loop of generating code and failing validation.
replies(1): >>simonw+pm
4. simonw+pm[view] [source] [discussion] 2026-01-27 03:24:40
>>al_bor+Kl
Are you sure about that?

I don't think I've ever seen Opus 4.5 or GPT-5.2 get stuck in a loop like that. They're both very good at spotting when something doesn't work and trying something else instead.

Might be a problem with older, weaker models, I guess.

replies(1): >>al_bor+2w
5. al_bor+2w[view] [source] [discussion] 2026-01-27 05:02:24
>>simonw+pm
I’m limited on the tools and models I can use due to privacy restrictions at work.