Ask one to make changes to such a codebase, and you will get whatever branch the dice told the tree to go down that day.
To paraphrase, “LLMs are like a box of chocolates…”.
And if you have the patience to try and coax the AI back on track, you probably could have just done the work faster yourself.
Has anyone come close to solving this? I keep seeing all of these "cluster of agents" designs that promise to solve all of our problems, but I can't help wondering how they work in the first place, given that they're not deterministic.