They have been wrong every time and will continue to be wrong.
Autoregressive LLMs still have some major issues, like over-dependence on the first few generated tokens and problems with commutative reasoning (relations that should hold equally in both directions, e.g. a model that learns "A is B" often failing to infer "B is A") due to their one-sided (causal) masked attention, but those issues are slowly getting fixed.
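For anyone unfamiliar with what "one-sided masked attention" means here, a minimal sketch in PyTorch (my own toy illustration, not anything from the comment above): each position can only attend to itself and earlier positions, never to later ones, which is what makes the model strictly left-to-right.

```python
import torch

# Toy sequence length and raw attention scores (random values just for illustration).
seq_len = 5
scores = torch.randn(seq_len, seq_len)

# Lower-triangular boolean mask: True means "attention allowed".
# Row i can only look at columns 0..i, i.e. the past and the current token.
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

# Block attention to future positions by setting their scores to -inf,
# so softmax assigns them zero weight.
scores = scores.masked_fill(~causal_mask, float("-inf"))
weights = torch.softmax(scores, dim=-1)  # each row sums to 1 over visible positions

print(causal_mask.int())
```

The point is simply that information never flows backwards: token 0 sees nothing but itself, which is one intuition for why the earliest generated tokens carry so much weight downstream.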
And at the end of the day they went nowhere, because (a) they will never be perfect for every use case and (b) they abstract you away from understanding the problem and the solution. So it will often be easier to just write the code from scratch.