this stuff works. it takes effort and learning. it’s not going to magically solve high-complexity tasks (or even low-complexity ones) without investment. having people use it, learn how it works, and improve the systems is the right approach
a lot of armchair engineers in here
And here we have many examples from the biggest bullshit pushers in the whole market of their state-of-the-art tool being hilariously useless in trivial cases. These PRs are about as simple as you can get short of a typo fix, and we're all watching it actively bullshit and flatly contradict itself over and over, exactly as anyone who's ever used LLMs would tell you happens all the time.
The supposedly magical, omnipotent tool that is AI apparently can't even write test scaffolding without a human telling it exactly what to do, yet we're supposed to be excited about this crap? If I saw a PR like this at work, I'd go straight to my manager to have whoever dared push this kind of garbage reprimanded on the spot, except not even interns are this incompetent and annoying to work with.
you’re taking an anecdote and blowing it out of proportion to fit your preformed opinion. yes, when you start cold with the tool and do literally no work, it makes bad PRs. yes, it’s early and experimental. that doesn’t mean it doesn’t work (I have plenty of anecdotes that it does!)
the truth lies somewhere in between, and the mob mentality of "it's magic" vs. "it's complete bullshit" doesn't help. I'd love to come to a thread like this and actually hear real experiences from smart people using these kinds of tools, but instead we get this bullshit