Now, I don't trust the output - I review everything, and it often goes wrong. You have to know how to use it. But I would never go back. Often it comes up with more elegant solutions than I would have. And when you're working with a new platform, or some unfamiliar library that it already knows, it's an absolute godsend.
I'm also damn proud of my own hand-crafted code, but to avoid LLMs out of principle? That's just being a Luddite.
20+ years of experience across game dev, mobile and web apps, in case you feel it relevant.
Getting to sit down and write the code is the most enjoyable part of the job, why would I deprive myself of that? By the time the problem has been defined well enough to explain it to an LLM, sitting down and writing the code is typically very simple.
Don't get me started on testcase generation.
And yet the time it takes me to use the LLM and correct its output is usually faster than not using it at all.
Over time I've developed a good sense for what tasks it succeeds at (or is only trivially wrong) and what tasks it's just not up for.
Code review is difficult to get right, especially if the goal is judging correctness. Maybe this is a personal failing, but I find being actively engaged to be a critical part of the process; the more time I spend with the code I'm maintaining (and usually on call for!) the better understanding I have. Tedium can sometimes be a great signal for an abstraction!
That's what I've found frustrating about the narrative around these tools: I've watched them from afar with intrigue but ultimately found that method of working just isn't for me. Over the years I've trialed more tools than I can remember and adopted the ones I found useful, while casting aside the ones that aren't a great fit. Sometimes I find myself wandering back to them once they're fully baked. Maybe that will be the case here, but is it not valid to say "eh...this isn't it for me"? Am I kidding myself?
The folly of single-ended metrics.
> but to avoid LLMs out of principle? That's just being a Luddite.
Do you double check that the LLM hasn't magically recreated someone else's copyrighted code? That's just irresponsible in certain contexts.
> in case you feel it relevant.
Of course it's relevant. If a 19-year-old with 1 year of driving experience tries to sell me a car using their personal anecdote as a metric, I'd be suspicious. If their only salient point is that "it gets me where I'm going faster!" I'd be doubly suspicious.
I frankly do not care, and I expect LLMs to become such ubiquitous table-stakes that I don't think anyone will really care in the long run.
I can imagine that an LLM is really helpful in some cases for some people. But so far, I haven't found a single example where I, with simple copy-pasting, wouldn't have been faster. Not when I tried it myself, and not when others showed me how to use it.
Unless they develop entirely new technology, they're stuck with linear growth of output capability for input costs. This will take a very long time. I expect it to be abandoned in favor of better ideas and computing interfaces. "AI" always seems to bloom right before a major shift in computing device capability and mobility and then gets left behind. I don't see anything special about this iteration.
> that I don't think anyone will really care in the long run.
There are trillions of dollars at stake, and access to even the basics of this technology is far from egalitarian or well distributed. Until it is, I would expect people whose futures and personal wealth depend on it to care quite a bit. In the meantime you might just accelerate yourself into a lawsuit.
Like how McDonalds makes a lot of burgers fast and they are very successful so that's all we really care about?
If you merge a ball of generated crap into `main`, I don't so much have to wonder if you would have done a better job by hand.
That's where I'm confused. I've been coding for more than 20 years, and every task I ever did was different from the other ones. What kind of task do you do a million times before realizing that you should script it in bash or Python?
Watch out, you’re giving your game away.
My job is about enabling analysis that was previously done ad hoc and informally. If I’m harming people then that’s something I have to take responsibility for, but it’s also caused not by my direct contribution but by the larger system that I’m working within.
I expressly don’t want to automate away work when that will just result in more profit for private owners and less income for regular working people.[1] And I also don’t want to automate work if that means shifting drudgery onto some worker to fill in that freed-up time.
And how does this contradict what “we” are doing and stand for!? We criticize technology on this board all the time!
But it’s nice to have the priorities of such a prominent member on the record.
[1] But I DO want to automate work in the hypothetical society where we all own the automation and thus the only thing we are deprived of is drudgery.