I'll just keep chugging along with Debian, Python, and Vim, as I always have. No LLM, no LSP, heck, not even autocompletion. But I'm damn proud of every hand-crafted, easy-to-maintain, and fully understood line of code I write.
In Python I was scanning thousands of files, each for thousands of keywords. A naive implementation took around 10 seconds, which instrumentation showed was by far the largest share of execution time. A quick ChatGPT session led me to Aho-Corasick and string-searching algorithms, which I had never used before. Plug in a library and bam, a 30x speedup for that part of the code (a sketch of the approach below).
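For the curious, here's a minimal sketch of that kind of multi-keyword scan, using the pyahocorasick library (my assumption for illustration; the comment doesn't name the library actually used). The automaton is built once up front, and each file is then scanned in a single pass no matter how many keywords there are:

```python
import ahocorasick  # pip install pyahocorasick

keywords = ["foo", "bar", "baz"]  # imagine thousands of these

automaton = ahocorasick.Automaton()
for idx, kw in enumerate(keywords):
    automaton.add_word(kw, (idx, kw))
automaton.make_automaton()  # build the failure links once, up front

def scan(text):
    # A single pass over the text finds every keyword occurrence,
    # instead of one full pass per keyword as in the naive approach.
    return [(end, kw) for end, (idx, kw) in automaton.iter(text)]

print(scan("foobarbaz"))  # [(2, 'foo'), (5, 'bar'), (8, 'baz')]
```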
I could have asked my knowledgeable friends and coworkers, but not at 11PM on a Saturday.
I could have searched the web and probably found it.
But the LLM basically auto-completed the web, which I appreciate.
Get friends with weirder daily schedules. :-)
Now, I don't trust the output - I review everything, and it often goes wrong. You have to know how to use it. But I would never go back. Often it comes up with more elegant solutions than I would have. And when you're working with a new platform, or some unfamiliar library that it already knows, it's an absolute godsend.
I'm also damn proud of my own hand-crafted code, but to avoid LLMs out of principle? That's just Luddite.
20+ years of experience across game dev, mobile and web apps, in case you feel it relevant.
Because I can ship 2x to 5x more code with nearly the same quality.
My employer isn't paying me to be a craftsman. They're paying me to ship things that make them money.
Why are you cheapening the product, butchering the process and decimating any hope for further skill development by using these tools?
Instead of python, you should be using assembly or heck, just binary. Instead of relying on an OS abstraction layer made by someone else, you should write everything from scratch on the bare metal. Don't lower yourself by using a text editor, go hex. Then your code will truly be "hand crafted". You'll have even more reason to be proud.
Don’t get too hung up on what works for other people. That’s not a good look.
Getting to sit down and write the code is the most enjoyable part of the job; why would I deprive myself of that? By the time the problem has been defined well enough to explain it to an LLM, sitting down and writing the code is typically very simple.
I’m a self-respecting software developer with 28 years of experience. I would, with some caveats, venture to say I am an expert in the trade.
AI helps me write good code somewhere between 3x and 10x faster.
This whole-cloth shallow dismissal of everything AI as worthless overhyped slop is just as tired and content-free as breathless claims of the limitless power or universal applicability of AI.
Anyways, Cursor generates all my code now.
Don't get me started on testcase generation.
And yet the time it takes me to use the LLM and correct its output is usually faster than not using it at all.
Over time I've developed a good sense for what tasks it succeeds at (or is only trivially wrong) and what tasks it's just not up for.
I've had a long-term code project that I've really struggled with, for various reasons. Instead of using my normal approach, which would be to lay out what I think the code should do, and how it should work, I just explained the problem and let the LLM worry about the code.
It got really far. I'm still impressed. Claude worked great, but ran out of free tokens or whatever, and refused to continue (fine, it was the freebie version and you get what you pay for). I picked it up again in Cursor and it got further. One of my conditions for this experiment was to never look at the code, just the output, and only talk to the LLM about what I wanted, not about how I wanted it done. This seemed to work better.
I'm hitting different problems, now, for sure. Getting it to test everything was tricky, and I'm still not convinced it's not just fixing the test instead of the code every time there's a test failure. Peeking at the code, there are several remnants of previous architectural models littering the codebase. Whole directories of unused, uncalled, code that got left behind. I would not ship this as it is.
But... it works, kinda. It's fast: I got a working demo of something 80% of the way to what I wanted in 1/10 of the time it would have taken me to make it manually. And just focusing on the result meant that I didn't go down all the rabbit holes of how to structure the code or which paradigm to use.
I'm hooked now. I want to get better at using this tool, and see the failures as my failures in prompting rather than the LLM's failure to do what I want.
I still don't know how much work would be involved in turning the code into something I could actually ship. Maybe there's a second phase which looks more like conventional development cleaning it all up. I don't know yet. I'll keep experimenting :)
Code review is difficult to get right, especially if the goal is judging correctness. Maybe this is a personal failing, but I find being actively engaged to be a critical part of the process; the more time I spend with the code I'm maintaining (and usually on call for!) the better understanding I have. Tedium can sometimes be a great signal for an abstraction!
Here's what I've found frustrating about the narrative around these tools: I've watched them from afar with intrigue but ultimately found that method of working just isn't for me. Over the years I've trialed more tools than I can remember and adopted the ones I found useful, while casting aside the ones that aren't a great fit. Sometimes I find myself wandering back to them once they're fully baked. Maybe that will be the case here, but is it not valid to say "eh... this isn't it for me"? Am I kidding myself?
Once I had to look up a research paper to implement a computational geometry algorithm because I couldn't find it in any of the typical web sources. There was also no library with a license that allowed our commercial use.
I'm not against use of "AI". But this increasing refusal of those who aspire to work in specialist domains like software development to systematically learn things is not great. That's just compounding on an already diminished capacity to process information skillfully.
The folly of single-ended metrics.
> but to avoid LLMs out of principle? That's just Luddite.
Do you double check that the LLM hasn't magically recreated someone else's copyrighted code? That's just irresponsible in certain contexts.
> in case you feel it relevant.
Of course it's relevant. If a 19 year old with 1 year of driving experience tries to sell me a car using their personal anecdote as a metric I'd be suspicious. If their only salient point is that "it gets me to where I'm going faster!" I'd be doubly suspicious.
I don't need to "hand write" every line and character in my code, and guess what, it's still easy to understand and maintain, because it's what I would have written anyway. That, or you're just bikeshedding minor syntax.
Like, if you want to be proud of a "hand-built" house made with hammer and nails, be my guest, but don't conflate hand-built with always being well built.
I frankly do not care, and I expect LLMs to become such ubiquitous table-stakes that I don't think anyone will really care in the long run.
Many developers use libraries effectively without knowing every place where O(n) considerations come into play.
Competently implemented, in the right context, LLMs can be an effective form of abstraction.
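To make the abstraction point concrete, here's a toy Python example (mine, not the commenter's): the same `in` operator hides very different costs depending on the container behind it, and most of us lean on that abstraction without thinking about it.

```python
import timeit

items_list = list(range(100_000))  # `in` scans linearly: O(n)
items_set = set(items_list)        # `in` hashes the key: O(1) on average

# Same-looking membership test, wildly different running time.
print(timeit.timeit(lambda: 99_999 in items_list, number=1_000))
print(timeit.timeit(lambda: 99_999 in items_set, number=1_000))
```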
Seriously, comments like yours assume that all the rest of us, who DO make extensive use of these AI tools and have also been around the block for a while, are idiots.
Sir, you have just passed the vibe coding exam. Your Certified Vibe Coder printout is in the making, but the AI is having difficulty finding a printer. /s
Either way, LLMs are actually high up the quality spectrum, as they generate a very consistent style of code for everyone. That gives the output a uniformity which is good when other developers have to read and troubleshoot the code.
I can imagine that an LLM is really helpful in some cases for some people. But so far I haven't found a single example where I, with simple copy-pasting, wouldn't have been faster: not when I tried it myself, and not when others showed me how to use it.
This definition limits the number of problems you can solve this way. It basically means a buildup of technical debt: good enough for throwaway code, unacceptable as a long-term strategy (a growth killer for scale-ups).
>Either way, LLMs are actually high up the quality spectrum
That's not what I've seen; it's certainly not great. But that may depend on the stack.
I think if you tried to start people off on the kinds of things we started off on in the 80's, you'd never get past the first lesson. It's all so much more complex that any student would (rightly!) give up before getting anywhere.
Unless they develop entirely new technology, they're stuck with linear growth of output capability for input cost. This will take a very long time. I expect it to be abandoned in favor of better ideas and computing interfaces. "AI" always seems to bloom right before a major shift in computing device capability and mobility, and then gets left behind. I don't see anything special about this iteration.
> that I don't think anyone will really care in the long run.
There are trillions of dollars at stake, and access to even the basics of this technology is far from egalitarian or well distributed. Until it is, I would expect people whose futures and personal wealth depend on it to care quite a bit. In the meanwhile you might just accelerate yourself into a lawsuit.
Like how McDonalds makes a lot of burgers fast and they are very successful so that's all we really care about?
By the time the AI is actually writing code, I've already had it do a robust architecture evaluation and review, which it documents in a development plan. I review that development plan just like I'd review another engineer's dev plan. It's pretty hard for it to write objectively bad code after that step.
Also, my day to day work is in an existing code base. Nearly every feature I build has existing patterns or reference code. LLMs do extremely well when you tell them "Build X feature. [some class] provides a similar implementation. Review that before starting." If I think something needs to be DRY'd up or refactored, I ask it to do that.
I've found LLMs tend to struggle getting a codebase from 0 to 1. They tend to swap between major approaches somewhat arbitrarily.
In an existing code base, it's very easy to ground them in examples and pattern matching.
If you merge a ball of generated crap into `main`, I don't so much have to wonder if you would have done a better job by hand.
> What would happen if the "AI" and web search didn't return anything? Would you have stuck with your implementation?
I was fairly certain there must exist some type of algorithm exactly for this purpose. I would have been flabbergasted if I couldn't find something on the web. But if that failed, I would have asked friends and cracked open the algorithms textbooks.
> I'm not against use of "AI". But this increasing refusal of those who aspire to work in specialist domains like software development to systematically learn things is not great. That's just compounding on an already diminished capacity to process information skillfully.
I understand what you mean, and agree with you. I can also assure you that that is not how I use it.
I think of LLMs as an autocomplete of the web plus hallucinations. Sometimes it’s faster to use the LLM initially rather than scour through a bunch of sites first.
Just read the docs and assume the library works as promised.
To clarify, the LLM did not tell me about the specific library I used. I found it the old fashioned way.
That's where I'm confused. I've been coding for more than 20 years, and every task I ever did was different from the other ones. What kind of task do you do a million times before realizing that you should script it in bash or Python?
Watch out, you’re giving your game away.
My job is about enabling analysis that was previously done ad hoc and informally. If I’m harming people then that’s something I have to take responsibility for, but it’s also caused not by my direct contribution but by the larger system that I’m working within.
I expressly don't want to automate away work when that will just result in more profit for private owners and less income for regular working people.[1] And I also don't want to automate work if that means shifting drudgery onto some worker to fill in that freed-up time.
And how does this contradict what “we” are doing and stand for!? We criticize technology on this board all the time!
But it’s nice to have the priorities of such a prominent member on the record.
[1] But I DO want to automate work in the hypothetical society where we all own the automation and thus the only thing we are deprived of is drudgery.