That way my 'thinker' is satiated and also challenged - Did the solution that my thinker came up with solve the problem better than the plan that the agent wrote?
Then either I acknowledge that the agent's solution was better, giving my thinker something to chew on for the next time; or my solution is better which gives the thinker a dopamine hit and gives me better code.
Why solve a problem when you can import a library / scale up / use managed Kubernetes / etc.
The menu is great and the number of problems needing deep thought seems rare.
There might be deep thought problems on the requirements side of things but less often on the technical side.
In fact, since I don't need to do low-thinking tasks like writing boilerplate or repetitive tests, I find my thinking ratio is actually higher than when I write code normally.
The author says “Even though the AI almost certainly won't come up with a 100% satisfying solution, the 70% solution it achieves usually hits the ‘good enough’ mark.”
The key is to keep pushing until it gets to the 100% mark. That last 30% takes multiples longer than the first 70%, but that is where the satisfaction lies for me.
I'm more spent than before, when I would spend 2 hours wrestling with Tailwind classes, or testing API endpoints manually by typing JSON shapes myself.
At that point an idea popped into my mind and I decided to look for similar patterns in the codebase related to the change. I found three: one was a non-bug, two were latent bugs.
Shipped a fix, plus two fixes for bugs yet to be discovered.
This I can’t relate to. For me it’s “the better I build, the better”. Building poor code fast isn’t good: it’s just creating debt to deal with in the future, or admitting I’ll toss out the quickly built thing since it won’t have longevity. When quality comes into play (not just “passed the tests”, but is something maintainable, extensible, etc), it’s hard to not employ the Thinker side along with the Builder. They aren’t necessarily mutually exclusive.
Then again, I work on things that are expected to last quite a while and aren’t disposable MVPs or side projects. I suppose if you don’t have that longevity mindset it’s easy to slip into Build-not-Think mode.
That said, architectural problems have also been less difficult, for the simple fact that research and prototyping have become faster and cheaper.
Seven months later, after waffling on it on and off, with and without AI, I finally cracked it.
The author is not wrong though; I don't hit this as often since AI. I do miss the feeling though.
I find the best uses, at least for myself, are smaller parts of my workflow where I'm not going to learn anything from doing it:
- Build one to throw away: give me a quick prototype to get stakeholder feedback
- Straightforward helper functions: I have the design and parameters planned, just need an implementation that I can review
- Tab-completion code-gen
- Leads for looking into something (libraries, tools) when Googling isn't cutting it
https://mastodon.ar.al/@aral/114160190826192080
"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.
When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."
I tried this with physics and philosophy. I think I want to do a mix of hard but meaningful. For academic fields like that, it's impossible for a regular person to do as a hobby. Might as well just do puzzles or something.
Just don't use it. That's always an option. Perhaps your builder doesn't actually benefit from an unlimited runway detached from the cost of effort.
When I'm just programming, I spend a lot more time working through a single idea, or a single function. It's much less tiring.
The point they are making is that using AI tools makes it a lot harder for them to keep up the discipline to think hard.
This may or may not be true for everyone.
My observation: I've always had that "sound." I don't know or care much about what that implies. I will admit I'm now deliberately avoiding em dashes, whereas I was once an enthusiastic user of them.
If you're looking for a domain where the 70% AI solution is a total failure, that's the field. You can't rely on vibe coding because the underlying math, like Learning With Errors (LWE) or supersingular isogeny graphs, is conceptually dense and hasn't been commoditized into AI training data yet. It requires that same 'several-day-soak' thinking you loved in physics, specifically because we're trying to build systems that remain secure even against an adversary with a quantum computer. It’s one of the few areas left where the Thinker isn't just a luxury, but a hard requirement for the Builder to even begin.
Please read up on his life. Mainlander is the most extreme/radical Philosophical Pessimist of them all. He wrote a whole book about how you should rationally kill yourself and then he killed himself shortly after.
https://en.wikipedia.org/wiki/Philipp_Mainl%C3%A4nder
https://dokumen.pub/the-philosophy-of-redemption-die-philoso...
Max Stirner and Mainlander would have been friends and are kindred spirits philosophically.
https://en.wikipedia.org/wiki/Bibliography_of_philosophical_...
> By “thinking hard,” I mean encountering a specific, difficult problem and spending multiple days just sitting with it to overcome it.
The "thinking hard" I do with an LLM is more like management thinking. Its chaotic and full of conversations and context switches. Its tiring, sure. But I'm not spending multiple days contemplating a single idea.
The "thinking hard" I do over multiple days with a single problem is more like that of a scientist / mathematician. I find myself still thinking about my problem while I'm lying in bed that night. I'm contemplating it in the shower. I have little breakthroughs and setbacks, until I eventually crack it or give up.
It's different.
And also, I haven't started using AI for writing code yet. I'm shuffling toward that, with much trepidation. I ask it lots of coding questions. I make it teach me stuff. Which brings me to the point of my post:
The other day, I was looking at some Rust code and trying to work out the ownership rules. In theory, I more or less understand them. In practice, not so much. So I had Claude start quizzing me. Claude was a pretty brutal teacher -- he'd ask 4 or 5 questions, most of them solvable from what I knew already, and then 1 or 2 that introduced a new concept that I hadn't seen. I would get that one wrong and ask for another quiz. Same thing: 4 or 5 questions, using what I knew plus the thing just introduced, plus 1 or 2 with a new wrinkle.
I don't think I got 100% on any of the quizzes. Maybe the last one; I should dig up that chat and see. But I learned a ton, and had to think really hard.
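For flavor, here is a made-up example (my own, not one of Claude's actual questions) of the kind of borrow-checker detail those quizzes poked at: say whether this compiles, and why.

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];      // immutable borrow of v starts here
    // v.push(4);           // would NOT compile here: can't mutate v while `first` is still used below
    println!("{}", first);  // last use of the borrow, so it ends here (non-lexical lifetimes)
    v.push(4);              // fine now: no outstanding borrows
}
```

(It does compile: the borrow ends at its last use, so the later push is allowed.)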
Somehow, I doubt this technique will be popular. But my experience with it was very good. I recommend it. (It does make me a little nervous that whenever I work with Claude on things that I'm more familiar with, he's always a little off base on some part of it. Since this was stuff I didn't know, he could have been feeding me slop. But I don't think so; the explanations made sense and the compiler agreed, so it'd be tough to get anything completely wrong. And I was thinking through all of it; usually the bullshit slips in stealthily in the parts that don't seem to matter, but I had to work through everything.)
This may be an unfair generalization, and apologies to those who don't feel this way, but I believe STEM types like the OP are used to problem solving that's linear in the sense that the problem only exists in its field as something to be solved, and once they figure it out, they're done. The OP even described his mentality as that of a "Thinker" where he received a problem during his schooling, mulled over it for a long time, and eventually came to the answer. That's it, next problem to crack. Their whole lives revolve around this process and most have never considered anything outside it.
Even now, despite my own healthy skepticism of and distaste for AI, I am forced to respect that AI can do some things very fast. People like the OP, used to chiseling away at a problem for days, weeks, months, etc., now have that throughput time slashed. They're used to the notion of thinking long and hard about a very specific problem and finally having some output; now, code modules that are "good enough" can be cooked up in a few minutes, and if the module works the problem is solved and they need to find the next problem.
I think this is more common than most people want to admit, going back to grumblings of "gluing libraries together" being unsatisfying. The only suggestion I have for the OP is to expand what you think about. There are other comments in this thread supporting it but I think a sea change that AI is starting to bring for software folks is that we get to put more time towards enhancing module design, user experience, resolving tech debt, and so on. People being the ones writing code is still very important.
I think there's more to talk about where I do share the OP's yearning and fears (i.e., people who weren't voracious readers or English/literary majors being oneshot by the devil that is AI summaries, AI-assisted reading, etc.) but that's another story for another time.
And then also there’s all the non-systems stuff - what is actually feasible, what’s most valuable etc. Less “fun”, but still lots of potential for thinking.
I guess my main point is there is still lots to think about even post-LLM, but the real challenge is making it as “fun” or as easily useful as it was pre-LLM.
I think local code architecture was an easy domain for "optimality" that is actually tractable, with the joy that comes with it; LLMs are harmful to that, but I don't think there's nothing to replace it with.
Except without the reward of an intellectual high afterwards.
I think just as hard, I type less. I specify precisely and I review.
If anything, all we've changed is working at a higher level. The product is the same.
But these people just keep mixing things up like "wow I got a ferrari now, watch it fly off the road!"
Yeah so you got a tools upgrade; it's faster, it's more powerful. Keep it on the road or give up driving!
We went from auto completing keywords, to auto completing symbols, to auto completing statements, to auto completing paragraphs, to auto completing entire features.
Because it happened so fast, people feel the need to rename programming every week. We're either vibe coders now, or agentic coders, or ... or just programmers, hey. You know why? I write in C, I get machine code, I didn't write the machine code! It was all an abstraction!
Oh but it's not the same you say, it changes every time you ask. Yes, for now, it's still wonky and janky in places. It's just a stepping stone.
Just chill, it's programming. The tools just got even better.
You can still jump on a camel and cross the desert in 3 days. Have at it, you risk dying, but enjoy. Or you can just rent a helicopter and fly over the damn thing in a few hours. Your choice. Don't let people tell you it isn't travelling.
We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger fewer typos today than ever before.
There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back, is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.
Thinking is tiring and life is complicated; the tool makes it easy to slip into bad habits, and bad habits are hard to break even when you recognise it's a bad habit.
Many people are too busy/lazy/self-unaware to evaluate their behaviour and recognise a bad habit.
These people are miserable to work with if you need things done quickly and can tolerate even slight imperfection.
That operating regime is, incidentally, 95% of the work we actually get paid to do.
Most of the examples of "thinking hard" in the comments sound like thinking about a lot of stuff superficially instead of about one particular problem deeply, which is what the OP is referring to.
Reads the SQLite db and shit. So burn your tokens on that.
I found that doing more physical projects helped me. Large woodworking, home improvement, projects. Built-in bookshelves, a huge butcher block bar top (with 24+ hours of mindlessly sanding), rolling workbenches, and lots of cabinets. Learning and trying to master a new skill, using new design software, filling the garage with tools...
I don't think AI has affected my thinking much, but that's because I probably don't know how to use it well. Whenever AI writes a lot of code, I end up having to understand if not change most of it; either because I don't trust the AI, I have to change the specification (and either it's a small change or I don't trust the AI to rewrite), the code has a leaky abstraction, the specification was wrong, the code has a bug, the code looks like it has a bug (but the problem ends up somewhere else), I'm looking for a bug, etc. Although more and more often the AI saves time and thinking vs. if I wrote the implementation myself, it doesn't prevent me from having to think about the code at all and treating it like a black box, due to the above.
I use AI for the easy stuff.
I find myself being able to reach for the things that my normal pragmatist code monkey self would consider out of scope - these are often not user facing things at all but things that absolutely improve code maintenance, scalability, testing/testability, or reduce side effects.
I have to think more rigorously. I have to find ways to tie up loose ends, to verify the result efficiently, to create efficient feedback loops and define categorical success criteria.
I've thought harder about problems this last year than I have in a long time.
A few years before this wave of AI hit, I got promoted into a tech lead/architect role. All of my mental growth since then has been learning to navigate office politics and getting the 10k ft view way more often.
I was already telling myself "I miss thinking hard" years before this promotion. When I build stuff now, I do it with a much clearer purpose. I have sincerely tried the new tools, but I'm back to just using google search if anything at all.
All I did was prove to myself the bottleneck was never writing code, but deciding why I'm doing anything at all. If you want to think so hard you stay awake at night, try existential dread. It's an important developmental milestone you'd have been forced to confront anyway even 1000 years ago.
My point is, you might want to reconsider how much you blame AI.
Just look at image generation. Actually, factually look at it. We went from horror-colour vomit with eyes all over, to six-fingered humans, to pretty darn good now.
It's only time.
As I'm providing context I get to think about what an ideal approach would look like and often dive into a research session to analyze pros and cons of various solutions.
I don't use agents much because it's important to see how a component I just designed fits into the larger codebase. That experience provides insights on what improvements I need to make and what to build next.
The time I've spent thinking about the composability, cohesiveness, and ergonomics of the code itself has really paid off. The codebase is a joy to work in, easy to maintain and extend.
The LLMs have helped me focus my cognitive bandwidth on the quality and architecture instead of the tedious and time consuming parts.
So...where's your OS and SCM?
I get your point that wetware still matters, but I think it's a bit much to contend that more than a handful of people (or everyone) are on the level of Linus Torvalds now that we have LLMs.
... OK I guess. I mean, sorry, but if it's a revelation to you that by using a skill less you hone it less, you were clearly NOT thinking hard BEFORE you started using AI. It sure didn't help, but the problem didn't start then.
It's hard to rationalise this as billable time, but they pay for outcomes even if they act like they pay for 9-5, and so if I'm thinking about why I like a particular abstraction, or seeing analogies to another problem, or beginning to construct dialogues with mysel(ves|f) about this, and it happens I'm scrubbing my back (or worse), I kind of "go with the flow" so to speak.
Definitely thinking about the problem can be a lot better than actually having to produce it.
I didn't imply most of us can do half the things he's done. That's not right.
These are also tasks the AI can succeed at rather trivially.
Better completions are not as sexy, but amid all the pretending that agents are great engineers, they're an amazing feature that often gets glossed over.
Another example is automatic test generation or early correctness warnings. If the AI can suggest a basic test and I can add it with the push of a button - great. The length (and thus complexity) of tests can be configured conservatively relative to the AI of the day. Warnings can just be flags in the editors spotting obvious mistakes. Off-by-one errors for example, which might go unnoticed for a while, would be an achievable and valuable notice.
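As a hypothetical illustration (the helper and the test below are invented, not from any real tool), this is the kind of short, conservative test an assistant could propose that would catch an off-by-one at the boundaries:

```rust
/// Hypothetical helper that's easy to get wrong by one: index of the last element.
fn last_index(len: usize) -> Option<usize> {
    if len == 0 { None } else { Some(len - 1) }
}

#[test]
fn last_index_boundaries() {
    assert_eq!(last_index(0), None);    // empty input: no valid index
    assert_eq!(last_index(1), Some(0)); // single element
    assert_eq!(last_index(5), Some(4)); // five elements end at index 4, not 5
}
```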
Also, automatic debugging and feeding the raw debugger log into an AI to parse seems promising, but I've done little of it.
...And go from there - if a well-crafted codebase and an advanced model using it as context can generate short functions well, then by all means - scale that up with discretion.
These problems around the AI coding tools are not at all special - it's a classic case of taking the new tool too far too fast.
I just changed employers recently in part due to this: dealing with someone who appears to now spend his time coercing LLMs to give the answers he wants, and becoming deaf to any contradictions. LLMs are very effective at amplifying the Reality Distortion Field for those who live in them. LLMs are replacing blog posts for this purpose.
We miss thinking "hard" about the small details. Maybe "hard" isn't the right adjective, but we all know the process of coding isn't just typing stuff while the mind wanders. We keep thinking about the code we're typing and the interactions between the new code and the existing stuff, and keep thinking about potential bugs and issues. (This may or may not be "hard".)
And this kind of thinking is totally different from what Linus Torvalds has to think about when reviewing a huge patch from a fellow maintainer. Linus' work is probably "harder", but it's a different kind of thinking.
You're totally right it's just tools improving. When compilers improved most people were happy, but some people who loved hand crafting asm kept doing it as a hobby. But in 99+% cases hand crafting asm is a detriment to the project even if it's fun, so if you love writing asm yourself you're either out of work, or you grudgingly accept that you might have to write Java to get paid. I think there's a place for lamenting this kind of situation.
But even then...don't you think his insight into and ability to verify a PR far exceeds that of most devs (LLM or not)? Most of us cannot (reasonably) aspire to be like him.
You now have a bicycle which gets you there in a third of the time
You need to find destinations that are 3x as far away as before
Agentic coding in general only amplifies your ability (or disability).
You can totally learn how to build an OS and invest 5 years of your life doing so. The first version of Linux, I'm sure, was pretty shoddy. Same for an SCM.
I've been doing this for 30 years. At some point, your limit becomes how much time you're willing to invest in something.
Why blame these tools if you can stop using them, and they won't have any effect on you?
In my case, my problem was often overthinking before starting to build anything. Vibe coding rescued me from that cycle. Just a few days ago, I used openclaw to build and launch a complete product via a Telegram chat. Now, I can act immediately rather than just recording an idea and potentially getting to it "someday later"
To me, that's evolutionary. I am truly grateful for the advancement of AI technology and this new era. Ultimately, it is a tool you can choose to use or not, rather than something that prevents you from thinking more.
1. Take a pen and paper.
2. Write down what we know.
3. Write down where we want to go.
4. Write down our methods of moving forward.
5. Make changes to 2, using 4, and see if we are getting closer to 3. And course correct based on that.
I still do it a lot. LLMs act as an assist, not as a wholesale replacement.
What?
That's not an upside unique to LLM-written versus human-written code. When writing it yourself, you also need to make it crystal clear; you just do that in the language of implementation.
But that approach doesn't work with code, or with reasoning in general, because you would need to exponentially fine tune everything in the universe. The illusion that the AI "understands" what it is doing is lost.
People felt (wrongly) that traditional representational forms like portraiture were threatened by photography. Happily, instead of killing any existing genres, we got some interesting new ones.
Can't speak to firmware code or complex cryptography, but my hunch is that if it's in its training dataset and you know enough to guide it, it's generally pretty useful.
If you use LLMs at very high temperature with samplers which correctly keep your writing coherent (e.g. min_p, or better ones like top-h or P-less decoding), then "regression to the mean" literally DOES NOT HAPPEN!!!!
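For reference, min_p keeps only the tokens whose probability is at least some fraction of the top token's probability, so cranking the temperature can't drag wildly unlikely tokens back into the pool. A minimal sketch of that filter (a hypothetical helper of my own, not any particular library's API):

```rust
/// Sketch of min_p filtering over an already-softmaxed distribution:
/// keep token indices whose probability is >= min_p * (top probability).
fn min_p_candidates(probs: &[f32], min_p: f32) -> Vec<usize> {
    let p_max = probs.iter().cloned().fold(0.0_f32, f32::max);
    let threshold = min_p * p_max;
    probs
        .iter()
        .enumerate()
        .filter(|&(_, &p)| p >= threshold)
        .map(|(i, _)| i)
        .collect()
}

fn main() {
    // Hypothetical distribution: with min_p = 0.1, anything below 10% of the
    // leader's probability is dropped before (high-temperature) sampling.
    let probs = [0.40, 0.30, 0.20, 0.07, 0.03];
    println!("{:?}", min_p_candidates(&probs, 0.1)); // prints [0, 1, 2, 3]
}
```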
Seen a lot of DIY vibe coded solutions on this site and they are just waiting for a security disaster. Moltbook being a notable example.
That was just the beginning.
This is a non sequitur. Cameras have not replaced paintings, assuming this is the inference. Instead, they serve only to be an additional medium for the same concerns quoted:
> The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning.
Just as this is applicable to refining a software solution captured in code, and just as a painter discards unsatisfactory paintings and tries again, so too is it when people say, "that picture didn't come out the way I like, let's take another one."
It's as if I woke up in a world where half of the restaurants worldwide started changing their name to McDonalds and gaslighting all their customers into thinking McDonalds is better than their "from scratch" menu.
Just don't use these agentic tools; they legitimately are weapons whose target is your brain. You can ship just as fast with autocomplete and decent workflows, and you know it.
It's weird, I don't understand why any self-respecting dev would support these companies. They are openly hostile about their plans for the software industry (and many other verticals).
I see it as a weapon being used by a sect of the ruling class to diminish the value of labor. While I'm not confident they'll be successful, I'm very disappointed in my peers who are cheering them on in that mission. My peers are obviously being tricked by promises of being able to join that class, but that's not what's going to happen.
You're going to lose that thinking muscle, and therefore the value of your labor is going to be directly correlated to the quantity and quality of tokens you can afford (or be given, loaned!?)
Be wary!!!
You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.
Guess what, they got over it. You will too.
It's like saying I miss running. Get out and run then.
Same problem with image generation (lack of support for different SDE solvers, the image version of LLM sampling) but they have different "coomer" tools, i.e. ComfyUI or Automatic1111
Presumably humanity still has room to grow and not everything is already in the training set.
Prediction is difficult, especially of the future.
> You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.
> Guess what, they got over it.
You conveniently omitted my next sentence, which contradicts your position and reads thusly:
> Instead, they serve only to be an additional medium for the same concerns quoted ...
> You will too.
This statement is assumptive and gratuitous.
Shop bread and tomatoes, though, can be manufactured without any thought of who makes them, and they can be reliably manufactured without someone guiding an LLM, which is perhaps where the analogy falls down; we always want them to be the same, but software is different in every form.
As an industry, we've been preaching the benefits of running lots of small experiments to see what works vs what doesn't, try out different approaches to implementing features, and so on. Pre-AI, lots of these ideas never got implemented because they'd take too much time for no definitive benefit.
You might spend hours thinking up cool/interesting ideas, but not have the time available to try them out.
Now, I can quickly kick off a coding agent to try out any hare-brained ideas I might come up with. The cost of doing so is very low (in terms of time and $$$), so I get to try out far more and weirder approaches than before when the costs were higher. If those ideas don't play out, fine, but I have a good enough success rate with left-field ideas to make it far more justifiable than before.
Also, it makes playing around with one-person projects a lot more practical. Like most people with a partner & kids, my down time is pretty precious, and tends to come in small chunks that are largely unplannable. For example, last night I spent 10 minutes waiting in a drive-through queue - that gave me about 8 minutes to kick off the next chunk of my one-person project development via my phone, review the results, then kick off the next chunk of development. Absolutely useful to me personally, whereas last year I would've simply sat there annoyed waiting to be served.
I know some people have an "outsourcing Lego" type mentality when it comes to AI coding - it's like buying a cool Lego kit, then watching someone else assemble it for you, removing 99% of the enjoyment in the process. I get that, but I prefer to think of it in terms of being able to achieve orders of magnitude more in the time I have available, at close to zero extra cost.
Did you imagine yourself then, as you are now, hunched over a glowing rectangle? Demanding imperiously that the world share your contempt for the sublime, share your jaundiced view of those who pour the whole of themselves into the act of creation, so that everyone might once again be graced with wonder anew?
I hope you can find a work of art that breaks you free of your resentment.
Except for eating and sleeping, all other human activities are fake now.
We can play a peaceful game and an intense one.
Now, when we think, we can always find the right level of abstraction to think at. Decades ago a programmer thought in machine code; now we think with high-level concepts, maybe even towards philosophy.
A good outcome always requires hard thinking. We can and we WILL think hard, at an appropriate level.
In grad school, I had what I'd call the classic version. I stayed up all night mentally working on a topology question about turning a 2-torus inside out. I already knew you can't flip a torus inside out in ordinary R^3 without self-intersection. So I kept moving and stretching the torus and the surrounding space in my head, trying to understand where the obstruction actually lived.
Sometime around sunrise, it clicked that if you allow the move to go through infinity (so effectively S^3), the inside/outside distinction I was relying on just collapses, and the obstruction I was visualizing dissolves. Birds were chirping, I hadn't slept, and nothing useful came out of it, but my internal model of space felt permanently upgraded. That's clearly "thinking hard" in that sense.
But there's another mode I've experienced that feels related but different. With a tough Code Golf problem, I might carry it around for a week. I'm not actively grinding on it the whole time, but the problem stays loaded in the background. Then suddenly, in the shower or on a walk, a compression trick or a different representation just clicks.
That doesn't feel "hard" moment to moment. It's more like keeping a problem resident in memory long enough for the right structure to surface.
One is concentrated and exhausting, the other is diffuse and slow-burning. They're different phenomenologically, but both feel like forms of deep engagement that are easy to crowd out.
And being a reasonable person I, just like the author, choose the helicopter. That's it, that's the whole problem.
You just detailed an example of where you did in fact reduce your thinking.
Managers who tell people what to get done do not think about the problem.
It's like we had the means for production and more or less collectively decided "You know what? Actually, the bourgeoisie can have it, sure."
It would be a lot more interesting to point out the differences and similarities yourself. But then if you wanted an interesting discussion you wouldn’t be posting trite flamebait in the first place, would you?
I've resigned to mostly using it for "tip-of-my-tongue" style queries, i.e. "where do I look in the docs". Especially for Apple platforms where almost nothing is documented except for random WWDC video tutorials that lack associated text articles.
I don't trust LLMs at all. Everything they make, I end up rewriting from scratch anyway, because it's always garbage. Even when they give me ideas, they can't apply them properly. They have no standards, no principle. It's all just slop.
I hate this. I hate it because LLMs give so many others the impression of greatness, of speed, and of huge productivity gains. I must look like some grumpy hermit, stuck in their ways. But I just can't get over how LLMs all give me the major ick. Everything that comes out of them feels awful.
My standards must be unreasonably high. Extremely, unsustainably high. That must also be the reason I hardly finish any projects I've ever started, and why I can never seem to hit any deadlines at work. LLMs just can't reach my exacting, uncompromising standards. I'm surely expecting far too much of them. Far too much.
I guess I'll just keep doing it all myself. Anything else really just doesn't sit right.
I don’t think you can get the same satisfaction out of these tools if what you want to do is not novel.
If you are exploring the space of possibilities for which there are no clear solutions, then you have to think hard. Take on wildly more ambitious projects. Try to do something you don’t think you can do. And work with them to get there.
I too am an ex-physicist used to spending days thinking about things, but programming is a gold mine as it is adjacent to computer science. You can design a programming language (or improve an existing one), try to build a better database (or improve an existing one), or many other things that are quite hard.
The LLM is a good rubber duck for exploring the boundaries of human knowledge (or at least knowledge common enough to be in its training set). It can't really "research" on its own, and whenever you suggest something novel and plausible it gets sycophantic, but it can help you prototype ideas and implementation strategies quite fast, and it can help you explore how existing software works and tackles similar problems (or help you start working on an existing project).
I've found that the best way to actually think hard about something is to write about it, or to test yourself on it. Not re-read it. Not highlight it. Generate questions from the material and try to answer them from memory.
The research on active recall vs passive review is pretty clear: retrieval practice produces dramatically better long-term retention than re-reading. Karpicke & Blunt (2011) showed that practice testing outperformed even elaborative concept mapping.
So the question isn't whether AI summarizers are good or bad -- it's whether you use them as a crutch to avoid thinking, or as a tool to compress the boring parts so you can spend more time on the genuinely hard thinking.
The current major problem with the software industry isn't quantity, it's quality; and AI just increases the former while decreasing the latter. Instead of e.g. finding ways to reduce boilerplate, people are just using AI to generate more of it.
I got excited about agents because I told myself it would be "just faster typing". I told myself that my value was never as a typist and that this is just the latest tool like all the tools I had eagerly added to my kit before.
But the reality is different. It's not just typing for me. It's coming up with crap. Filling in the blanks. Guessing.
The huge problem with all these tools is they don't know what they know and what they don't. So when they don't know they just guess. It's absolutely infuriating.
It's not like a Ferrari. A Ferrari does exactly what I tell it to, up to the first-order effects of how open the throttle is, what direction the wheels face, how much pressure is on the brakes etc. The second-order effects are on me, though. I have to understand what effect these pressures will have on my ultimate position on the road. A normie car doesn't give you as much control but it's less likely to come off the road.
Agents are like a teleport. You describe where you want to be and it just takes you directly there. You say "warm and sunny" and you might get to the Bahamas, but you might also get to the Sahara. So you correct: "oh no, I meant somewhere nice" and maybe you get to the Bahamas. But because you didn't travel there yourself you failed to realise what you actually got. Yeah, it's warm, sunny and nice, but now you're on an island in the middle of nowhere and have to import basically everything. So I prompt again and rewrite the entire codebase, right?
Linus Torvalds works with experts that he trusts. This is like a manic 5 year old that doesn't care but is eager to work. Saying we all get to be Torvalds is like saying we all get to experience true love because we have access to porn.
Thoughtful retorts such as this are deserving of the same esteem one affords the "rubber v glue"[0] idiom.
As such, I must oblige.
0 - https://idioms.thefreedictionary.com/I%27m+rubber%2c+you%27r...
I feel the existential problem for a world that follows the religion of science and technology to its extreme, is that most people in STEM have no foundation in humanities, so ethical and philosophical concerns never pass through their mind.
We have signed a pact with the devil to help us through boring tasks, and no one thought to ask what we would give in exchange.
You’re taking a bunch of pre-built abstractions written by other people on top of what the computer is actually doing and plugging them together like LEGOs. The artificial syntax that you use to move the bricks around is the thing you call coding.
The human element of discovery is still there if a robot stacks the bricks based on a different syntax (natural language); nothing about that precludes authenticity or the human element of creation.
This actually leaves me with a lot more time to think, about what I want the UI to look like, how I'll market my software, and so on.
I too did a lot of AI coding, but when I saw the spaghetti it made, I went back to regular coding, using ask mode (not agent mode) as a search engine.
I recently used the analogy of when compilers were invented. Old-school coders wrote machine code, and handled the intricacies of memory and storage and everything themselves. Then compilers took over, we all moved up an abstraction layer and started using high-level languages to code in. There was a generation of programmers who hated compilers because they wrote bad, inelegant, inefficient, programs. And for years they were right.
The hard problems now are "how can I get a set of non-deterministic, fault-prone, LLM agents to build this feature or product with as few errors as possible, with as little oversight as possible?". There's a few generic solutions, a few good approaches coming out, but plenty of scope for some hard thought in there. And a generic approach may not work for your specific project.
And it's also somewhat egotistical it seems to me. I sense a pattern that many developers care more about doing what they want instead of providing value to others.
Thinking hard has never been easier.
I think AI for an autodidact is a boon. Now I suddenly have a teacher who is always accessible and will teach me whatever I want, for as long as I want, exactly the way I want, and I don't have to worry about my social anxiety kicking in.
Learn advanced cryptography? AI. Figure out formal verification? AI. Etc.
Except Linus understands the code that is being reviewed / merged in since he already built the kernel and git by hand. You only see him vibe-coding toys but not vibe-coding in the kernel.
Today, we are going to see a gradual skill atrophy with developers over-relying on AI and once something like Claude goes down, they can't do any work at all.
The most accurate representation is that AI is going to rapidly turn lots of so-called 'senior engineers', over-reliant on it and unable to detect bad AI code, into the equivalent of juniors and interns.
1. I received the ticket, as soon as I read it I had a hunch it was related to some querying ignoring a field that should be filtered by every query (thinking)
2. I give this hunch to the AI, which goes searching in the codebase in the areas I suggested the problem could be, and that's when it finds the issue and provides a fix
3. I think the problem could be spread given there is a method that removes the query filter, it could have been used in multiple places, so I ask AI to find other usages of it (thinking, this is my definition of "steering" in this context)
4. AI reports 3 more occurrences and suggests that 2 have the same bug, but one is ok
5. I go in, review the code and understand it and I agree, it doesn't have the bug (thinking)
6. AI provides the fix for all the right spots, but I said "wait, something is fishy here, there is a commit that explicitly says it was added to remove the filter, why is that?" (thinking), so I ask AI to figure out why the commit says that
7. AI proceeds to run a bunch of git-history related commands, finds some commit and then does some correlation to find another commit. This other commit introduced the change at the same time to defend from a bug in a different place
8. I understand what's going on now, I'm happy with the fix, the history suggests I am not breaking stuff. I ask AI to write a commit with detailed information about the bug and the fix based on the conversation
There is a lot of thinking involved. What's reduced is search tooling. I can be way more fuzzy: rather than `rg 'whatever'` I now say "find this and similar patterns".
Maybe I subconsciously picked these up because my Thinker side was starved for attention. Nice post.
People are paying for it because it helps them. Who are you to whine about it?
FSD is very very good most of the time. It's so good (well, v14 is, anyway), it makes it easy to get lulled into thinking that it works all the time. So you check your watch here, check your phone there, and attend to other things, and it's all good until the car decides to turn into a curb (which almost happened to me the other day) or swerve hard into a tree (which happened to someone else).
Funny enough, much like AI, Tesla is shoving FSD down people's throats by gating Autopilot 2, a lane keeping solution that worked extremely well and is much friendlier to people who want limited autonomy here and there, behind the $99/mo FSD sub (and removing the option to pay for the package out of pocket).
Personally, I am going deeper in Quantum Computing, hoping that this field will require thinkers for a long time.
for "Thinker" brain food. (it still has the issue of not being a pragmatic use of time, but there are plenty interesting enough questions which it at least helps)
I thought "on-shoring" is already commonly used for the process that undos off-shoring.
For me, Claude, Suno, Gemini and AI tools are pure bliss for creation, because they eliminate the boring grunt work. Who cares how to implement OAuth login flow, or anything that has been done 1000 times?
I do not miss doing grunt work!
The problem is rather that programmers who work on business logic often hate programmers who are actually capable of seeing (often mathematical) patterns in the business logic that could be abstracted away; in other words: many business logic programmers hate abstract mathematical stuff.
So, in my opinion/experience this is a very self-inflicted problem that arises from the whole culture around business logic and business logic programming.
This rather suggests that the kind of performance optimizations you ask for are very "standard".
I suspect those using the tools in the best way are thinking harder than ever for this reason.
Code generation progression in LLMs still carries a higher objective risk of failure, depending on the experience of the person using it, because:
1. You still can't trust that the code works (even if it has tests), so it needs thorough human supervision and still requires ongoing maintenance.
2. Hence (1), it can cost you more money than the tokens you spent building it in the first place when it goes horribly wrong in production.
Image generation progression comes with close to no operational impact, and has far less human supervision and can be safely done with none.
More importantly, thinking and building are two very different modes of operating and it can be hard to switch at moment's notice. I've definitely noticed myself getting stuck in "non-thinking building/fixing mode" at times, only realizing that I've been making steady progress into the wrong direction an hour or two in.
This happens way less with LLMs, as they provide natural time to think while they churn away at doing.
Even when thinking, they can help: They're infinitely patient rubber ducks, and they often press all the right buttons of "somebody being wrong on the Internet" too, which can help engineers that thrive in these kinds of verbal pro/contra discussions.
LLMs don’t “reason” the same way humans do. They follow text predictions based on statistical relevance. So raising the temperature will more likely increase the likelihood of unexecutable pseudocode than it would create a valid but more esoteric implementation of a problem.
But if he enjoyed being in the forest, and _doesn't really care about lumber at all_ (Because it turns out, he never used or liked lumber, he merely produced it for his employer) then these screens won't give him any joy at all.
That's how I feel. I don't care about code, but I also don't really care about products. I mostly care about the craft. It's like solving sudokus. I don't collect solved sudokus. Once solved I don't care about them. Having a robot solve sudokus for me would be completely pointless.
> I sense a pattern that many developers care more about doing what they want instead of providing value to others.
And you'd be 100% right. I do this work because my employer provides me with enough sudokus. And I provide value back which is more than I'm compensated with. That is: I'm compensated with two things: intellectual challenge, and money. That's the relationship I have with my employer. If I could produce 10x more but I don't get the intellectual challenge? The employer isn't giving me what I want - and I'd stop doing the work.
I think "You do what the employer wants, produce what needs to be produced, and in return you get money" is a simplification that misses the literal forest for all the forestry.
But I feel better for not taking the efficient way. Having to be the one to make a decision at every step of the way, choosing the constraints and where I cut my losses on accuracy, I think has taught me more about the subject than even reading literature would’ve directly stated.
LLMs aren't a "layer of abstraction."
99% of people writing in assembly don't have to drop down into manual cobbling of machine code. People who write in C rarely drop into assembly. Java developers typically treat the JVM as "the computer." In the OSI network stack, developers writing at level 7 (application layer) almost never drop to level 5 (session layer), and virtually no one even bothers to understand the magic at layers 1 & 2. These all represent successful, effective abstractions for developers.
In contrast, unless you believe 99% of "software development" is about to be replaced with "vibe coding", it's off the mark to describe LLMs as a new layer of abstraction.
I find languages like JavaScript promote the idea of "Lego programming" because you're encouraged to use a module for everything.
But when you start exploring ideas that haven't been thoroughly explored already, and particularly in systems languages which are less zealous about DRY (don't repeat yourself) methodologies, then you can feel a lot more like a sculptor.
Likewise if you’re building frameworks rather than reusing them.
So it really depends on the problems you’re solving.
For general day-to-day coding for your average 9-to-5 software engineering job, I can definitely relate to why people might think coding is basically “LEGO engineering”.
Just a few days ago, I let it do something that I thought was straightforward, but it kept inserting bugs, and after a few hours of interaction it said itself it was running in circles. It took me a day to figure out what the problem was: an invariant I had given it was actually too strong, and needed to be weakened for a special case. If I had done all of it myself, I would have been faster, and discovered this quicker.
For a different task in the same project I used it to achieve a working version of something in a few days that would have taken me at least a week or two to achieve on my own. The result is not efficient enough for the long term, but for now it is good enough to proceed with other things. On the other hand, with just one (painful) week more, I would have coded a proper solution myself.
What I am looking forward to is being able to converse with the AI in terms of a hard logic. That will take care of the straightforward but technically intricate stuff that it cannot do yet properly, and it will also allow the AI to surface much quicker where a "jump of insight" is needed.
I am not sure what all of this means for us needing to think hard. Certainly thinking hard will be necessary for quite a while. I guess it comes down to when the AIs will be able to do these "jumps of insight" themselves, and for how long we can jump higher than they can.
With AI we can set high bars and do complex, original stuff. Obviously boilerplate and common patterns are slop, slapped together without much thinking. That's why you branch into new creative territory. The challenge then becomes visualising the mental map of modular pieces all working nicely together at the right time to achieve your original intent.
Not inherently, no. Reading it and getting a cursory understanding is easy, truly understanding what it does well, what it does poorly, what the unintended side effects might be, that's the difficult part.
In real life I've witnessed quite a few intelligent and experienced people who truly believe that they're thinking "really hard" and putting out work that's just as good as their previous, pre-AI work, and they're just not. In my experience it roughly correlates to how much time they think they're saving, those who think they're saving the most time are in fact cutting corners and putting out the sloppiest quality work.
To say it will free people of the boring tasks is so short sighted....
Isn't the analogy apt? You can't make a working car using a lump of clay, just a car statue, a lump of clay is already an abstraction of objects you can make in reality.
There are tips and tricks on how to manage them and not knowing them will bite you later on. Like the basic thing of never asking yes or no questions, because in some cultures saying "no" isn't a thing. They'll rather just default to yes and effectively lie than admit failure.
Correct. However, you will probably notice that your solution to the problem doesn't feel right, when the bricks that are available to you, don't compose well. The AI will just happily smash together bricks and at first glance it might seem that the task is done.
Choosing the right abstraction (bricks) is part of finding the right solution. And understanding that choice often requires exploration and contemplation. AI can't give you that.
On one side, there are people who have become a bit more productive. They are certainly not "10x," but they definitely deliver more code. However, I do not observe a substantial difference in the end-to-end delivery of production-ready software. This might be on me and my lack of capacity to exploit the tools to their full extent. But, iterating over customer requirements, CI/CD, peer reviews, and business validation takes time (and time from the most experienced people, not from the AI).
On the other hand, sometimes I observe a genuine degradation of thinking among some senior engineers (there aren't many juniors around, by the way). Meetings, requirements, documents, or technology choices seem to be directly copy/pasted from an LLM, without a grain of original thinking, many times without insight.
The AI tools are great though. They give you an answer to the question. But many times, asking the right question, and knowing when the answer is not correct, is the main issue.
I wonder if the productivity boost that senior engineers actually need is to profit from the accumulated knowledge found in books. I know it is an old technology and it is not fashionable, but I believe it is mostly unexploited if you consider the whole population of engineers :D
If anything, we have more intractable problems needing deep creative solutions than ever before. People are dying as I write this. We’ve got mass displacement, poverty, polarization in politics. The education and healthcare systems are broken. Climate change marches on. Not to mention the social consequences of new technologies like AI (including the ones discussed in this post) that frankly no one knows what to do about.
The solution is indeed to work on bigger problems. If you can’t find any, look harder.
I wonder if software creation will be in a similar place. There still might be a small market for handmade software but the majority of it will be mass produced. (That is, by LLM or even software itself will mostly go away and people will get their work done via LLM instead of "apps")
Just don't use AI. The idea that you have to ship ship ship 10X ship is an illusion and a fraud. We don't really need more software.
You might have missed their point.
This is no different than many things. I could grow a tree and cut it into wood but I don't. I could buy wood and nails and brackets and make furniture but I don't. I instead just fill my house/apartment with stuff already made and still feel like it's mine. I made it. I decided what's in it. I didn't have to make it all from scratch.
For me, lots of programming is the same. I just want to assemble the pieces
> When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make
No, your favorite movie is not crap because the creators didn't grind their own lenses. Popular and highly acclaimed games are not crap because they didn't write their own physics engine (Zelda uses Havok) or their own game engine (plenty of great games use Unreal or Unity).
Sure, I'm doing less technical thinking these days. But all the hard thinking is happening on feature design.
Good feature design is hard for AI. There's a lot of hidden context: customer conversations, unwritten roadmaps, understanding your users and their behaviour, and even an understanding of your existing feature set and how this new one fits in.
It's a different style of thinking, but it is hard, and a new challenge we gotta embrace imo.
What you get right now is mass replicated software, just another copy of sap/office/Spotify/whatever
That software is not made individually for you, you get a copy like millions of other people and there is nearly no market anymore for individual software.
LLMs might change that; we have a bunch of internal apps now for small annoying things.
They all have their quirks, but are only accessible internally and make life a little bit easier for people working for us.
Most of them are one-shot LLM things: throw them away if you don't need them anymore, or just one-shot them again.
Very few people (even before LLM coding tools) actually did low level "artisanal" coding; I'd argue the vast majority of software development goes into implementing features in b2b / b2c software, building screens, logins, overviews, detail pages, etc. That requires (required?) software engineers too, and skill / experience / etc, but it was more assembling existing parts and connecting them.
Years ago there was already a feeling that a lot of software development boiled down to taping libraries together.
Or from another perspective, replace "LLM" with "outsourcing".
Or, risking to beat the metaphor to death, because over a span of time I'll cross many more deserts than I would have on a camel, and because I'll cross deserts that I wouldn't even try crossing on a camel.
Maybe you don’t care about the environment (which includes yourself and the people you like), or income inequality, or the continued consolidation of power in the hands of a few deranged rich people, or how your favourite artists (do you have any?) are exploited by the industry, but some of us have been banging the drum about those issues for decades. Just because you’re only noticing it now or don’t care it doesn’t mean it’s a new thing or that everyone else is being duplicitous. It’s a good thing more people are waking up and talking about those.
I'd argue that in most cases it's better to do some research and find out if a tool already exists, and if it isn't exactly how you want it... to get used to it, like one did with all other tools they used.
Counterpoint to my own counterpoint, will anyone actually (want to) read it?
Counterpoint to the third degree, to loop it back around: an LLM might, and I'd even argue an LLM is better at reading and ingesting long text (I'm thinking architectural documentation etc.) than humans are. Speaking for myself, I struggle to read attentively through e.g. a document; I quickly lose interest and scan-read or just focus on what I need instead.
If you think too much you get into dead ends and you start having circular thoughts, like when you are lost in the desert and you realise you are in the same place again after two hours, because you have made a great circle (one of your legs is dominant over the other).
The thinker needs feedback from the real world. It needs constant testing of hypotheses against reality, or else you are dealing with ideology, not critical thinking. It needs other people and confrontation of ideas so the ideas stay fresh and strong and do not stagnate in isolation and personal biases.
That was the most frustrating thing before AI: a thinker could think very fast, but was limited in testing by the ability to build. Usually she had to delegate it to people who were better builders, or else she had to be a builder herself, doing what she hates all the time.
The other day people were talking about metrics: the number of lines of code people vs. LLMs could output in any given time, or the lines of code in an LLM-assisted application, using LOC as a metric for productivity.
But would an LLM ever suggest using a utility or library, or re-architecting an application, over writing its own code?
I've got a fairly simple application: it renders a table (and in future some charts) with metrics. At the moment all of that is done "by hand"; the last features were things like filtering and sorting the data. But that kind of thing can also be done by a "data table" library. Or the whole application could be thrown out in favor of a workbook (one of those data analysis tools; I'm not at home in that area at all). That'd save hundreds of lines of code plus maintenance burden.
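To make that trade-off concrete, here is a minimal sketch (hypothetical data and column names, pandas chosen purely for illustration) of how much hand-rolled filtering and sorting a library or workbook-style tool can absorb:

```python
import pandas as pd

# Hypothetical metrics; in the real app this would come from wherever the table is fed from.
df = pd.DataFrame({
    "service": ["api", "worker", "api", "db"],
    "p95_ms":  [120, 340, 95, 210],
    "errors":  [3, 0, 1, 7],
})

# The filtering and sorting that were hand-written features in the original app
# collapse into one chained expression here.
slow_api = (
    df[(df["service"] == "api") & (df["p95_ms"] > 100)]
    .sort_values("p95_ms", ascending=False)
)
print(slow_api)
```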
1826 - The Heliograph - 8+ hours
1839 - The Daguerreotype - 15–30 Mins
1841 - The Calotype - 1–2 Mins
1851 - Wet Plate Collodion - 2–20 Secs
1871 - The Dry Plate - < 1 Second.
So it took 45 years to perfect the process so you could take an instant image. Yet we complain after 4 years of LLMs that they're not good enough.
> Yes, I blame AI for this.
> I am currently writing much more, and more complicated software than ever, yet I feel I am not growing as an engineer at all. [...] (emphasis added by me)
AI is a force multiplier for accidental complexity in the Brooks sense. (https://en.wikipedia.org/wiki/No_Silver_Bullet)
Frustrated rants about deliverables aside, I don't think that's the case.
Trying to find the right level is the art. Once you learn the tools of the trade and can do abstraction, it's natural to want to abstract everything. Most programmers go through such a phase. But sometimes things really are distinct and trying to find an abstraction that does both will never be satisfactory.
When building a house there are generally a few distinct trades that do the work: bricklayers, joiners, plumbers, electricians etc. You could try to abstract them all: it's all just joining stuff together isn't it? But something would be lost. The dangers of working with electricity are completely different to working with bricks. On the other hand, if people were too specialised it wouldn't work either. You wouldn't expect a whole gang of electricians, one who can only do lighting, one who can only do sockets, one who can only do wiring etc. After centuries of experience we've found a few trades that work well together.
So, yes, it's all just abstraction, but you can go too far.
That has actually been a major problem for me in the past, where my core idea is too simple and I don't give "the muse" enough time to visit because it doesn't take me long enough to build it. Anytime I have given the muse time to visit, it always has.
Discovering the right problem to solve is not necessarily coupled to being "hands on" with the "materials you're shaping".
Code that fails to execute or compile is the default expectation for me. That's why we feed compile and runtime errors back into the model after it proposes something each time.
I'd much rather the code sometimes not work than to get stuck in infinite tool calling loops.
So whether you’re writing the spec code out by hand or asking an LLM to do it is beside the point if the code is considered a means to an end, which is what the post above yours was getting at.
Probably not vibe coding, but most certainly with some AI automation
Skipping over that step results in a world of knock offs and product failures.
People buy Zara or H&M because they can offload the work of verifying quality to the brand.
This was a major hurdle that mass manufacturing had to overcome to achieve dominance.
I'm starting to wonder if we lose something in all this convenience. Perhaps my life is better because I cook my own food, wash my own dishes, chop my own firewood, drive my own car, write my own software. Outwardly the results look better the more I outsource but inwardly I'm not so sure.
On the subject of furnishing your house the IKEA effect seems to confirm this.
> For me, lots of programming is the same. I just want to assemble the pieces
How did those pieces come to be? By someone assembling other pieces, or by someone crafting them out of nothing because nobody else had written them at the time?
Of course you reuse other parts and abstractions for whatever things you're not working on, but each time you do something that hasn't been done before you can't help but engage the creative process, even if you're sitting on top of 50 years' worth of abstractions.
In other words, what a programmer essentially has is a playfield. And whether the playfield is a stack of transistors or coding agents, when you program you create something new even if it's defined and built in terms of the playfield.
Software engineers are lazy. The good ones are, anyway.
LLMs are extremely dangerous for us because they can easily become a "be lazy button". Press it whenever you want and get that dopamine hit -- you don't even have to dive into the weeds and get dirty!
There's a fine line between "smart autocomplete" and "be lazy button". Use it to generate a boilerplate class, sure. But save some tokens and fill that class in yourself. Especially if you don't want to (at your own discretion; deadlines are a thing). But get back in those weeds, get dirty, remember the pain.
We need to constantly remind ourselves of what we are doing and why we are doing it. Failing that, we forget the how, and eventually even the why. We become the reverse centaur.
And I don't think LLMs are the next layer of abstraction -- if anything, they're preventing it. But I think LLMs can help build that next layer... it just won't look anything like the weekly "here's the greatest `.claude/.skills/AGENTS.md` setup".
If you have to write a ton of boilerplate code, then abstract away the boilerplate in code (nondeterminism is so 2025). And then reuse that abstraction. Make it robust and thoroughly tested. Put it on github. Let others join in on the fun. Iterate on it. Improve it. Maybe it'll become part of the layers of abstraction for the next generation.
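As one concrete (and entirely hypothetical) sketch of what abstracting boilerplate in plain code can look like, here is a small retry decorator that replaces the try/except/sleep blocks that would otherwise get copy-pasted around every flaky call; the function names are made up for illustration:

```python
import functools
import time

def retry(times: int = 3, delay: float = 0.5, exceptions=(Exception,)):
    """Reusable, deterministic boilerplate: retry a flaky call a fixed number of times."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, times + 1):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == times:
                        raise  # out of attempts: surface the original error
                    time.sleep(delay)
        return wrapper
    return decorator

@retry(times=3, delay=1.0)
def fetch_report(url: str) -> str:
    # Hypothetical flaky call; stands in for whatever code you keep rewriting around it.
    ...
```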
I can do some CRUD apps where it's just data input to data store to output, with little shaping needed. Or I can do apps with lots of filters, actions and logic happening based on what's inputted, which require some thought to ensure they actually solve the problem they're proposed for.
"Shaping the clay" isn't about the clay, it's about the shaping. If you have to make a ball of clay and also have to make a bridge of Lego a 175kg human can stand on, you'll learn more about Lego and building it than you will about clay.
Get someone to give you a Lego instruction sheet and you'll learn far less, because you're not shaping anymore.
How are you doing this via your phone?
deciding whether to use that to work on multiple features on the same code base, or the same feature in multiple variations is hard
deciding whether to work on a separate project entirely while all of this is happening is hard and mentally taxing
planning all of this up for a few hours and watching it go all at once autonomously is satisfying!
claude via browser and claude mobile apps function this way
but alongside that, people do make tunnels to their personal computer and set up ways to be notified on their phone, or to get the agent unstuck from their phone when it asks for a permission
If you just chuck ideas at the external coding team/tool you often get rubbish back.
If you're good at managing the requirements and defining things well you can achieve very good things with much less cost.
Obviously I am not comparing his final product with my code, I am simply pointing out how this metaphor is flawed. Having "workers" shape the material according to your plans does not reduce your agency.
I can imagine many positions work out this way in startups
it's important to think hard sometimes, even if it means taking time off to do the thinking - you can do it without the socioeconomic pressure of a work environment
An even better analogy is the slot machine. Once you've "won" one time it's hard to break the cycle. There's so little friction to just having another spin. Everyone needs to go and see the depressed people at slot machines at least once to understand where this ends.
With AI the pros outweigh the cons, at least at the moment, with what we collectively have figured out so far. But with that, every day I wonder if it's possible now to be more ambitious than ever and take on much bigger problems with the pretend smart assistant.
It's obviously not wrong to fly over the desert in a helicopter. It's a means to an end and can be completely preferable. Myself, I'd prefer to be in a passenger jet even higher above it, at a further remove personally. But I wouldn't think that doing so makes me someone who knows the desert the way someone who has crossed it on foot does. It is okay to prefer and utilize the power of "the next abstraction", but I think it's rather pig-headed to insist that nothing of value is lost to people who are mourning the passing of what they gained from intimate contact with the territory. And no, it's not just about the literal typing; the advent of LLMs is not the 'end of typing', and that framing is a more reductionist failure to see the point.
The humane and the machinic need to meet halfway - any computing endeavor involves not only specifying something clearly enough for a computer to execute it, but also communicating to humans how to benefit from the process thus specified. And that's the proper domain not only of software engineering, but the set of related disciplines (such as the various non-coding roles you'd have in a project team - if you have any luck, that is).
But considering the incentive misalignments which easily come to dominate in this space even when multiple supposedly conscious humans are ostensibly keeping their eyes on the ball, no matter how good the language machines get at doing the job of any of those roles, I will still intuitively mistrust them, exactly as I mistrust any human or organization, to responsibly wield the kind of pre-LLM power required for coordinating humans well enough to produce industrial-scale LLMs in the first place.
What's said upthread about the wordbox continually trying to revert you to the mean as you're trying to prod it with the cowtool of English into outputting something novel, rings very true to me. It's not an LLM-specific selection pressure, but one that LLMs are very likely to have 10x-1000xed as the culmination of a multigenerational gambit of sorts; one whose outset I'd place with the ever-improving immersive simulations that got the GPU supply chain going.
I'm not immune to that and I catch myself sometimes being more reluctant to adapt. I'm well aware and I actively try to force myself to adapt. Because the alternative is becoming stuck in my ways and increasingly less relevant. There are a lot of much younger people around me that still have most of their careers ahead of them. They can try to whine about AI all they want for the next four decades or so but I don't think it will help them. Or they can try to deal with the fact that these tools are here now and that they need to learn to adapt to them whether they like it or not. And we are probably going to see quite some progress on the tool front. It's only been 3 years since ChatGPT had its public launch.
To address the core issue here. You can use AI or let AI use you. The difference here is about who is in control and who is setting the goals. The traditional software development team is essentially managers prompting programmers to do stuff. And now we have programmers prompting AIs to do that stuff. If you are just a middle man relaying prompts from managers to the AI, you are not adding a lot of value. That's frustrating. It should be because it means apparently you are very replaceable.
But you can turn that around. What makes that manager the best person to be prompting you? What's stopping them from skipping that entirely? Because that's your added value. Whatever you are good at and they are not is what you should be doing most of your time. The AI tools are just a means to an end to free up more time for whatever that is. Adapting means figuring that out for yourself and figuring out things that you enjoy doing that are still valuable to do.
There's plenty of work to be done. And AI tools won't lift a finger to do it until somebody starts telling them what needs doing. I see a lot of work around me that isn't getting done. A lot of people are blind to those opportunities. Hint: most of that stuff still looks like hard work. If some jerk can one shot prompt it, it isn't all that valuable and not worth your time.
Hard work usually involves thinking hard, skilling up, and figuring things out. The type of stuff the author is complaining he misses doing.
Reverse engineering is imo the best way of getting the experience of pushing your thinking in a controlled way, at least if you have the kind of personality where you are stubborn in wanting to solve the problem.
Go crack an old game or something!
I do miss hard thinking, I haven't really found a good alternative in the meantime. I notice I get joy out of helping my kids with their, rather basic, math homework, so the part of me that likes to think and solve problems creatively is still there. But it's hard to nourish in today's world I guess, at least when you're also a 'builder' and care about efficiency and effectiveness.
I'll spend years working on a from scratch OS kernel or a vulkan graphics engine or whatever other ridiculous project, which never sees the light of day, because I just enjoy the thinking / hard work. Solving hard problems is my entertainment and my hobby. It's cool to eventually see results in those projects, but that's not really the point. The point is to solve hard problems. I've spent decades on personal projects that nobody else will ever see.
So I guess that explains why I see all the ai coding stuff and pretty much just ignore it. I'll use ai now as an advanced form of google, and also as a last ditch effort to get some direction on bugs I truly can't figure out, but otherwise I just completely ignore it. But I guess there's other people, the builders, where ai is a miraculous thing and they're going to crazy lengths to adopt it in every workflow and have it do as much as possible. Those 'builder' types of people are just completely different from me.
This is why I am so deeply opposed to using AI for problem solving I suppose: it just doesn’t play nice with this process.
I realized that when a friend of mine gave me Factorio as a gift last Christmas, and I found myself facing the exact same resistance I'm facing while thinking about working on my personal projects. To be more specific, it's a fear of, and urge toward, closing the game and leaving it "for later" the moment I discover that I've either done something wrong or that new requirements have appeared that will force me to change the way my factories connect with each other (or even their placement). Example: Tutorial 4 introduces the player to research and labs, and this feeling appears when I realize that green science requires me to introduce all sorts of spaghetti just to create the mats needed for it!
So I've done what any AI user would do and opted to use ChatGPT to push through the parts where things are either overwhelming, uncertain, too open-ended, or everything in between. The result works, because the LLM has been trained on Factorio guides, and it goes as far as suggesting layouts to save me some headache!
Awesome, no? Except all I've done is outsource the decision of how to go about "the thing" to someone else. And while it's true that I could have done this even before LLMs by simply watching a YouTube video guide, the LLM's help doesn't stop there: it can alleviate my indecisiveness and frustration with open-ended problems in personal projects, it can recommend a project structure, and it can generate bullet-pointed lists so I can pretend I work for a company where someone else creates the spec and I just follow it step by step like a good junior software engineer would.
And yet all I did was postpone the inevitable exercise of a very useful mental habit: to navigate uncertainty, pause and reflect, plan, evaluate a trade-off or two here and there. And while there are other places and situations where I can exercise that behavior, the fact remains that my specific use of the LLM removed that weight from my shoulders. I became, objectively, someone who builds his project ideas and makes progress in his Factorio playthrough, but the trade-off is that I remain the same person who will duck and run the moment resistance happens, and succumb to the urge to either push "the thing" to tomorrow or ask ChatGPT for help.
I cannot imagine how someone would claim that removing an exercise from my daily gym visit will not result in weaker muscles. There are so many hidden assumptions in such statements, and an excessive focus on results in "the new era where you should start now or be left behind", where nobody is thinking about how this affects the person and how they ultimately function in their daily lives across multiple contexts. It's all about output, output, output.
How far are we from the day when people will say "well, you certainly don't need to plan a project, a factory layout, or even decide; just have ChatGPT summarize the trade-offs, read the bullet points, and choose"? We're off-loading a portion of the research AND a portion of the execution, thinking we'll surely still be activating the neurosynapses in our brains that retain habits, just like someone who lifts 50% lighter weights at the gym expects to maintain muscle mass or burn fat.
Manually coding engaged my brain much more and somehow was less exhausting. It kind of feels like getting out of bed and doing something vs. lazing around and ending up feeling more tired despite having to do less.
Never have I ever used an LLM.
Not being hands-on, and more important not LISTENING to the hands-on people and learning from them, is a massive issue in my surroundings.
So thinking hard on something is cool. But making it real is a whole different story.
Note: as Steve used to say, "real artists ship".
I actually believe that there are much better ways to incorporate AI into software development than any of the mechanisms we’ve seen so far. For instance, it would make a lot more sense for you to actually write the software manually and get the usual autocomplete suggestions, along with some on-the-fly reviews and extension proposals, such as writing the body of a function that you’re calling from the core function you’re writing now.
So, people fake things to get a fake life. Reminds me of a Russian joke about factory workers: "They pretend to pay, and we pretend to work".
So you can just, like, tweak it when it's working against your intent in either direction?
Okay, for you that is new - post-LLM.
For me, pre-LLM I thought about all those things as well as the code itself.
IOW, I thought about even more things. Now you (if I understand your claim correctly) think only about those higher level things, unencumbered by stuff like implementation misalignments, etc. By definition alone, you are thinking less hard.
------------------------
[1] Many times the thinking about code itself acted as a feedback mechanism for all those things. If thinking about the code itself never acted as a feedback mechanism to your higher thought processes then ... well, maybe you weren't doing it the way I was.
I don't allow my agent to write any code. I ask it for guidance on algorithms, and to supply the domain knowledge that I might be missing. When using it for game dev for example, I ask it to explain in general terms how to apply noise algorithms for procedural generation, how to do UV mapping etc, but the actual implementation in my language of choice is all by hand.
Honestly, I think this is a sweet spot. The amount of time I save getting explanations of concepts that would otherwise get a bit of digging to get is huge, but I'm still entirely in control of my codebase.
If you are outsourcing to an LLM in this case YOU are still in charge of the creative thought. You can just judge the output and tune the prompts or go deep in more technical details and tradeoffs. You are "just" not writing the actual code anymore, because another layer of abstraction has been added.
Isn't that a good thing? If you're stuck on the same problem forever, then you're not going to get past it and never move on to the next thing... /shrug
I'm replaceable after all. If there is someone who is better and more effective at solving problems in some objectively good way - they should have my job. The only reason I still have it is because it seems this is hard to find. Employers are stuck with people who solve problems in the way they like for varying personal reasons and not the objectively best way of solving problems.
The hard part in keeping employees happy is that you can't just throw more money at them to make them effective. Keeping them stimulated is the difficult part. Sometimes you must accept solving a problem that isn't the most critical one to address, or perhaps making a bad call business-wise, to keep employees happy, or to keep them at all. I think a lot of the "big rewrites" are in this category, for example: not really a good idea compared to maintenance/improvement, but what if the alternative is maintaining the old one _and_ losing the staff who could do that?
For now. Go back a year and take a look how the AI/LLM coding tools looked and worked back then.
That description is NOT coding; coding is a subset of that.
Coding comes once you know what you need to build. Coding is the process of expressing that in a programming language, and as you do so you apply all your knowledge, experience and, crucially, your taste to arrive at an implementation which does what's required (functionally and non-functionally) AND is open to the possibility of change in future.
Someone else here wrote a great comment about this the other day, along the lines of: take that week of work described in the GP's comment, and on the Friday afternoon delete all the code checked in. Coding is the part needed to recreate the check-in, which would take a lot less than a week!
All the other time was spent turning you into the developer who could understand why to write that code in the first place.
These tools do not allow you to skip the process of creation. They allow you to skip aspects of coding - if you choose to, they can also elide your tastes but that's not a requirement of using them, they do respond well to examples of code and other directions to guide them in your tastes. The functional and non-functional parts they're pretty good at without much steering now but i always steer for my tastes because, e.g. opus 4.5 defaults to a more verbose style than i care for.
Hard Things in Computer Science, and AI Aren't Fixing Them
Instead of pouring all of your efforts into making one single static object with no moving parts, you can simply specify the individual parts, have the machine make them for you, and pour your heart and soul into making a machine that is composed of thousands of parts, that you could never hope to make if you had to craft each one by hand from clay.
We used to have a way to do this before LLMs, of course: we had companies that employed many people, so that the top level of the company could simply specify what they wanted, and the lower levels only had to focus on making individual parts.
Even the person making an object from clay is (probably) not refining his own clay or making his own oven.
Maybe, but beware assuming you could do something you haven't actually tried to do.
Everything is easy in the abstract.
Or, it's like trying to make a MacBook Pro by buying electronics boards from AliExpress and wiring them together.
The last time I had to be a Thinker was because I was in Builder mode. I’ve been trying to build an IoT product but I’ve been wayyyy over my head because I knew maybe 5% of what I needed to be successful. So I would get stuck many, many times, for days or weeks at a time.
I will say though that AI has made the difference in the last few times I got stuck. But I do get more enjoyment out of Building than Thinking, so I embrace it.
But Pulp Fiction would not have been a masterpiece if Tarantino just typed “Write a gangster movie.” into a prompt field.
while Linus has his own efforts multiplied as well.
I strongly experience that coding agents are helping me think about stuff I wasn't able to think through before.
I very much have both of these builder and thinker personas inside me, and I just am not getting this experience with "lack of thinking" that I'm seeing so many other people write about. I have it exactly the other way around, even if I'm a similar arch type of person. I'm spending less time building and more time thinking than ever.
It seems to assume that vibe coding or like whatever you call the Gas Town model of programming is the only option, but you don't have to do that. You don't have to specify upfront what you want and then never change or develop that as you go through the process of building it, and you don't have to accept whatever the AI gives you on the other end as final.
You can explore the affordances of the technologies you're using and modify your design and vision for what you're building as you go; if anything, I've found AI coding makes it far easier to change and evolve my direction, because it can update all the various parts of the code that need to be updated when I want to change direction, as well as keeping the tests and specification and documentation in sync, easily and quickly.
You also don't need to take the final product as a given, a "simulacrum delivered from a vending machine": build, and then once you've gotten something working, look at it and decide that it's not really what you want, and then continue to iterate and change and develop it. Again, with AI coding, I've found this easier than ever because it's easier to iterate on things. The process is a bit faster for not having to move the text around and looking up API documentation myself, even though I'm directly dictating the architecture and organization and algorithms and even where code should go most of the time.
And with the method I'm describing, where you're in the code just as much as the AI is, just using it to do the text/API/code munging, you can even let the affordances of not just the technologies, but the source code and programming language itself, affect how you do this: if you care about the quality, clarity and organization of the code the AI is generating, you'll see when it's trying to brute-force its way past technical limitations and can redirect it to follow the grain instead. It just becomes easier and more fluid to do that.
If anything, AI coding in general makes it easier to have a conversation with the machine and its affordances and your design vision and so on than before, because it becomes easier to update everything and move everything around as your ideas change.
And nothing about it means that you need to be ignorant of what's going on; ostensibly you're reviewing literally every line of code it creates and deciding which libraries and languages, as well as the architecture, organization and algorithms, it's using. You are, aren't you? So you should know everything you need to know. In fact, I've learned several libraries and a language just from watching it work, enough that I can work with them without looking anything up, even new syntax and constructs that would have been very unfamiliar back in my manual coding days.
But these arguments and the OP's article do reinforce that AI rots brains. Even my sparing use of Google's Gemini and my interactions with the bots here have really dinged my ability to do simple math.
But since the AI is generating a lot of code, it is challenging me. It also allows me to tackle problems in unfamiliar areas. I need to properly understand the solutions, which again is challenging. I know that if I don't understand exactly what the code is doing and have confidence in the design and reliability, it will come back and bite me when I release it into the wild. A lesson learnt the hard way during many decades of programming.
It's amazing what one competent developer can do, and it's amazing how little a hundred devs end up actually doing when weighed down by bureaucracy. And let's not pretend even half of them qualify as competent, not to mention they probably don't care either. They get to work and have a 45-minute coffee break, move some stuff around on the Kanban board, have another coffee break, then lunch, then foosball, etc. And when they actually write some code, it's ass.
And sure, for those guys maybe LLMs represent a huge productivity boost. For me it's usually faster to do the work myself than to coax the bot into creating something acceptable.
The LLM added a bunch of extra formatting for emphasis and structure to what might originally have been a bit of a ramble, but obviously human-written. The comments absolutely lambasted that OP for being a hypocrite: complaining about their team using AI, but then seeing little problem with posting what was obviously an AI-generated question because they didn't deem their English skills good enough to ask the question directly.
I'm not going to pass judgement on this scenario, but I did think the entire encounter was a "fun" anecdote in addition to your comments.
Edit: wrods
For me it's always been the effort that's fun, and I increasingly miss that. Today it feels like I'm playing the same video game I used to enjoy with all the cheats on, or going back to an early level after maxing out my character. In some ways the game play is the same, same enemies, same map, etc, but the action itself misses the depth that comes from the effort of playing without cheats or with a weaker character and completing the stage.
What I miss personally is coming up with something in my head and having to build it with my own fingers with effort. There's something rewarding about that which you don't get from just typing "I want x".
I think this craving for effort is a very human thing, to be honest... It's why we bake bread at home instead of just buying it from a local bakery where it will realistically be twice as good. The enjoyment comes from the effort. I personally like building furniture, and although my furniture sucks compared to what you might be able to buy at a store, it's so damn rewarding to spend days working on something and then have a real, physical thing that you can use and that you built by hand.
I've never thought of myself as someone who likes the challenge of coding. I just like building things. And I think I like building things because building things is hard. Or at least it was.
As someone that started with Machine Code, I'm grateful for compiled -even interpreted- languages. I can’t imagine doing the kind of work that I do, nowadays, in Machine Code.
I’m finding it quite interesting, using LLM-assisted development. I still need to keep an eye on things (for example, the LLM tends to suggest crazy complex solutions, like writing an entire control from scratch, when a simple subclass, and five lines of code, will work much better), but it’s actually been a great boon.
I find that I learn a lot, using an LLM, and I love to learn.
Also, the code is not a means to an end. It's going to be run somewhere, doing stuff someone wants to do reliably and precisely. The overall goal was always to invest some programmer time and salary in order to free up more time for others, not for everyone to start babysitting stuff.
I think there needs to be a sea change in current LLM tech to make that no longer the case: either massively increased context sizes, so they can contain nearly a career's worth of learning (without the tendency to start ignoring that context, as happens at the larger end of the still-way-too-small-for-this context windows available today), or continuous training passes that allow direct integration of these "learnings" into the weights themselves, which might be theoretically possible today but requires many orders of magnitude more compute than is available, even if you ignore cost.
So I’m tempted to say that this is just a part of the economic system in general, and isn’t specifically linked to AI. Unless you’re lucky enough to grab a job that requires deep intellectual work, your day job will probably not challenge your mental abilities as much as a difficult college course does.
Sad but true, but unfortunately I don’t think any companies are paying people to think deeply about metaphysics (my personal favorite “thinking hard subject” from college.)
The biggest lesson I am learning recently is that technologists will bend over backwards to gaslight the public to excuse their own myopia.
You are going to tell me that the vibe coders care and read the code they merge with the same attention to detail and care that Linus has? Come on...
That's the key for me. People are churning out "full features" or even apps claiming they are dealing with a new abstraction level, but they don't give a fuck about the quality of that shit. They don't care if it breaks in 3 weeks/months/years or if that code's even needed or not.
Someone will surely come say "I read all the code I generate" and then I'll say either you're not getting these BS productivity boost people claim or you're lying.
I've seen people pushing out 40k lines of code in a single PR and have the audacity to tell me they've reviewed the code. It's preposterous. People skim over it and YOLO merge.
Or if you do review everything, then it's not gonna be much faster than writing it yourself unless it's extremely simple CRUD stuff that's been done a billion times over. If you're only using AI for these tasks maybe you're a bit more efficient, but nothing close to the claims I keep reading.
I wish people cared about what code they wrote/merged like Linus does, because we'd have a hell of a lot less issues.
So, we have an inflation of worthless stuff being done.
If you really care about using the hardware effectively, optimizing the code is so much more than what you describe.
My mindset this year: I am an engineering manager to a team of developers
If the pace of AI improvement continues, my mindset next year will need to be: I am CEO and CTO.
I never enjoyed the IC -> EM transition in the workplace because of all the tedious political issues, people management issues and repetitive admin. I actually went back to being an IC because of this.
However, with a team of AI agents, there's less BS, and less holding me back. So I'm seeing the positives - I can achieve vastly more, and I can set the engineering standards, improve quality (by training and tuning the AI) and get plenty of satisfaction from "The Builder" role, as defined in the article.
Likewise I'm sure I would hate the CEO/CTO role in real life. However, I am adapting my mindset to the 2030s reality, and imagining being a CEO/CTO to an infinitely scalable team of Agentic EMs who can deliver the work of hundreds of real people, in any direction I choose.
How much space is there in the marketplace if all HN readers become CEOs and try to launch their own products and services? Who knows... but I do know that this is the option available to me, and it's probably wise to get ahead of it.
We’ve created formal notation to shorten writing. And computation is formal notation that is actually useful. Why write pages of specs when I could write a few lines of code?
Sometimes you want a utilitarian teapot to reliably pour a cup of tea.
The materials and rough process for each can be very similar. One takes a master craftsman and a lot of time to make and costs a lot of money. The other can be made on a production line and the cost is tiny.
Both are desirable, for different people, for different purposes.
With software, it's similar. A true master knows when to get it done quick and dirty and when to take the time to ponder and think.
Your contrast is an either/or that, in the real world, does not exist.
Take content written by AI, prompted by a human. A lot of it is slop and crap, and there will be more slop and crap with AI than before. But that was also the case when the medium changed from hand-written to printed books. And when paper and printing became cheap, we had slop like those 10-cent Western or Romance novellas.
We also still had Goethe, still had Kleist, still had Grass (sorry, very German centric here).
We also have Inception vs. the latest sequel of any Marvel franchise.
I have seen AI-written but human-prompted short stories that made people well up and find ideas presented in a light not seen before. And I have seen AI-generated stories that I want to purge from my brain.
It isn't the tool - it is the one wielding it.
Question: Did photoshop kill photography? Because honestly, this AI discussion to me sounds very much like the discussion back then.
I use LLMs a lot. They're ridiculously cool and useful.
But I don't think it's fair to categorize anybody as "egotistical". I enjoy programming for the fun puzzley bits. The big puzzles, and even often the small tedious puzzles. I like wiring all the chunks up together. I like thinking about the best way to expose a component's API with the perfect generic types. That's the part I like.
I don't always like "delivering value" because usually that value is "achieve 1.5% higher SMM (silly marketing metric) by the end of the quarter, because the private equity firm that owns our company is selling it next year and they want to get a good return".
I think the biggest beef I have with engineers is that for decades they more or less reduced the value of other lumps of clay, and now they want to throw up their arms when it's theirs.
If you pardon the analogy, watch how Japanese make a utilitarian teapot which reliably pours a cup of tea.
It's more complicated and skill-intensive than it looks.
In both realms, making an artistic vase can be simpler than a simple utilitarian tool.
AI is good at making (arguably poor-quality) artistic vases via its stochastic output, not highly refined, reliable tools. Tolerances on the latter are tighter.
As someone that’s a bit of a fence-sitter on the matter of AI, I feel that using it in the way that OP did is one of the less harmful or intrusive uses.
Because everyone under him knows that a mistake big enough is a quick way to unemployment or legal actions. So the whole team is pretty much aligned. A developer using an LLM may as well try to herd cats.
I think there is just a class of people who think that you cannot get 'MacBook' quality with an LLM. I don't know why I try to convince them; it's not to my benefit.
There is a difference between cooking and putting a ready meal into the microwave.
Both satisfy your hunger but only one can give some kind of pride.
"Write a gangster movie that I like", instead of "...a movie this other guy likes".
But because this is not the case, we appreciate Tarantino more than we appreciate gangster movies. It is about the process.
On top of that, business settings/environments will always lean towards the option that provides the largest productivity gains, without awareness of the long-term consequences for the worker. In that environment, not using it is not an option, unless you want to be unemployed.
Where does that leave us? Are we supposed to find the figurative "gym for problem solving" the same way office workers workout after work? Because that's the only solution I can think of: Trading off my output for problem solving outside of work settings, so that I can improve my output with the tool at work.
With LLMs and engineers often being forced by management to use them, everyone is pushed to become like the second group, even though it goes against their nature. The former group see the part as a means, whereas the latter view it as the end.
Some people love the craft itself and that is either taken away or hollowed out.
As a beginner I often thought about a problem for days before finding a solution, but this happened less and less as I improved
I got better at exploiting the things I knew, to the point where I could be pretty confident that if I couldn't solve a problem in a few hours it was because I was missing some important piece of theory
I think spending days "sitting with" a problem just points at your own weakness in solving some class of problems.
If you are making no articulable progress whatsoever, there is a pathology in your process.
Even when working on my thesis, where I would often "get stuck" because the problem was far beyond what I could solve in one sitting, I was still making progress in some direction every time.
It killed an aspect of it: the film processing in the darkroom. Even before digital cameras were ubiquitous, it was standard to get a scan before doing any processing digitally. Chemical processing was reduced to the minimum necessary.
I like being useful, and I'm not yet sure how much of what I'm creating with AI is _me_, and how much it is _it_. It's hard to derive as much purpose/meaning from it compared to the previous reality where it was _all me_.
If I compare it to a real-world problem: e.g. when I unplug the charging cable from my laptop at my home desk, the cable slides off the table. I could order a solution online that fixes the problem and be done with it, but I could also think about how _I_ can solve the problem with what I already have in my spare-parts box. Trying out different solutions makes me think, and I'm way happier with the end result. Every time I unplug the cable now and it stays in place, it reminds me of _my_ labour and creativity (and also of the cable not sliding down the table -- but that's beside the point).
Yes, some things are better when manufactured in highly automated ways (like computer chips), but their design has been thoroughly tested and before shipping the chips themselves go through lots of checks to make sure they are correct. LLM code is almost never treated that way today.
That's it, yeah. It sucks but it's part of the job. It makes you a better engineer.
You're absolutely right that this isn't sustainable, however. In one of my earlier jobs - specifically, the one that trained me up to become the senior engineer I am now - we had "FedEx Fridays" (same-day delivery, get it?). In short, you got a single work day to work on something non-work-related, with one condition: you had to have a deliverable by the end of the day. I cannot overstate how useful having something like this in place at a business is for junior devs. The trick is convincing tech businesses that this kind of "training" is a legitimate overhead - the kinds of businesses that are run by engineers get this intuitively. The kind that have a non-technical C-suite, less so.
I think this makes a perfect counter-example. Because this structure is an important reason for YC to exist and what the HN crowd often rallies against.
Such large companies generally don't make good products this way. Most, today, just buy companies that built something in the vein the GP cited: a creative process, with pivots, learnings, more pivots, failures, or - when successful - success most often in an entirely different form or area than originally envisioned. Even the large tech monopolies of today originated like that. Zuckerberg never envisioned VR worlds, photo-sharing apps, or chat apps when he started the campus-photobook website. Bezos did not have some 5D-chess blueprint that included the largest internet-infrastructure-for-hire when he started selling books online.
If anything, this only strengthens the point you are arguing against: a business that operates by a "head" "specifying what they want" and having "something" figure out how to build the parts, is historically a very bad and inefficient way to build things.
More to the piece itself: I know some crusty old embedded engineers who feel the same way about compilers as this guy does about AI. It doesn't invalidate his point, but it's food for thought.
But to be honest, those hours spent structuring thoughts are so important to making things work. Or you might as well get out of the way and let AI do everything, why even pretend to work when all we're going to do is just copy and paste things from AI outputs?
Yes, but you’re not taking into account what actually caused this evolution. At first glance, it looks like exponential growth, but then we see OpenAI (as one example) with trillions in obligations compared to 12–13 billion in annual revenue. Meanwhile, tool prices keep rising, hardware demand is surging (RAM shortages, GPUs), and yet new and interesting models continue to appear. I’ve been experimenting with Claude over the past few days myself. Still, at some point, something is bound to backfire.
The AI "bubble" is real, you don’t need a masters degree in economics to recognize it. But with mounting economic pressures worldwide and escalating geopolitical tension we may end up stuck with nothing more than those amusing Will Smith eating pasta videos for a while.
But understanding your weaknesses and working on them is huge, and I think most people just don't try to do it.
Being stuck for days is something to be overcome.
The next step would be being slow because you are trying out many different ideas and have no intuition for what the right one is.
I intentionally do not use AI though.
But I sympathize with the author. I enjoy thinking deeply about problems which is why I studied compsci and later philosophy, and ended up in the engineering field. I’m an EM now so AI is less of an “immediate threat” to my thinking habits than the role change was.
That said, I did recently pick up more philosophy reading again, just for the purpose of challenging my brain.
Doesn’t that prove the point? You could do that right now, and it would be absolute trash. Just like how right now we are nowhere close to being able to make great software with a single prompt.
I’ve been vibecoding a side project and it has been three months of ideating, iterating, refining and testing. It would have taken me immeasurably longer without these tools, but the end result is still 100% my vision, and it has been a tremendous amount of work.
The tools change, but the spirit only grows.
Contemplating the old RTFM, I started a new personal project called WTFM and spend time writing instead of coding. There is no agenda and no product goals.
There are so many interesting things in human generated computer code and documentation. Well crafted thoughts are precious.
If it takes you more than a few seconds or so to understand code an agent generated you’re going to make mistakes. You should know exactly what it’s going to produce before it produces it.
For those who have found a "flow state" with LLM agents, what's that like?
So that Excel spreadsheet that manages the entire sales funnel?
A couple of thoughts.
First, I think the hardness of the problems most of us solve is overrated. There is a lot of friction, tuning things, configuring things right, reading logs, etc. But are the problems most of us are solving really that hard? I don't think so, except for those few doing groundbreaking work or sending rockets to space.
Second, even thinking about easier problems is good training for the mind. There's that analogy that the brain is a "muscle", and I think it's accurate. If we always take the easy way out for the easier problems, we don't exercise our brains, and then when harder problems come up what will we do?
(And please, no replies of the kind "when portable calculators were invented...").
Some commentators dismissed this trend towards photography as simply a beneficial weeding out of second-raters. For example, the writer Louis Figuier commented that photography did art a service by putting mediocre artists out of business, for their only goal was exact imitation. Similarly, Baudelaire described photography as the “refuge of failed painters with too little talent”. In his view, art was derived from imagination, judgment and feeling but photography was mere reproduction which cheapened the products of the beautiful [23].
https://www.artinsociety.com/pt-1-initial-impacts.html#:~:te...
This essay captures that.
Even the pure artist, for whom utility may not seem to matter, manufactures meaning not just from creative exploration directly, but also from the difficulty (which can take many forms) involved in doing something genuinely new, and what they learn from that.
What happens to that when we even have “new” on tap?
In some ways, it's magical. e.g. I whipped up a web based tool for analyzing performance statistics of a blockchain. Claude was able to do everything from building the gui, optimizing the queries, adding new indices to the database etc. I broke it down into small prompts so that I kept it on track and it didn't veer off course. 90% of this I could have done myself but Claude took hours where it would have taken me days or even weeks.
Then yesterday I wanted to do a quick audit of our infra using Ansible. I first thought: let's try Claude again. I gave it lots of hints about where our inventory is, which ports matter, etc., but it was still grinding away for several minutes. I eventually Ctrl-C'ed and used a couple of one-liners that I wrote myself in a few minutes. In other words, I was faster than the machine in this case.
After the above, it makes sense to me that people may have conflicting feelings about productivity. e.g. sometimes it's amazing, sometimes it does the wrong thing.
I’m no less proud of what I built in the last three weeks using three terminal sessions - one with codex, one with Claude, and one testing everything from carefully designed specs - than I was when I first booted a computer, did “call -151” to get to the assembly language prompt on my Apple //e in 1986.
The goal then was to see my ideas come to life. The goal now is to keep my customers happy, get projects done on time and on budget while meeting requirements, and continue to have my employer put cash in my account twice a month - and formerly, put AMZN stock in my brokerage account at vesting.
Artisans in Japan might go to incredible lengths to create utilitarian teapots. Artisans who graduated last week from a 4-week pottery workshop will produce a different kind of quality, albeit still artisanal. $5.00 teapots from an East Asian mass-production factory will be very different from high-quality, mass-produced, upmarket teapots at a higher price. I have things in my house that fall into each of those categories (not all teapots, but different kinds of wares).
Sometimes commercial manufacturing produces worse tolerances than hand-crafting. Sometimes, commercial manufacturing is the only way to get humanly unachievable tolerances.
You can't simplify it into "always" and "never" absolutes. Artisan is not always nicer than commercial. Commercial is not always cheaper than artisan. _____ is not always _____ than ____.
If we bring it back to AI, I've seen it produce crap, and I've also seen it produce code that honestly impressed me (my opinion is based on 24 years of coding and engineering management experience). I am reluctant to make a call where it falls on that axis that we've sketched out in this message thread.
One must be conversant in abstractions that are themselves ephemeral and half-hallucinated. It's a question of what to cling to, what to elevate beyond possibly hallucinated rubbish. At some level it's a much faster version of the meatspace process, and it can be extremely emotionally uncomfortable and anarchic to many.
I used to think about my projects when in bed; now I listen to podcasts or watch YouTube videos before sleeping.
I think it has a much bigger impact than using our "programming calculator" as an assistant.
Yeah, you're certainly not the only one. For me the implementation part has always been a breeze compared to all the "communication overhead" so to speak. And in any mature system it easily takes 90% of all time or more.
Just let it try and solve an issue with your advanced SQLAlchemy query and see it burn. xD
"Most developers don't know the assembly code of what they're creating. When you skip assembly you trade the very thing you could have learned to fully understand the application you were trying to make. The end result is a sad simulacrum of the memory efficiency you could have had."
This level of purity-testing is shallow and boring.
It's bleak out there.
Now, the only reason I code and have been since the week I graduated from college was to support my insatiable addictions to food and shelter.
While I like seeing my ideas come to fruition, over the last decade my ideas were a lot larger than I could reasonably do over 40 hours without having other people working on projects I lead. Until the last year and a half where I could do it myself using LLMs.
Seeing my carefully designed spec, which includes all of the cloud architecture, get done in a couple of days with my hands on the wheel (work that would have taken at least a week with me doing some of it myself while juggling a couple of other people) is life-changing.
When you do this with an outsourced team, it can happen at most once per sprint, and with significant pushback, because there's a desire for them to get paid for their deliverable even if it's not what you wanted or suffers some other fundamental flaw.
But there's still a lot of programming out there which requires originality.
Speaking personally, I never was nor ever will be too interested in the former variety.
What I miss is having other people who like to think and who aren't always pushing for shallow results.
Most of the OP article also resonated with me as I bounce back and forth between learning (consuming, thinking, pulling, integrating new information) to building (creating, planning, doing) every few weeks or months. I find that when I'm feeling distressed or unhappy, I've lingered in one mode or the other a little too long. Unlike the OP, I haven't found these modes to be disrupted by AI at all, in fact it feels like AI is supporting both in ways that I find exhilarating.
I'm not sure OP is missing anything because of AI per se, it might just be that they are ready to move their focus to broader or different problem domains that are separate from typing code into an IDE?
For me, AI has allowed me to probe into areas that I would have shied away from in the past. I feel like I'm being pulled upward into domains that were previously inaccessible.
I use Claude on a daily basis, but still find myself frequently hand-writing code as Claude just doesn't deliver the same results when creating out of whole cloth.
Claude does tend to make my coarse implementations tighter and more robust.
I admittedly did make the transition from software only to robotics ~6 years ago, so the breadth of my ignorance is still quite thrilling.
Which spec? Is there a spec that says if you use a particular set of libraries you’d get less than 10 millisecond response? You can’t even know that for sure if you roll your own code, with no 3rd party libraries.
Bugs are, by definition, issues that arise when developers expect their code to do one thing but it does another, because of an unforeseen combination of factors. Yet we are all OK with that. That’s why we accept AI code: it works well enough.
Your work is influenced by the medium by which you work. I used to be able to tell very quickly if a website was developed in Ruby on Rails, because some approaches to solve a problem are easy and some contain dragons.
If you are coding in clay, the problem is getting turned into a problem solvable in clay.
The challenge if you are directing others (people or agents) to do the work is that you don't know if they are taking into account the properties of the clay. That may be the difference between clean code - and something which barely works and is unmaintainable.
I'd say in both cases of delegation, you are responsible for making sure the work is done correctly. And, in both cases, if you do not have personal experiences in the medium you may not be prepared to judge the work.
I still love the work, but to say I’m disillusioned by the industry is an understatement.
I was working for months on an entity resolution system at work. I inherited its basic algorithm: Locality Sensitive Hashing. Basically, breaking up a word into little chunks and comparing the chunk fingerprints to see which strings matched(ish). But it was slow, blew through memory constraints, and was full of false negatives (didn't find matches).
Of course I had Claude dig through this to help me, and it would find things, and would have solutions super fast - sometimes for things where I couldn't immediately comprehend from its diff how it got there.
But here are a few things that helped me get on top of lazy mode. Basically, use Claude in slow mode, not lazy mode:
1. Everyone wants one-shot solutions, but do the opposite: focus on fixing one small step at a time, so you have time to grok what the frig just happened.
2. Instead of asking Claude for code immediately, ask for architectural thoughts. Not Claude "plans", but choices: "Claude, this SQL model is slow and grows out of our memory box. What options are on the table to fix this?" Then go back and forth getting the pros and cons of the fixes; don't just ask "make this faster". Of course this is the slower way to work with Claude, but it will get you to a solution you understand more deeply and avoid the hallucinations where it decides "oh, just add WHERE 1!=1 to your SQL and it will be super fast".
3. Sign yourself up to explain what you just built. Not just to get through a code review: now you are going to run a lunch-and-learn to teach others how the algorithms or code you just wrote work. You had better believe you are going to force yourself to internalize the stuff Claude came up with easily. I gave multiple presentations all over our company, and to our acquirers, on how this complicated thing worked. I HAD TO UNDERSTAND. There's no way I could show up and be like "I have no idea why we wrote that algorithm that way".
4. Get Claude to teach it to you over and over and over again. If you spot a thing you don't really know yet, like what the hell this algorithm is doing, make it show you in agonizingly slow detail how the concept works. Didn't sink in? Do it again. And again. Ask it for the five-year-old explanation. Yes, we have a super smart, overconfident, and naive engineer here, but we also have a teacher we can berate with questions, who never tires of trying to teach us something, no matter how stupid we can be or sound.
Were there some lazy moments where I felt like I wasn't thinking? Yes. But using Claude in slow mode, I've learned the space of entity resolution faster and more thoroughly than I could have without it, and I feel like I actually, personally, did some inventing within it.
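To make the chunk-fingerprint idea above concrete, here is a minimal sketch (my own illustration, not the commenter's system) of matching strings via character shingles and Jaccard similarity; real LSH adds MinHash signatures and banding so you never compare every pair, and the 0.4 threshold below is arbitrary:

```python
# Naive shingle-based fuzzy matching. Real LSH hashes the shingle sets
# into short signatures and buckets them, avoiding the all-pairs loop.

def shingles(s: str, k: int = 3) -> set[str]:
    """Break a string into overlapping k-character chunks."""
    s = s.lower().strip()
    return {s[i:i + k] for i in range(max(1, len(s) - k + 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap of two shingle sets divided by their union."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

def matches(query: str, candidates: list[str], threshold: float = 0.4) -> list[str]:
    """Candidates whose shingle similarity to the query clears the threshold."""
    q = shingles(query)
    return [c for c in candidates if jaccard(q, shingles(c)) >= threshold]

# Example: both Acme variants match(ish); the unrelated name does not.
print(matches("Acme Corporation", ["ACME Corp.", "Acme Corporation Ltd", "Apex Industries"]))
```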
But at that point, will I even have the ability to distinguish a good solution from a bad one? How would I know, if I’ve been relying on AI to evaluate if ideas are good or not? I’d just be pushing mediocre solutions off as my own, without even realising that they’re mediocre.
Hence why a lot of software development is gluing libraries together these days.
I think perhaps moving the goal posts to demand better quality and performance might force people who rely on AI to "think hard". Like your app works fine, now make it load in under a second on any platform.
I'm not sure how you live and work in the US, but here in Sweden, in my experience, it's more focused on results than sitting at your desk 9-5.
So AI does enable me to take more free time, be outside more when the sun is out, because I finish my tasks faster.
I'm just afraid that managers will start demanding more, demand that we increase our output instead of our work life balance. But in that case I at least have the seniority to protest.
I was a software and systems developer on cool shit, but I realized I enjoyed solving hard problems more than how I solved them. That led me to a role that is about solving hard problems. Sometimes I still use coding to do it, but that's just one tool of many.
Then I tried to push 50,000 documents through it, and it crashed and burned like I suspected. It took one day to go from my second, more complicated but more scalable spec - where I didn't depend on an AWS managed service - to working, scalable code.
It would have taken me at least a week to do it myself
One of the benefits of LLM usage is to figure out the boundaries of your own knowledge and that of humanity's existing knowledge--at least for the LLM's training data.
Enumerating through existing options and existing solutions to problems gets you to the knowledge boundary sooner--where the real work begins! While faster with LLMs, I don't see this process as much different than bouncing ideas off of colleagues (and critiquing your own thoughts).
However, the difference is likely humans' unpredictable ability to apply creativity throughout the process...such that a new solution may arise at any point and leapfrog existing solutions/explanations. (Think Einstein taking known data from Lorentz, Michelson, Morley plus Maxwell's equations on light and coming up with special relativity.)
I so severely doubt this to the point I'd say this statement is false.
The further back you go, the more expensive and rare art was. Better-quality landscapes/portraits were exceptionally rare and really only commissioned by those with money, which again was a smaller portion of the population in the time before cameras. It's likely there are more high-quality paintings now per capita than there ever were in the past, and the issue is not production but exposure to the high-quality ones. Maybe this is what you mean by 'miss out'?
In addition, the general increase in wealth, coupled with the cost of art supplies dropping, opens up massive room for lower-quality art to fill the gap. In the past, canvas was typically more expensive, so sucky pictures would get painted over.
Just like people more, and have better meetings.
Life is what you make it.
Enjoy yourself while you can.
For me, thinking about an extremely technical TCS problem, for example, is my version of actively, tirelessly thinking hard. I'm logging a ton of observations, trying new ideas and hypotheses, using a mix of computer simulation and math to try and arrive at a concrete framing and answer.
On the other end of the spectrum, I have philosophy. It's definitely a different type of hard. Most of my "Aha!" moments come when I realize I've been strawmanning some argument and not actually understanding what the person is saying. Why is the person saying this, relative to what, why is this a new observation, etc. Things are so amorphous and you can tweak the problem parameters in so many ways, and it's really tempting to either be too fluid and pretend you understand the thinker (because it's a subset of some conception you already have), or be too rigid and dissolve the thinker as a category error / meaningless. I've never felt the same feeling as I did when doing TCS research, but the feeling was definitely hard thinking nonetheless.
In terms of extremely nitty-gritty technical things, like linker bullshit and Linux kernel programming, I'm much more familiar with those, and they are more about reading documentation (because the tool won't behave like you want it to) and iteration / testing (because... the tool won't behave like you want it to, so you need to make sure it behaves like you want it to!). This is also a type of thinking - I would call it hard in that the physiological response I have is similar to that of research in the very bad moments, but in terms of my lofty ideals, I don't want to call this hard... it's very "accidental" complexity, but it's what I get paid to do :/
At work, you have a huge idea space to consider, both problem and solution framings, mixing in "bullshit" constraints like business ones. You also throw in the real-time aspect of it, so I can't just either armchair on a problem for a month (unlike Philosophy) or deep dive on a problem for a month (unlike research). I'm technically doing the third type of programming right now, but we'll see how long that lasts and I get put on a new project.
I'm not even sure if there's a clean demarcation between any of these. These are certainly better than brainrotting youtube though.
That said, the framing feels a bit too poetic for engineering. Software isn't only craft, it's also operations, risk, time, budget, compliance, incident response, and maintenance by people who weren't in the room for the "lump of clay" moment. Those constraints don't make the work less human; they just mean "authentic creation" isn't the goal by itself.
For me the takeaway is: pursue excellence, but treat learning as a means to reliability and outcomes. Tools (including LLMs) are fine with guardrails, clear constraints up front and rigorous review/testing after, so we ship systems we can reason about, operate, and evolve (not just artefacts that feel handcrafted).
It's a different type of thinking in my opinion, more "systems" thinking.
They’re destroying the only thing I like about my job - figuring problems out. I have a fundamental impedance mismatch with my company’s desires, because if someone hands me a weird problem, I will happily spend all day or longer on that problem. Think, hypothesize, test, iterate. When I’m done, I write it up in great detail so others can learn. Generally, this is well-received by the engineer who handed the problem to me, but I suspect it’s mostly because I solved their problem, not because they enjoyed reading the accompanying document.
There's no need to create another serialization format or a JavaScript framework. You now have more time to direct your focus onto those problems that haven't yet been solved, or at least haven't been solved well.
A question that might be hard to digest: was that "thinker" really a thinker, or a well-disguised re-inventor?
I wholeheartedly disagree but I tend to believe that's going to be highly dependent on what type of developer a person is. One who leans towards the craftsmanship side or one who leans towards the deliverables side. It will also be impacted by the type of development they are exposed to. Are they in an environment where they can even have a "lump of clay" moment or is all their time spent on systems that are too old/archaic/complex/whatever to ever really absorb the essence of the problem the code is addressing?
The OP's quote is exactly how I feel about software. I often don't know exactly what I'm going to build. I start with a general idea and it morphs towards excellence by the iteration. My idea changes, and is sharpened, as it repeatedly runs into reality. And by that I mean, it's sharpened as I write and refactor the code.
I personally don't have the same ability to do that with code review because the amount of time I spend reviewing/absorbing the solution isn't sufficient to really get to know the problem space or the code.
My work’s codebase is 30 years of never-refactored C++. It takes an exceptional amount of focus and thinking to get even a cursory understanding of anything a particular method or class does or why it’s there.
But for languages like C, I agree with you (as long as function pointers aren't abused).
So... you didn't have to do that prior to using agents?
It might be difficult to figure out what that is, and some folks will fail at it. It might not be code though.
Not every programming task needs to be a research project. There are plenty of boring business problems that need the application of computing to automate them. And it’s been a decent way to make a living for a while.
It’s great getting a good problem to chew on.
I try to keep a small percentage of my time occupied by one or two good ones. If I’m always bored it’s a sign I could be doing better. And I like being at my best.
Here are some clipped comments that I pulled from the overall post
> I don't get it.
> I'm using LLMs to code and I'm still thinking hard.
> I don't. I miss being outside, in the sun, living my life. And if there's one thing AI has done it's save my time.
> Then think hard? Have a level of self discipline and don’t consistently turn to AI to solve your problems.
> I am thinking harder than ever due to vibe coding.
> Skill issue
> Maybe this is just me, but I don't miss thinking so much.
The last comment pasted is pure gold, a great one to put up on a wall. Gave me a right chuckle thanks!!!
I've not seen this take yet, but this is exactly how I feel. I do not yet know what I want to do, and parts of my personality are no longer satisfied by coding. I'm thinking we need some kind of community of people like us where we can discuss these things.
I bring these up with others, and I find that most people around me are just builders.
When I play sudoku with an app, I like to turn on auto-fill numbers, and auto-erase numbers, and highlighting of the current number. This is so that I can go directly to the crux of the puzzle and work on that. It helps me practice working on the hard part without having to slog through the stuff I know how to do, and generally speaking it helps me do harder puzzles than I was doing before. BTW, I’ve only found one good app so far that does this really well.
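In case "auto-fill numbers" is unfamiliar: I read it as automatic pencil-mark candidates (my assumption), and computing them is exactly the mechanical "stuff I know how to do" that the app takes over, roughly like this sketch:

```python
# Candidate digits for one cell of a 9x9 sudoku grid (0 = empty).
# An auto-fill feature is essentially this, run for every empty cell
# and kept up to date as numbers are placed or erased.

def candidates(grid: list[list[int]], row: int, col: int) -> set[int]:
    """Digits not already used in the cell's row, column, or 3x3 box."""
    used = set(grid[row])                                   # same row
    used |= {grid[r][col] for r in range(9)}                # same column
    br, bc = 3 * (row // 3), 3 * (col // 3)                 # top-left of the box
    used |= {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
    return set(range(1, 10)) - used
```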
With AI it’s easier to see there are a lot of problems that I don’t know how to solve, but others do. The question is whether it’s wasteful to spend time independently solving that problem. Personally I think it’s good for me to do it, and bad for my employer (at least in the short term). But I can completely understand the desire for higher-ups to get rid of 90% of wheel re-invention, and I do think many programmers spend a lot of time doing exactly that; independently solving problems that have already been solved.
Even if you "think just as hard" the act of physically writing things down is known to improve recall, so you're skipping a crucial step in understanding.
And when I review code, it's a different process than writing code.
These tradeoffs may be worth it, because we can ask the tools to analyze things for us just as easily as we can ask them to create things for us, but your own knowledge and understanding of the system is absolutely being degraded when working this way.
I don't just write code; I also do network engineering and network architecture. There is stuff that, at least until now, I cannot vibe-code.
I have a pet project where I don't use AI. It moves slowly, but coding is my hobby.
AI isn’t the only thing that changes how you attend to life.
I think hard about this with a notebook and a pencil and a coffee. And I spend weeks and sometimes months thinking about this. I go deep. And then the actual coding is just the grunt work. I don’t hate it but I don’t love it. I couldn’t care less what language it’s written in as long as it accomplishes my goal. So AI works great for me in this step.
I think you can still use AI and think deeply. It just depends on your mindset and how you use it.
With music this is much more pronounced because most people are musically illiterate, so even basic mistakes made while dragging characteristics over diffs become invisible. It's an interesting phenomenon, I agree, but it says more about the lack of taste and illiteracy of the common individual.
But on the point of "thinking hard": with music and artistic production in general, individuals (humans with soul, not NPCs) crave ideas and perspective. It is the play, the relationship between ideas that are hard to vocalize and describe but can be provocative. Because we cannot describe or understand them, we have no choice other than to provoke a similar contemplation in another.
But make no mistake, nobody is enjoying llm slop. They have fantasies that now they can produce something of value, or delegate this production. If this becomes true, instantly they lose and everyone goes directly to the source.
Art is specifically about communicating the inconceivable, cannot be delegated. If the tool is sufficient to produce art, then the expression is of the tool itself, now they ARE.
> "Long story short, I realized that my struggle stems from my lack of training"
you say, and then you explain exactly where your struggle is, and it's not lack of training:
> "it's a fear and urge of closing the game and leaving it "for later" the moment I discover that I've either done something wrong"
There you are. It's that fear. Fear comes from your brain modeling the world and predicting the future, and predicting a bad future and generating fear (or anxiety, or other bad feelings) to change your behaviour and avoid that future. So find why "doing something wrong" is making you predict a bad future, what bad future, and then find a way to fix that where you can be wrong but it isn't the end of your world, and then this fear and urge won't appear anymore.
1) one of those things like "finding out I've done something wrong" happens, or you imagine it happening.
2) you imagine some kind of negative future event that follows on from doing something wrong; this is a habit of thinking that you learned, so it might be a fast, almost sub-conscious flicker of ideas and thoughts.
3) from those, your brain generates bad feelings (feelings are a high level sweeping way to change our behaviour without thinking about tons of details).
4) you get the urge to push it away and close the game, so you can avoid those future events happening, so that can save you from future badness, bad feelings calm down (but different ones may appear like guilt, shame, inadequacy, etc).
5) you don't consciously notice this happening, so you make up some other reason that sounds nice and plausible and doesn't involve changing anything ("lack of training").
6) repeat this mental habit for a long time, maybe the rest of your life.
> "postponed the inevitable exercise of a very useful mental habit: To navigate uncertainty, pause and reflect, plan, evaluate a trade-off or 2 here and there."
That's not where your problem is, you aren't lacking the ability to evaluate a trade-off, you're lacking the ability to go wrong. Being able to evaluate any situation so you never ever go wrong isn't possible, that can't be the answer. The only answer can be from becoming okay with going wrong [I hope you will feel an automatic rejection here, how can it possibly be OK to be wrong and screw something up? I don't want to be OK with going wrong! See?].
Step one of a fix is to focus your attention on "it's a fear and urge of closing the game and leaving it "for later" the moment I discover that I've either done something wrong" and push into that. Remember or find a specific example that makes the feeling come up - which task going wrong brings up that feeling - and write it down.
Step two, look at your thoughts at that moment, what are you telling yourself will happen next that is so automatic and habitual and reflex and fast that you can barely notice it happening, but is basically the horrible future you're always worrying about? [If it helps, imagine a cartoon character who is just like you, in that "oh no I screwed up" situation with a thought bubble above their head, and write in what they are thinking which is oh so very relatable to you].
It will be some typical human thing like "it will prove I'm not smart and my fiance or parents won't love me" or "I will become unemployed and homeless and get sick on the streets and die" or "it will prove my lifelong fear that I don't deserve respect and am inferior to everyone else" but the details will be unique to you and what you fear and worry about.
And the next Dr Burns-specific step is to see why you are holding onto that pattern of thought: it's not just stopping you from making progress in projects, it's got some silver lining that is protecting and upholding some other ideals you value, and you don't want to let this pattern change and let go of the fear if it means losing something else equally or more valuable (e.g. "if I don't fear becoming homeless then I won't push myself to work harder and will waste my life as a drifter", or what I said earlier, "I don't want to be okay with going wrong, people who screw things up and don't care are LOSERS who make everything worse!"; again, the details will be specific to you).
And then to do some therapy techniques to work out how to unpick all this, and change it, in a way that keeps the ideals you want, and releases the sticking points you don't want.
> "I cannot imagine how someone would claim that removing an exercise from my daily gym visit will not result in weaker muscles."
If you do weighted abs exercises first, then you can't do good squats because your abs hurt, so removing the abs work might be an overall gain. If you do too much every day so you can't recover properly in one night of rest, removing some of the more intense work might be an overall gain. If you try to do too many exercises in a rush so you can't do a good job on any of them, removing one so you can do fewer, at better quality, might be an overall gain. If you hate one exercise and it puts you in a bad mood and every month or two you get a minor injury from it that knocks your progress back, removing it might be a mood boost and an overall gain. If you remove a small targeted exercise and replace it with a larger compound exercise, it might be an overall gain.
"[I]f a scientist proposes an important question and provides an answer to it that is later deemed wrong, the scientist will still be credited with posing the question. This is because the framing of a fundamentally new question lies, by definition, beyond what we can expect within our frame of knowledge: while answering a question relies upon logic, coming up with a new question often rests on an illogical leap into the unknown."
You can imagine the artisans who made shirts saying the exact same thing as the first textile factories became operational.
Humans have been coders in the sense we mean for a matter of decades at most - a blip in our existence. We’re capable of far more, and this is yet another task we should cast into the machine of automation and let physical laws do the work for us.
We’re capable of manipulating the universe into doing our bidding, including making rocks we’ve converted into silicon think on our behalf. Making shirts and making code: we’re capable of so much more.
90-99% of programming is a waste of time. Most apps today have less than a single spreadsheet page of actual business logic. The rest is boilerplate. Conjuring up whatever arcane runes are needed to wake a slumbering beast made of anti-patterns and groupthink.
For me, AI offers the first real computer that I've had access to in over 25 years. Because desktop computing stagnated after the 2000 Dot Bomb, and died on the table after the iPhone arrived in 2007. Where we should have symmetric multiprocessing with 1000+ cores running 100,000 times faster for the same price, we have the same mediocre quad core computer running about the same speed as its 3 GHz grandfather from the early 2000s. But AI bridges that divide by recruiting video cards that actually did increase in speed, albeit for SIMD which is generally useless for desktop computing. AI liberates me from having to mourn that travesty any longer.
I think that people have tied their identity to programming without realizing that it's mostly transcribing.
But I will never go back to manual entry (the modern equivalent of punch cards).
If anything, I can finally think deeply without it costing me everything. No longer having to give my all just to tread water as I slowly drown in technical debt and deadlines which could never be met before without sacrificing a part of my psyche in the process.
What I find fascinating is that it's truly over. I see so clearly how networks of agents are evolving now, faster than we can study, and have already passed us on nearly every metric. We only have 5-10 years now until the epiphany, the Singularity, AGI.
It's so strange to have worked so hard to win the internet lottery when that no longer matters. People will stop buying software. Their AI will deliver their deepest wish, even if that's merely basic resources to survive, that the powers that be deny us to prop up their fever dream of late-stage crony capitalism under artificial scarcity.
Everything is about to hit the fan so hard, and I am so here for it.
Frameworks and compilers are designed to be leak-proof abstractions. Any way in which they deviate from their abstract promise is a bug that can be found, filed, and permanently fixed. You get to spend your time and energy reasoning in terms of the abstraction because you can trust that the finished product works exactly the way you reasoned about at the abstract level.
LLMs cannot offer that promise by design, so it remains your job to find and fix any deviations from the abstraction you intended. If you fell short of finding and fixing any of those bugs, you've just left yourself a potential crisis down the line.
[Aside: I get why that's acceptable in many domains, and I hope in return people can get why it's not acceptable in many other domains]
All of our decades of progress in programming languages, frameworks, libraries, etc. has been in trying to build up leak-proof abstractions so that programmer intent can be focused only on the unique and interesting parts of a problem, with the other details getting the best available (or at least most widely applicable) implementation. In many ways we've succeeded, even though in many ways it looks like progress has stalled. LLMs have not solved this, they've just given up on the leak-proof part of the problem, trading it for exactly the costs and risks the industry was trying to avoid by solving it properly.
There can be. But you’d have to map the libraries to opcodes and then count the cycles. That’s what people do when they care about that particular optimization. They measure and make guarantees.
A silly example to illustrate the kind of guy I am: when I'm watching a show or movie, I'll often wonder where I've seen an actor before. A "normal" person, like my wife, would just look it up on IMDB and be done with it. But I almost always insist on rifling through all the dustiest corners of my brain to figure it out. Even if it takes me a day or two of thinking about it off and on. Because to me, the satisfaction of doing it myself is worth it.
You are advocating for a particular (more inclusive) definition for 'thinker' which clashes with the author's, but his is equally valid. You're both just gesturing at different concepts and suggesting they should be tagged to that word.
OP raises a particular way to classify something about personalities, says he finds it quite interesting/discriminative, and calls that kind of personality a "thinker". You instead consider a "thinker" a broader category.
That feels like an empty disagreement (nobody is right on such matters) - the real debatable question of substance is whether the _concept_ OP is gesturing at has interesting discriminative power. That concept is something like "personalities which seem to value the act of thinking through a problem/problem solving itself rather than downstream result".
I would very much like to know the kind of app you’ve seen. It’s very hard to see something like mpv, calibre, abiword, cmus,… through that lens. Even web apps like forgejo, gonic, sr.ht, don’t fit into that view.
But really, most of us who personally feel sad about the work being replaced by LLMs can still act reasonable, use the new tooling at work like a good employee, and lament about it privately in a blog or something.
Most comments in the thread are missing that critical point. Yes you are achieving the end goal, perhaps faster. And yes, built projects (perhaps worse quality) are still better than not built projects.
But take the home cooking vs ordering at restaurant example:
At a restaurant you can prompt for exactly what you want to eat and it will be made for you without you actually having to do it. When the food comes out you can taste it and notice it is missing some flavor. Problem is, you don't know what is missing. If you are knowledgeable about the dish, you can prompt for additional spices or flavours.
When I cook, I try all the ingredients before I add them and then taste the result so I know how an addition changes the final result.
I am now a much better cook because of this because I can make substitutions on the fly. Dish missing sweetness? Carrots, baby red peppers, beets etc can all substitute - never even reach for sugar. Already added a lot of salt but still feels like more needed? Add sour flavours like lemon juice.
Sure, reliance on AI will end up with more things built but you'll have a generation of "cooks" that don't know why you add a bay leaf or two to soup, except that it's always in recipes.
In psychological terms, he's saying he has a need to solve hard problems in order to validate his identity and make himself feel good. At some point in his past he experienced some psychological trauma, and this hard-problem-defeating became his coping mechanism.
"That satisfaction is why software engineering was initially so gratifying."
He became a software engineer to gratify his need to solve hard problems, to validate his identity, and make himself feel good. If he stops needing to engineer difficult software, there goes his identity, his self-worth, his good feeling.
"But recently, the number of times I truly ponder a problem for more than a couple of hours has decreased tremendously. Yes, I blame AI for this."
When he runs up against something that takes away this thing that validates him, he feels de-valued. Rather than recognize that AI is making his life easier, freeing him up from mental labor, he's experiencing it as a loss, almost an attack.
"If I can get a solution that is “close enough” in a fraction of the time and effort, it is irrational not to take the AI route. And that is the real problem: I cannot simply turn off my pragmatism."
Now this link of hard work with his identity is becoming a problem. He's going to feel bad because he doesn't know how to deal with his life being easier now. This is a reason to address it head on with therapy, and a re-evaluation of what gives him value as a person, so that having an easier life doesn't feel bad.
Isn't this also an overstatement, and the problem is worse? That is, the code being handed back is a great prototype, but it needs polishing/finishing and is ignorant of obvious implicit edge cases unless you explicitly enumerate all of them in your prompts.
For me, the state of things reminds me of a bad job I had years ago.
I worked with a well-regarded, long-tenured, but truculent senior engineer who was immune to feedback due to his seniority. He committed code that either didn't run, didn't pass tests, or implemented only the most obvious, robotically literal happy-path interpretation of requirements.
He was however, very very fast... underbidding teammates on time estimates by 10x.
He would hand back the broken prototype and we'd then spend the 10x time making his code actually something you can run in production.
Management kept pushing this because he had a great reputation, promised great things, and every once in a while did actually deliver stuff fast. It took years for management to come around to the fact that this was not working.
But it is also true that most programming is tedious and hardly enriching for the mind. In those cases, LLMs can be a benefit. When you have identified the pattern or principle behind a tedious change, an LLM can work like a junior assistant, allowing you to focus on the essentials. You still need to issue detailed and clear instructions, and you still need to verify the work.
Of course, the utility of LLMs is a signal that either the industry is bad at abstracting, or that there's some practical limit.
If I wanted to work on electric power systems I would have become an electrician.
(The transition is happening.)
For instance, I think about the pervasive interstate overpass bridge. There was a time long ago when building bridges was a craft. But now I see like ten of these bridges every day, each of which is better - in the sense of how much load they can support and durability and reliability - than the best that those craftsmen of yore could make.
This doesn't mean I'm in any way immune to nostalgia. But I try to keep perspective, that things can be both sad and ultimately good.
I personally think that we're not really done evolving, and to call it quits today would leave a lot of efficiency and productivity on the table.
The risk of LLMs laying more of these bricks isn't just loss of authenticity and less human elements of discovery and creation, it's further down the path of "there's only one instruction manual in the Lego box, and that's all the robots know and build for you". It's an increased commodification of a few legacy designers' worth of work over a larger creative space than at first seems apparent.
The hard problems should be solved with our own brains, and it behooves us to take that route so we can not only benefit from the learnings, but assemble something novel so the business can differentiate itself better in the market.
For all the other tedium, AI seems perfectly acceptable to use.
Where the sticking point comes in is when CEOs, product teams, or engineering leadership put too much pressure on using AI for "everything", in that all solutions to a problem should be AI-first, even if it isn't appropriate—because velocity is too often prioritized over innovation.
And worse: with few opportunities to grow their skills from rigorous thinking as this blog post describes. Tech workers will be relegated to cleaning up after sloppy AI codebases.
Granted, you would learn a lot more if you had pieced your ideas together manually, but it all depends on your own priorities. The difference is, you're not stuck cleaning up after someone else's bad AI code. That's the side to the AI coin that I think a lot of tech workers are struggling with, eventually leading to rampant burnout.
Software developers can use the exact same "lego block" abstractions ("this code just multiplies two numbers") and tell very different stories with it ("this code is the formula for force power", "this code computes a probability of two events occurring", "this code gives us our progress bar state as the combination of two sub-processes", etc).
LLMs have only so many "stories" they are trained on, and so many ways of thinking about the "why" of a piece of code rather than mechanical "what".
What he told me is that he loves thinking about the design of the code in his head, picking the best piece for each part of "the machine" and assembling the design in his head.
After doing that he loathed the act of translating that pure design into code. He told me it felt like pushing all that design through a thin tube through sheer force against syntax, wrong library versions, compiler errors, complex IDEs...
So for him, this is the best scenario possible.
I'm more of a builder, but after talking with him and reading the OP post, maybe thinkers come in various shapes.
It isn't an abstraction like assembly -> C. If you code something like: extract the raw audio data from an audio container, it doesn't matter if you write it in assembly, C, Javascript, whatever. You will be able to visualize how the data is structured when you are done. If you had an agent generate the code the data would just be an abstraction.
It just isn't worth it to me. If I am working with audio and I get a strong mental model for what different audio formats/containers/codecs look like who knows what creative idea that will trigger down the line. If I have an agent just fix it then my brain will never even know how to think in that way. And it takes like... a day...
So I get it as an optimized search engine, but I will never just let it replace understanding every line I commit.
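As a concrete (and deliberately simplified) illustration of that kind of exercise, here is a rough sketch of pulling the raw samples out of a canonical PCM WAV container; real files carry more chunk types and encodings, and none of this comes from the original comment:

```python
# Minimal WAV (RIFF) reader: a RIFF header, an "fmt " chunk describing
# the audio, and a "data" chunk holding raw PCM samples. Writing even
# this much by hand forces you to see how the container is laid out.

import struct

def read_wav(path: str):
    with open(path, "rb") as f:
        riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
        assert riff == b"RIFF" and wave == b"WAVE", "not a RIFF/WAVE file"
        channels = sample_rate = bits = None
        while True:
            header = f.read(8)
            if len(header) < 8:
                raise ValueError("no data chunk found")
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            if chunk_id == b"fmt ":
                fmt = f.read(chunk_size)
                _codec, channels, sample_rate, _, _, bits = struct.unpack("<HHIIHH", fmt[:16])
            elif chunk_id == b"data":
                return channels, sample_rate, bits, f.read(chunk_size)
            else:
                f.seek(chunk_size + (chunk_size & 1), 1)  # skip unknown chunk (chunks are padded to even length)
```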
I know Ralphs like to do all the prompting and no thinking.
Will a company pay me more for knowing those details? Will I be more effectively able to architect and design solutions that a company will pay my employer to contract me to do, and for which my company pays me? They pay me decently not because I “codez real gud”. They pay me because I can go from an empty AWS account, an empty repo, and ambiguous customer requirements to a working solution (after spending time talking to a customer), to a full, well-thought-out architecture + code, on time, on budget, and meeting requirements.
I am not bragging; I’m old, and those are table stakes for being able to stay in this game for 3 decades.
I actually had to think really, really hard to keep up with the idiot savant as it cranked out code.
Correctness was extremely important for this feature. Claude would consistently make subtle mistakes, and I needed to catch them to keep things from going off the rails. I could have done it myself, but it would have taken MUCH longer.
I essentially compressed a week’s worth of work into a few hours, and my brain paid the price.
So yeah. You can use AI to replace your thinking, or you can use it to push yourself to your max potential.
The idea that you lose a ton of knowledge when you experience things through intermediaries is an old one.
LLM-aided coding is not a higher level tool. It is not writing in C vs writing in assembly. It is closer to asking other people to do something for you and (supposedly, hopefully, although how many people really do it?) reviewing the result.
The thing is, some people already disliked the thinking involved in programming and are welcoming these tools. That's fine, but you don't get to equate it with programming.
The first principle is that you must not fool yourself--and you are the easiest person to fool. So you have to be very careful about that.
- Feynman
Mainländer’s statement—“God has died and his death was the life of the world”—is not a metaphor for cultural decline, cognitive atrophy, or the loss of intellectual depth. It is a literal metaphysical claim. In Mainländer’s philosophy, the Absolute unity of being actively annihilates itself, and the existence of the world is the irreversible consequence of that ontological self-destruction. The death he speaks of is not contingent, regrettable, or historically situated; it is necessary, total, and final. There is no nostalgia in Mainländer, no sense of loss that might have been avoided, and no implied call to recover what was lost. On the contrary, preservation, striving, depth, and effort are all expressions of the same will-to-be that Mainländer ultimately rejects.
By contrast, the argument being made in the essay is explicitly contingent and experiential. It concerns a personal and cultural shift in how intellectual work is done: the replacement of prolonged cognitive struggle with tools that optimize for speed, efficiency, and “good enough” solutions. The author is not claiming that deep thinking had to die for progress to occur, nor that its disappearance is metaphysically necessary. Quite the opposite: the tone is one of regret, ambivalence, and unresolved tension. Something valuable has been eroded, perhaps unnecessarily, and the loss feels meaningful precisely because it might have been otherwise.
This is where the quote fails. Mainländer’s framework leaves no room for lament. If “God” dies in his system, that death is the very condition of possibility for everything that follows. To mourn it would be incoherent. Using this quote to frame a loss that is psychological, cultural, and potentially reversible imports an apocalyptic metaphysics that undermines the author’s own point. It elevates a specific, historically situated concern into a cosmic necessity—and in doing so, distorts both.
What the essay is really circling is not the death of an absolute, but the displacement of a mode of attention: slow, effortful, internally transformative thinking giving way to instrumental cognition. That intuition has a long and well-matched philosophical lineage, but it is not Mainländer’s.
Two examples of quotes that align far more precisely with what the author seems to want to express:
1. “The most thought-provoking thing in our thought-provoking time is that we are still not thinking.” —Martin Heidegger This captures exactly the concern at stake: not the impossibility of thought, but its quiet displacement by modes of engagement that no longer demand it.
2. “Attention is the rarest and purest form of generosity.” —Simone Weil Here, the loss is not metaphysical annihilation but ethical and cognitive erosion—the fading of a demanding inner posture that once shaped understanding itself.
Either of these frames the problem honestly: as a tension between convenience and depth, productivity and transformation, speed and understanding. Mainländer’s quote, powerful as it is, belongs to a radically different conversation—one in which the value of effort, preservation, and even thinking itself has already been metaphysically written off.
The quote sounds right because it is dramatic, but it means something far more extreme than what the author is actually claiming. The result is rhetorical force at the expense of conceptual fidelity.
So, tackle other problems. You can now do things you couldn't even have contemplated before. You've been handed a near-godlike power, and all you can do is complain about it?
If my C compiler sometimes worked and sometimes didn't I would just mash compile like an ape until it started working.
I am building a project that AI is incapable of doing and I really need to think hard and progress slowly but hopefully it will create real value.
But then OP says stuff like:
> I am not sure if there will ever be a time again when both needs can be met at once.
In my head that translates to "I don't think there will ever be a time again when I can actually ride my bike for more than 100 feet." At which point you probably start getting responses more like "I don't get it" because there's only so much empathy you can give someone before you start getting a little frustrated and being like "cmon it's not THAT bad, just keep trying, we've all been there".
Do we? I don't think people appreciate Tarantino more than gangster movies. I don't think people appreciate Tarantino more than Pulp Fiction. Frankly, Tarantino doesn't factor in at all.
> It is about the process.
I never considered the process when watching Pulp Fiction. It's the finished product, not the process, that matters.
Put it this way: we know who Tarantino is because of Pulp Fiction, not the other way around.
With LLM Agents I can't seem to do it, as waiting for the agent to finish working just doesn't tickle my brain in the right way. I feel... distracted, I guess?
Software engineering is all about making sure the what actually solves the why, making the why visible enough in the what so that we can modify the latter if the former changes (it always does).
Current LLMs are not about transforming a why into a what. They are about transforming an underspecified what into some what that we hope fits the why. But as we all know from the 5 Whys method, whys are a recursive structure, and most software engineering is about diving into the details of the why. The what is easy once that's done, because computers are simple mechanisms if you choose the correct level of abstraction for the project.
> I keep trying to ride a bike but I keep falling off
I do not think this analogy is apt.
The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do.
The article is lamenting the disappearance of something meaningful for the OP. One can feel sad for this alone. It is not an equation to balance: X is gone but Y is now available. The lament stands alone. As the OP indicates with his 'pragmatism', we now collectively have little choice about the use of AI. The flood waters do not ask; they take everyone in their path.
One of the reasons Barry Lyndon is over 50 years old and still looks like no other movie today is because Kubrick tracked down a few lenses originally designed for NASA and had custom mounts built for them to use with cinema cameras.
https://neiloseman.com/barry-lyndon-the-full-story-of-the-fa...
> Popular and highly acclaimed games are not crap because they didn't write their own physics engine (Zelda uses Havok)
Super Mario Bros is known for having a surprisingly subtle and complex physics system that enabled the game to feel both challenging and fair even for players very new to consoles. Celeste, a newer game also famous for being very difficult yet not feeling punishing, does something similar:
https://maddymakesgames.com/articles/celeste_and_towerfall_p...
> or their own game engine (Plenty of great games use Unreal or Unity)
And Minecraft doesn't, which is why few other games at the time of its release felt and played like it.
You're correct that no one builds everything from scratch all the time. However, if all you ever do is cobble a few pre-made things together, I think you'll discover that nothing you make is ever that interesting or enduring in value. Sure, it can be useful, and satisfying. But the kinds of things that really leave a mark on people, that affect them deeply, always have at least some aspect where the creator got obsessive and went off the deep end and did their own thing from scratch.
Further, you'll never learn what a transformative experience it can be to be that creator who gets obsessive about a thing. You'll miss out on discovering the weird parts of your own soul that are more fascinated by some corner of the universe than anyone else is.
I have a lot of regrets in my life, but I don't regret the various times I've decided I've deeply dug into some thing and doing it from scratch. Often, that has turned out later to be some of the most long-term useful things I've done even though it seemed like a selfish indulgence at the time.
Of course, it's your life. But consider that there may be a hidden cost to always skimming along across the tops of the stacks of things that already exist out there. There is growth in the depths.
If you are a cook wanting to open a restaurant, you will be delegating; it's the same thing with AI. If you are fine only doing what your hands can possibly do in the time allotted, go ahead and cook in your kitchen.
But I need to make money to be able to trade for the food I eat.
But I would sooner compare this engineer class to something of a small bourgeoisie swallowed by a yet larger one, especially in the United States.
Harvard Business Review and probably hundreds of other online content providers provide some simple rules for meetings yet people don't even do these.
1. Have a purpose / objective for the meeting. I consider meetings to fall into one of three broad categories: information distribution, problem solving, and decision making. Knowing this will allow the meeting to go a lot smoother, or even be moved to something like an email and be done with it.
2. Have an agenda for the meeting. Put the agenda in the meeting invite.
3. If there are any pieces of pre-reading or related material to be reviewed, attach it and call it out in the invite. (But it's very difficult to get people to spend the time preparing for a meeting.)
4. Take notes during the meeting and identify any action items and who will do them (preferably with an initial estimate). Review these action items and people responsible in the last couple of minutes of the meeting.
5. Send out the notes and action items.
Why aren't we doing these things? I don't know, but I think if everyone followed these for meetings of 3+ people, we'd probably see better meetings.
That’s the whole point. You become a customer of an AI service, you get what you want but it wasn’t done by you. You get money but not the feeling of accomplishment from cracking a problem. Like playing a video game following a solution or solving a crossword puzzle with google.
But beyond that, I have been thinking deeply about AI itself, which has all sorts of new problems. Permissions, verification, etc.
I don't really understand what that means:
1. If the cost of not thinking upfront is high, that means you need to think upfront.
2. If the cost of being wrong upfront is low, that means you don't need to think upfront.
To me, it looks like those assertions contradict each other.
I think this is the issue, who associates really hard problems with Software Engineering? You should've stuck with Physics, or pivoted to Math (albeit you don't get so much of the physical building with pure math). You did Software Engineering because you like money, with a little bit of thinking. ;)
I'm trying my best to adapt to being a "centaur" in this world. (In chess it has become statistically evident that human players and bot players alone are generally "worse" than the hybrid "centaur" players.) But even "centaurs" are going to be increasingly taken for granted by companies, and at least for me the sense is growing that, as WOPR declared about tic-tac-toe (and thermonuclear warfare), "a curious game, the only way to win is not to play". I don't know how I'd bootstrap an entirely new career at this point in my life, but I keep feeling like I need to try to figure that out. I don't want to just be a janitor of other people's messes for the rest of my life.
This sounds like approving bad/poor abstractions prematurely and then continuing to build on top of them.
What about the satisfaction that comes not from struggling but from the calmness of an elegant functional model that dynamically covers all the flows and all the edge cases you could (deeply and slowly [1]) think about?
[1] maybe refining in different days, in the shower, after recovering breath in a hard set in a workout session, after a nap...
It isn't all great; skills that feel important have already started atrophying, but other skills have been strengthened. The hardest part is being able to pace oneself, as well as figuring out how to start cracking certain problems.
AI assistance in programming is a service, not a tool. You are commissioning Anthropic, OpenAI, etc. to write the program for you.
This seems to be a common narrative, but TBH I don't really see it. Where is all the amazing output from this godlike power? It certainly doesn't seem like tech is suddenly improving at a faster pace. If anything, it seems to be regressing in a lot of cases.
LLMs are clumsy interns now, very leaky. But we know human experts can be leak-proof. Why can't LLMs get there, too, better at coding, understanding your intentions, reviewing automatically for deviations, etc.?
Thought experiment: could you work well with a team of human experts just below your level? Then you should be able to work well with future LLMs.
I think the point is that the finished product depends on the process.
And don’t forget, it’s more likely to find someone cheaper who can write the same prompts as you than people with the same kind of experience in cracking problems.
This mindset is a healthy and good one. It is built on training yourself, learning, and practicing a discipline of problem solving without giving up.
Persistence is something we build, not something we have. It must be maintained. Persistence is how most good in the world has been created.
Genius is worthless without the will to see things through.
The reason is that you no longer really know what's going on. (And yes, that feeling would be the same if C++ had as rich a library of packages as python for numerical analysis.)
If you are doing something that requires precision you need to know everything that is happening in that library. Also IIRC, I think not knowing what type something is bothered me at the time.
It's a carved wooden dragon that my dad got from Indonesia (probably about 50 years ago).
It's hard to appreciate, if you aren't holding it, but it weighs a lot, and is intricately carved, all over.
I guarantee that the carver used a Dremel.
I still have a huge amount of respect for their work. That wood is like rock. I would not want to carve it with hand tools.
There's just some heights we can't reach, without a ladder.
Aren't Legos known for their ability to enable creativity and endless possibilities? It doesn't feel that different from the clay analogy, except a bit coarser grained.
With an LLM, you put in a high-level description, and then check in the "machine code" (generated code).
Why not just use a library at that point? We already have support for abstractions in programming.
AI makes you the manager. The models are like GRAs or contract workers, maybe new to their fields but with tireless energy, and you need to be able to instruct them correctly and evaluate their outputs. None of them can do everything, and you'll need to carefully hire the ones you want based on the work you need, which means breaking workflows into batchable parts. If you've managed projects before, you've done this.
Right now, my focus is improving pipelines in composition and arrangement based on an artist's corpus. A lot of them just want to be more productive, and it's a slog to write, then break into parts, etc using modern notation software.
Those types of developers on the enterprise dev side - where most developers work - were becoming a commodity a decade ago and wages have been basically stagnant. Now those types of developers are finding it hard to stand out and get noticed.
The trick is to move “up the stack” and closer to the customer whether that be an internal customer or external customer and be able to work at a higher level of scope, impact and ambiguity.
https://www.levels.fyi/blog/swe-level-framework.html
It’s been well over a decade and 6 jobs ago that I had to do a coding interview to prove I was able “to codez real gud”, every job I’ve had since then has been more concerned with whether I was “smart and get things done”. That could mean coding, leading teams, working with “the business”, being on Zoom calls with customers, flying out to the customers site, or telling a PE backed company with low margins that they didn’t need a team of developers, they needed to outsource complete implementations to other companies.
I’ve always seen coding as grunt work. But the only way to go from requirements -> architectural vision -> result and therefore getting money in my pocket.
My vision was based on what I could do myself in the allotted time at first and then what I could do with myself + leading a team. Now it’s back to what I can do by myself + Claude Code and Codex.
As far as the first question, my “fun” during my adult life has come from teaching fitness classes until I was 35 and running with friends in charity races on the weekend, and just hanging out, spending time with my (now grown) stepsons after that and for the past few years just spending time with my wife and traveling, concerts, some “digital nomadding” etc
That's how I have been using AI the entire time. I do not use Claude Code or Codex. I just use AI to ask questions instead of parsing the increasingly poor Google search results.
I just use the chat options in the web applications with manual copy/pasting back and forth if/when necessary. It's been wonderful because I feel quite productive, and I do not really have much of an AI dependency. I am still doing all of my work, but I can get a quicker answer to simple questions than parsing through a handful of outdated blogs and StackOverflow answers.
If I have learned one thing about programming computers in my career, it is that not all documentation (even official documentation) was created equally.
Yeah, these kinds of problems are the exact ones that Claude Code is very good at one-shotting these days. If you can describe in detail what needs to be done without context switching or research, then describe it to the LLM and bam.
I've also mostly worked at small companies so the high level requirements are never well defined :D
I haven’t counted cycles since programming assembly on a 65C02, where you could save a clock cycle by accessing memory in the first page of memory - two bytes for LDA $02 instead of three for LDA $0201.
I think this is context dependent. Over time, within a single code base or ecosystem, incompatibilities between "close enough" solutions can add up and create a lot of problems and complexity, kinda like floating point inaccuracies. Especially if you're not going back and revisiting the structure/abstractions you've already got when you're adding/changing something.
There's another angle too, which is that time taken to improve your abilities isn't necessarily irrational, especially because that improvement is usually applicable in many different ways. It can be an investment.
Unfortunately, predicting which situations are worth it and which aren't is as hard as predicting anything else...
More to the point of the article, though, LLM-enthusiasts do seem to view it as a substitute for thinking. They're not augmenting their application of knowledge with shortcuts and fast-paths; they're entirely trusting the LLM to engineer things on its own. LLMs are great at creating the impression that they are suitable for this; after all, they are trained on tons of perfectly reasonable engineering data, and start to show all the same signals that a naïve user would use to judge the quality of engineering... just without the quality.
Before LLMs, once I was done with the design choices as you mention them - risks, constraints, technical debt, alternatives, possibilities, ... - I cooked up a plan, and with that plan I could write the code without having to think hard. Actually writing code was relaxing for me, and I feel like I need some relaxation between hard-thinking sessions.
Nowadays we leave the code writing to LLMs because they do it way faster than a human could, but then we have to think hard to check whether the code the LLM wrote satisfies the requirements.
Also reviewing junior developers' PRs became harder with them using LLMs. Juniors powered by AI are more ambitious and more careless. AI often suggests complicated code the juniors themselves don't understand and they just see that it works and commit it. Sometimes it suggests new library dependencies juniors wouldn't think of themselves, and of course it's the senior's role to decide whether the dependency is warranted and worthy of being included. Average PR length also increased. And juniors are working way faster with AI so we spend more time doing PR reviews.
I feel like my whole job has somehow collapsed, from both sides, into reviewing code: on one side the code that my AI writes, on the other the code that the juniors’ AI wrote, the amount of which has increased. And even though I like reviewing code, it feels like the hardest part of my profession, and I liked it more when it was balanced with tasks that required less thinking...
I agree the info is out there about how to run effective meetings.
What percentage of software just ceases development when it's "good enough"? Probably a tiny percentage of all software, mostly internal tooling.
It's going to push software development into the front of business development precisely because it will be cheaper to develop. The companies that will benefit from AI at least early on will all be software-driven companies since the results are undeniable. If software developers lose out when all of their companies become more efficient and new companies are built because it becomes easier to build software with small teams, I will be very surprised.
But there is a spectrum here. AI is a cruder, less fine-grained method of producing output. But it is a very powerful tool. Instead of "chiseling" the code line by line, it transforms relatively short prompts along with "context" into an imperfect, but much larger/fully formed product. The more you ask it to do in one go, usually the more imperfect it is. But the more precise your prompts, and the "better" your context, the more you can ask it to do while still hanging on to its "form" (always battling against the entropy of AI slop).
Incidentally, those "prompts" are the thinking. The point is to operate at the edge of LLM/machine competence. And as the LLMs become more capable, your vision can grow bigger.
Also, I find this view very selfish. Yes, let's gatekeep a ground-breaking technology because it will hurt my specific profession. That's literally what technological progress is: it always (at least temporarily) disadvantages a particular sector of workers that specialized in that skill, for the benefit of society. It's just funny because usually it's the blue collar workers who have to worry about this and the white collar workers who sanctimoniously tell them to suck it up or learn to code.
Also, I can't help but chuckle slightly at the irony here. The entire purpose of software engineering and computers is automation, and the lost jobs in other sectors due to SWEs is massive. To spend one's entire life in pursuit of this and be morally opposed to the "ultimate" automation (automating this process of automation itself) is a bit rich.
Now they can have the fruits of their labor. Nothing. But why would they even want that? It’s not about getting something out of it. Or maintaining a living. It’s about building things.
The cool thing though is that I could have chosen to do something else, too. I’m not at a loss for fulfilling, productive things to do. AI won’t change that.
You get paid in the top 1% globally
You have benefits
Some hope or dreams for what to do with your future, life after work, retirement.
You get to work with other people, overseas.
Talk to those contractors sometimes. They are under tremendous pressure. They are mistreated. One wrong move and they're gone. They endure tremendous prejudice, and soft racism, every day, especially from us FTEs.
You find out that they struggle with the drudgery as well, looking for solutions, better understanding, etc.
We all feel disposable to our corporate masters, but they feel it even more so.
Be the change you want to see in the world.
The coding is the easy part.
With LLMs and advanced models, even more so.
You create the pattern. You describe the constraints. And then make it do the gruntwork.
If it's starting from nothing, and you let it be "creative" you will hate the results.
It's just a tool like any other. Hold the power drill firmly in hand and make sure you have your safety goggles on when playing with the band saw.
Chosen difficulty is a huge part of being human (music, art, athletics, games, etc.). AI hasn't taken that away.
"But I will be left behind!" No you won't. A top 20 percentile dev is not being left behind by the 80% with AI. You'll drop to average, worst case. Unironically just get good.
With traditional development, investing in hypotheticals can be wasteful. Make the fewest assumptions until you get real user feedback.
With AI, we make more decisions upfront. Being wrong about those decisions has a low cost because the investment in implementation is cheap. We are less worried about building the wrong things because we can just try again quickly. It's cheap to be wrong.
The more decisions we make upfront, the more hands-off we can be. A shallow set of decisions might stall out earlier than a deeper set of decisions would. So we need to think things through more deeply upfront; we need to think more.
Who decides what we "need"? Software provides value, and if we want to keep providing value moving forward, there will be more software.
We have lots of documentation. Arguably too much - it quickly fills much of the Claude Opus context window with relevant documentation alone, and even then the model repeatedly outputs things directly counter to the documentation it just ingested.
I know that programming has gone terribly wrong, but it's hard for me to articulate how, because it's all of it - the entire frontend web development ecosystem, mobile development languages and frameworks, steep learning curve languages like Rust that were supposed to make things easier but put the onus on the developer to get the busywork right, everything basically. It's like trying to explain screws to craftsmen only familiar with nails.
In the simplest terms, it's because corporations are driving the development of those tools and vacuuming up all the profits on the backs of open source maintainers working in their parents' basements, rather than universities working from first principles to solve hard problems and give the solutions away to everyone for free for the good of society. We've moved from academia to slavery and call it progress.
For OSes: POSIX, or the MSDN documentation for Windows.
Compiler bugs and OS bugs are extremely rare so we can rely on them to follow their spec.
AI bugs are very much expected even when the "spec" (the prompt) is correct. And since the prompt is written in imprecise human language, likely by people who are not used to writing precise specifications, the prompt itself is likely either mistaken or insufficiently specified.
Phillip G. Armour, The Five Orders of Ignorance: https://www.researchgate.net/publication/27293624_The_Five_O...
At one point, I was using Copilot very heavily. One day I lost my internet connection and tried to write code manually, but I did not even know what to write. All I had been doing was reading the code and pressing the Tab key.
When I tried to write code myself, my fingers automatically moved to the Tab key with perfect millisecond timing. That was the day I disabled AI auto-completion and decided to use only Ask mode in Copilot.
Now I only use code generation when I 100% (or at least 90%) understand what is happening in the code, or when I am working on something that will not help my future growth, like routine code at my job.
They (collectively) have thousands of employees, billions of dollars, and tons of things to play with (taste, texture, appearance, mouthfeel, snappiness, salt, sweet, sugar, packaging colours and styles, slogans, marketing, brand associations, shop placement, advertising). All you have is the willpower to fight against 4 billion years of evolution telling you that fatty, sugary, energy-dense food is important, and a whole pile of biases and weaknesses that you aren't really aware of and that they are very aware of and are trying to maximally take advantage of.
In case (b), the process can be the thing that triggers "thinking hard"; in case (a), one's mastery can be the reason one "thinks hard" when driving the agent.
Does this help TFA? Idk. Maybe if the author of TFA tries doing either (a) a lot or (b) a lot, it might. Or maybe agentic programming is going to drive out of the business those who stop thinking hard because they have agents to help them.