https://mastodon.ar.al/@aral/114160190826192080
"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.
When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."
There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.
I find myself being able to reach for the things that my normal pragmatist code-monkey self would consider out of scope - these are often not user-facing things at all, but things that absolutely improve code maintenance, scalability, testing/testability, or reduce side effects.
That's not an upside unique to LLM-written vs. human-written code. When writing it yourself, you also need to make it crystal clear. You do that in the language of implementation.
People felt (wrongly) that traditional representational forms like portraiture were threatened by photography. Happily, instead of killing any existing genres, we got some interesting new ones.
Can't speak to firmware code or complex cryptography, but my hunch is if it's in its training dataset and you know enough to guide it, it's generally pretty useful.
If you use LLMs at very high temperature with samplers that actually keep the output coherent (e.g. min_p, or better options like top-H or P-less decoding), then "regression to the mean" literally DOES NOT HAPPEN!
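For anyone who hasn't met these samplers: min_p keeps only the tokens whose probability is at least some fraction of the top token's probability, which is what lets you crank temperature without the output degenerating into noise. A minimal sketch of the idea (NumPy; the parameter values are illustrative, not tuned):

```python
import numpy as np

def min_p_sample(logits, min_p=0.1, temperature=1.5, rng=None):
    rng = rng or np.random.default_rng()
    # High temperature flattens the next-token distribution...
    z = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    # ...but min-p then drops every token whose probability falls below
    # min_p * (probability of the most likely token), keeping output coherent.
    keep = probs >= min_p * probs.max()
    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)
```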
This is a non sequitur. Cameras have not replaced paintings, assuming this is the inference. Instead, they serve only to be an additional medium for the same concerns quoted:
> The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning.
Just as this is applicable to refining a software solution captured in code, and just as a painter discards unsatisfactory paintings and tries again, so too is it when people say, "that picture didn't come out the way I like, let's take another one."

You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.
Guess what, they got over it. You will too.
Same problem with image generation (lack of support for different SDE solvers, the image-generation equivalent of LLM sampling), but they have different "coomer" tools, e.g. ComfyUI or Automatic1111.
Presumably humanity still has room to grow and not everything is already in the training set.
Prediction is difficult, especially of the future.
> You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.
> Guess what, they got over it.
You conveniently omitted my next sentence, which contradicts your position and reads thusly:
> Instead, they serve only to be an additional medium for the same concerns quoted ...
> You will too.

This statement is assumptive and gratuitous.
Did you imagine yourself then, as you are now, hunched over a glowing rectangle? Demanding imperiously that the world share your contempt for the sublime, share your jaundiced view of those who pour the whole of themselves into the act of creation, so that everyone might once again be graced with wonder anew?
I hope you can find a work of art that breaks you free of your resentment.
It would be a lot more interesting to point out the differences and similarities yourself. But then if you wanted an interesting discussion you wouldn’t be posting trite flamebait in the first place, would you?
Thoughtful retorts such as this are deserving of the same esteem one affords the "rubber v glue"[0] idiom.
As such, I must oblige.
0 - https://idioms.thefreedictionary.com/I%27m+rubber%2c+you%27r...
You’re taking a bunch of pre-built abstractions written by other people on top of what the computer is actually doing and plugging them together like LEGOs. The artificial syntax that you use to move the bricks around is the thing you call coding.
The human element of discovery is still there if a robot stacks the bricks based on a different set of syntax (Natural Language), nothing about that precludes authenticity or the human element of creation.
This actually leaves me with a lot more time to think, about what I want the UI to look like, how I'll market my software, and so on.
I thought "on-shoring" was already commonly used for the process that undoes off-shoring.
The problem is rather that programmers who work on business logic often hate programmers who are actually capable of seeing (often mathematical) patterns in the business logic that could be abstracted away; in other words: many business logic programmers hate abstract mathematical stuff.
So, in my opinion/experience this is a very self-inflicted problem that arises from the whole culture around business logic and business logic programming.
This rather suggests that the kind of performance optimizations you're asking for are very "standard".
LLMs don’t “reason” the same way humans do. They follow text predictions based on statistical relevance. So raising the temperature is more likely to produce unexecutable pseudocode than a valid but more esoteric implementation of a problem.
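To make the temperature point concrete: sampling temperature just rescales the next-token distribution, so raising it moves probability mass onto tokens the model itself scores as unlikely. A toy illustration with made-up logits:

```python
import numpy as np

def softmax(logits, temperature):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                 # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [4.0, 2.0, 0.0]         # hypothetical next-token scores
print(softmax(logits, 0.5))      # low T: mass piles onto the top token
print(softmax(logits, 2.0))      # high T: unlikely tokens gain probability
```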
I find languages like JavaScript promote the idea of “Lego programming” because you’re encouraged to use a module for everything.
But when you start exploring ideas that haven’t been thoroughly explored already, and particularly in systems languages which are less zealous about DRY (don’t repeat yourself) methodologies, then you can feel a lot more like a sculptor.
Likewise if you’re building frameworks rather than reusing them.
So it really depends on the problems you’re solving.
For general day-to-day coding for your average 9-to-5 software engineering job, I can definitely relate to why people might think coding is basically “LEGO engineering”.
Isn't the analogy apt? You can't make a working car from a lump of clay, just a car statue; a lump of clay is already an abstraction of the objects you can make in reality.
There are tips and tricks on how to manage them and not knowing them will bite you later on. Like the basic thing of never asking yes or no questions, because in some cultures saying "no" isn't a thing. They'll rather just default to yes and effectively lie than admit failure.
Correct. However, you will probably notice that your solution to the problem doesn't feel right when the bricks available to you don't compose well. The AI will just happily smash bricks together, and at first glance it might seem that the task is done.
Choosing the right abstraction (bricks) is part of finding the right solution. And understanding that choice often requires exploration and contemplation. AI can't give you that.
I wonder if software creation will be in a similar place. There still might be a small market for handmade software, but the majority of it will be mass produced. (That is, produced by LLM; or software itself will mostly go away and people will get their work done via an LLM instead of "apps".)
This is no different than many things. I could grow a tree and cut it into wood but I don't. I could buy wood and nails and brackets and make furniture but I don't. I instead just fill my house/apartment with stuff already made and still feel like it's mine. I made it. I decided what's in it. I didn't have to make it all from scratch.
For me, lots of programming is the same. I just want to assemble the pieces.
> When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make
No, your favorite movie is not crap because the creators didn't grind their own lenses. Popular and highly acclaimed games are not crap because they didn't write their own physics engine (Zelda uses Havok) or their own game engine (plenty of great games use Unreal or Unity).
What you get right now is mass-replicated software, just another copy of SAP/Office/Spotify/whatever.
That software is not made individually for you; you get a copy like millions of other people, and there is nearly no market anymore for individual software.
LLMs might change that. We have a bunch of internal apps now for small annoying things.
They all have their quirks, but they are only accessible internally and make life a little bit easier for people working for us.
Most of them are one-shot LLM things: throw them away if you don't need them anymore, or just one-shot them again.
Very few people (even before LLM coding tools) actually did low level "artisanal" coding; I'd argue the vast majority of software development goes into implementing features in b2b / b2c software, building screens, logins, overviews, detail pages, etc. That requires (required?) software engineers too, and skill / experience / etc, but it was more assembling existing parts and connecting them.
Years ago there was already a feeling that a lot of software development boiled down to taping libraries together.
Or from another perspective, replace "LLM" with "outsourcing".
I'd argue that in most cases it's better to do some research and find out if a tool already exists, and if it isn't exactly how you want it... to get used to it, like one did with all other tools they used.
Counterpoint to my own counterpoint, will anyone actually (want to) read it?
Counterpoint to the third degree, to loop it back around: an LLM might, and I'd even argue an LLM is better at reading and ingesting long text (I'm thinking architectural documentation, etc.) than humans are. Speaking for myself, I struggle to read attentively through e.g. a long document; I quickly lose interest and skim or just focus on what I need instead.
The other day people were talking about metrics, the amount of lines of code people vs LLMs could output in any given time, or the lines of code in an LLM assisted application - using LOC as a metric for productivity.
But would an LLM ever suggest using a utility or library, or re-architecting an application, over writing its own code?
I've got a fairly simple application that renders a table (and in future some charts) with metrics. At the moment all of that is done "by hand"; the last features were things like filtering and sorting the data. But that kind of thing can also be done by a "data table" library. Or the whole application can be thrown out in favor of a workbook (one of those data analysis tools; I'm not at home in that area at all). That'd save hundreds of lines of code plus maintenance burden.
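For a sense of what I mean, the hand-written filtering and sorting collapses into a couple of library calls. A sketch with made-up column names (pandas here, standing in for whichever data-table or workbook tool):

```python
import pandas as pd

# Hypothetical metrics rows; the hand-rolled filter/sort logic
# becomes two library calls.
rows = [
    {"name": "api", "status": "active", "latency_ms": 12.5},
    {"name": "web", "status": "idle",   "latency_ms": 3.1},
    {"name": "db",  "status": "active", "latency_ms": 48.0},
]
df = pd.DataFrame(rows)
view = df[df["status"] == "active"].sort_values("latency_ms", ascending=False)
print(view)
```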
1826 - The Heliograph - 8+ hours
1839 - The Daguerreotype - 15–30 Mins
1841 - The Calotype - 1–2 Mins
1851 - Wet Plate Collodion - 2–20 Secs
1871 - The Dry Plate - < 1 Second.
So it took 45 years to perfect the process so you could take an instant image. Yet we complain after 4 years of LLMs that they're not good enough.
Frustrated rants about deliverables aside, I don't think that's the case.
Trying to find the right level is the art. Once you learn the tools of the trade and can do abstraction, it's natural to want to abstract everything. Most programmers go through such a phase. But sometimes things really are distinct and trying to find an abstraction that does both will never be satisfactory.
When building a house there are generally a few distinct trades that do the work: bricklayers, joiners, plumbers, electricians etc. You could try to abstract them all: it's all just joining stuff together isn't it? But something would be lost. The dangers of working with electricity are completely different to working with bricks. On the other hand, if people were too specialised it wouldn't work either. You wouldn't expect a whole gang of electricians, one who can only do lighting, one who can only do sockets, one who can only do wiring etc. After centuries of experience we've found a few trades that work well together.
So, yes, it's all just abstraction, but you can go too far.
That has actually been a major problem for me in the past where my core idea is too simple, and I don't give "the muse" enough time to visit because it doesn't take me long enough to build it. Anytime I have given the muse time to visit, they always have.
Discovering the right problem to solve is not necessarily coupled to being "hands on" with the "materials you're shaping".
Code that fails to execute or compile is the default expectation for me. That's why we feed compile and runtime errors back into the model after it proposes something each time.
I'd much rather the code sometimes not work than to get stuck in infinite tool calling loops.
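Roughly, the loop looks like this; `generate` is a stand-in for whatever model call you use, and the round cap is what keeps you out of those infinite loops:

```python
import subprocess, tempfile

def generate(prompt: str) -> str:
    raise NotImplementedError("stand-in for your LLM call of choice")

def code_with_feedback(prompt: str, max_rounds: int = 5) -> str:
    code = generate(prompt)
    for _ in range(max_rounds):
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        check = subprocess.run(["python", "-m", "py_compile", f.name],
                               capture_output=True, text=True)
        if check.returncode == 0:
            return code  # compiles; hand off to the test suite next
        # Feed the error straight back rather than fixing it by hand.
        code = generate(f"{prompt}\n\nThis failed to compile:\n"
                        f"{check.stderr}\nPlease fix it.")
    raise RuntimeError("still broken after max_rounds attempts")
```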
So whether you’re writing the spec’d code out by hand or asking an LLM to do it is beside the point if the code is considered a means to an end, which is what the post above yours was getting at.
Skipping over that step results in a world of knock offs and product failures.
People buy Zara or H&M because they can offload the work of verifying quality to the brand.
This was a major hurdle that mass manufacturing had to overcome to achieve dominance.
I'm starting to wonder if we lose something in all this convenience. Perhaps my life is better because I cook my own food, wash my own dishes, chop my own firewood, drive my own car, write my own software. Outwardly the results look better the more I outsource but inwardly I'm not so sure.
On the subject of furnishing your house the IKEA effect seems to confirm this.
> For me, lots of programming is the same. I just want to assemble the pieces
How did those pieces come to be? By someone assembling other pieces, or by someone crafting them out of nothing because nobody else had written them at the time?
Of course you reuse other parts and abstractions to do whatever things that you're not working on but each time you do something that hasn't been done before you can't but engage the creative process, even if you're sitting on top of 50 years worth of abstractions.
In other words, what a programmer essentially has is a playfield. And whether the playfield is a stack of transistors or coding agents, when you program you create something new even if it's defined and built in terms of the playfield.
I can do some CRUD apps where it's just data input to data store to output, with little shaping needed. Or I can do apps with lots of filters, actions, and logic based on what's inputted, which require some thought to ensure they actually solve the problem they're proposed for.
"Shaping the clay" isn't about the clay, it's about the shaping. If you have to make a ball of clay and also have to make a bridge of Lego a 175kg human can stand on, you'll learn more about Lego and building it than you will about clay.
Get someone to give you a Lego instruction sheet and you'll learn far less, because you're not shaping anymore.
If you just chuck ideas at the external coding team/tool you often get rubbish back.
If you're good at managing the requirements and defining things well you can achieve very good things with much less cost.
Obviously I am not comparing his final product with my code, I am simply pointing out how this metaphor is flawed. Having "workers" shape the material according to your plans does not reduce your agency.
The humane and the machinic need to meet halfway - any computing endeavor involves not only specifying something clearly enough for a computer to execute it, but also communicating to humans how to benefit from the process thus specified. And that's the proper domain not only of software engineering, but the set of related disciplines (such as the various non-coding roles you'd have in a project team - if you have any luck, that is).
But considering the incentive misalignments which easily come to dominate in this space even when multiple supposedly conscious humans are ostensibly keeping their eyes on the ball, no matter how good the language machines get at doing the job of any of those roles, I will still intuitively mistrust them exactly as I mistrust any human or organization with responsibly wielding the kind of pre-LLM power required for coordinating humans well enough to produce industrial-scale LLMs in the first place.
What's said upthread about the wordbox continually trying to revert you to the mean as you're trying to prod it with the cowtool of English into outputting something novel, rings very true to me. It's not an LLM-specific selection pressure, but one that LLMs are very likely to have 10x-1000xed as the culmination of a multigenerational gambit of sorts; one whose outset I'd place with the ever-improving immersive simulations that got the GPU supply chain going.
Not being hands-on, and more important not LISTENING to the hands-on people and learning from them, is a massive issue in my surroundings.
So thinking hard on something is cool. But making it real is a whole different story.
Note: as Steve used to say, "real artists ship".
So you can just, like, tweak it when it's working against your intent in either direction?
I don't allow my agent to write any code. I ask it for guidance on algorithms, and to supply the domain knowledge that I might be missing. When using it for game dev for example, I ask it to explain in general terms how to apply noise algorithms for procedural generation, how to do UV mapping etc, but the actual implementation in my language of choice is all by hand.
Honestly, I think this is a sweet spot. The amount of time I save getting explanations of concepts that would otherwise get a bit of digging to get is huge, but I'm still entirely in control of my codebase.
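As an example of where those conversations land: after asking about noise for terrain, the code I hand-wrote was roughly this value-noise sketch (every name and parameter here is my own choice, not something the model produced):

```python
import numpy as np

def value_noise(width, height, scale=8, seed=0):
    # Coarse random grid, bilinearly interpolated up: the simplest
    # "noise for procedural terrain" idea.
    rng = np.random.default_rng(seed)
    grid = rng.random((height // scale + 2, width // scale + 2))
    ys, xs = np.mgrid[0:height, 0:width]
    gx, gy = xs / scale, ys / scale
    x0, y0 = gx.astype(int), gy.astype(int)
    fx, fy = gx - x0, gy - y0
    top = grid[y0, x0] * (1 - fx) + grid[y0, x0 + 1] * fx
    bot = grid[y0 + 1, x0] * (1 - fx) + grid[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy   # heightmap values in [0, 1)

heights = value_noise(128, 128)        # e.g. feed this to your renderer
```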
If you are outsourcing to an LLM in this case YOU are still in charge of the creative thought. You can just judge the output and tune the prompts or go deep in more technical details and tradeoffs. You are "just" not writing the actual code anymore, because another layer of abstraction has been added.
That description is NOT coding; coding is a subset of that.
Coding comes once you know what you need to build. Coding is the process of expressing that in a programming language, and as you do so you apply all your knowledge, experience, and crucially your taste, to arrive at an implementation which does what's required (functionally and non-functionally) AND is open to the possibility of change in future.
Someone else here wrote a great comment about this the other day, along the lines of: take that week of work described in the GP's comment, and on the Friday afternoon delete all the code checked in. Coding is only the part needed to recreate the check-in, which would take a lot less than a week!
All the other time was spent turning you into the developer who could understand why to write that code in the first place.
These tools do not allow you to skip the process of creation. They allow you to skip aspects of coding. If you choose to, they can also elide your tastes, but that's not a requirement of using them; they respond well to code examples and other direction that guides them toward your tastes. The functional and non-functional parts they're pretty good at without much steering now, but I always steer for my tastes because, e.g., Opus 4.5 defaults to a more verbose style than I care for.
Instead of pouring all of your efforts into making one single static object with no moving parts, you can simply specify the individual parts, have the machine make them for you, and pour your heart and soul into making a machine that is composed of thousands of parts, that you could never hope to make if you had to craft each one by hand from clay.
We used to have a way to do this before LLMs, of course: we had companies that employed many people, so that the top level of the company could simply specify what they wanted, and the lower levels only had to focus on making individual parts.
Even the person making an object from clay is (probably) not refining his own clay or making his own oven.
Maybe, but beware assuming you could do something you haven't actually tried to do.
Everything is easy in the abstract.
Or, it's like trying to make a MacBook Pro by buying electronics boards from AliExpress and wiring them together.
But Pulp Fiction would not have been a masterpiece if Tarantino just typed “Write a gangster movie.” into a prompt field.
It seems to assume that vibe coding, or whatever you call the Gas Town model of programming, is the only option, but you don't have to do that. You don't have to specify upfront what you want and then never change or develop it as you go through the process of building, and you don't have to accept whatever the AI gives you on the other end as final.
You can explore the affordances of the technologies you're using and modify your design and vision for what you're building as you go; if anything, I've found AI coding makes it far easier to change and evolve my direction, because it can update all the various parts of the code that need updating when I change direction, as well as keeping the tests, specification, and documentation in sync, easily and quickly.
You also don't need to take the final product as a given, a "simulacrum delivered from a vending machine": build, and then once you've gotten something working, look at it and decide that it's not really what you want, and then continue to iterate and change and develop it. Again, with AI coding, I've found this easier than ever because it's easier to iterate on things. The process is a bit faster for not having to move the text around and looking up API documentation myself, even though I'm directly dictating the architecture and organization and algorithms and even where code should go most of the time.
And with the method I'm describing, where you're in the code just as much as the AI is, just using it to do the text/API/code munging, you can even let the affordances of not just the technologies, but the source code and programming language itself affect how you do this: if you care about the quality, clarity, and organization of the code the AI is generating, you'll see when it's trying to brute-force its way past technical limitations, and can instead redirect it to follow the grain. It just becomes easier and more fluid to do that.
If anything, AI coding in general makes it easier to have a conversation with the machine, its affordances, your design vision and so on than before, because it becomes easier to update everything and move everything around as your ideas change.
And nothing about it means that you need to be ignorant of what's going on; ostensibly you're reviewing literally every line of code it creates and deciding what libraries and languages, as well as what architecture, organization, and algorithms, it's using. You are, aren't you? So you should know everything you need to know. In fact, I've learned several libraries and a language just from watching it work, enough that I can work with them without looking anything up, even new syntax and constructs that would have been very unfamiliar back in my manual coding days.
But these arguments and the OP's article do reinforce that AI rots brains. Even my sparing use of Google's Gemini and my interaction with the bots here have really dinged my ability to do simple math.
It's amazing what one competent developer can do, and it's amazing how little a hundred devs end up actually doing when weighed down by bureaucracy. And let's not pretend even half of them qualify as competent, not to mention they probably don't care either. They get to work and have a 45-minute coffee break, move some stuff around on the Kanban board, have another coffee break, then lunch, then foosball, etc. And when they actually write some code, it's ass.
And sure, for those guys maybe LLMs represent a huge productivity boost. For me it's usually faster to do the work myself than to coax the bot into creating something acceptable.
The LLM had added a bunch of extra formatting, emphasis, and structure to what might originally have been a bit of a ramble, but an obviously human-written one. The comments absolutely lambasted that OP for being a hypocrite: complaining about their team using AI, but then seeing little problem with posting an obviously AI-generated question because they didn't deem their English skills good enough to ask it directly.
I'm not going to pass judgement on this scenario, but I did think the entire encounter was a "fun" anecdote in addition to your comments.
Edit: wrods
As someone that started with Machine Code, I'm grateful for compiled (even interpreted) languages. I can’t imagine doing the kind of work that I do nowadays in Machine Code.
I’m finding it quite interesting, using LLM-assisted development. I still need to keep an eye on things (for example, the LLM tends to suggest crazy complex solutions, like writing an entire control from scratch, when a simple subclass, and five lines of code, will work much better), but it’s actually been a great boon.
I find that I learn a lot, using an LLM, and I love to learn.
Also, the code is not a means to an end. It’s going to run somewhere, doing stuff someone wants done reliably and precisely. The overall goal was always to invest some programmer time and salary in order to free up more time for others, not for everyone to start babysitting stuff.
I think there needs to be a sea change in current LLM tech to make that no longer the case: either massively increased context sizes, so they can contain nearly a career's worth of learning (without the tendency to start ignoring that context, as happens at the larger end of today's still-way-too-small-for-this context windows), or allowing continuous training passes that integrate these "learnings" directly into the weights, which might be theoretically possible today but requires many orders of magnitude more compute than is available, even if you ignore cost.
The biggest lesson I am learning recently is that technologists will bend over backwards to gaslight the public to excuse their own myopia.
If you really care about using the hardware effectively, optimizing the code is so much more than what you describe.
We’ve created formal notation to shorten writing. And computation is formal notation that is actually useful. Why write pages of specs when I could write a few lines of code?
Sometimes you want a utilitarian teapot to reliably pour a cup of tea.
The materials and rough process for each can be very similar. One takes a master craftsman and a lot of time to make and costs a lot of money. The other can be made on a production line and the cost is tiny.
Both are desirable, for different people, for different purposes.
With software, it's similar. A true master knows when to get it done quick and dirty and when to take the time to ponder and think.
Your contrast is an either/or that, in the real world, does not exist.
Take content written by AI, prompted by a human. A lot of it is slop and crap, and there will be more slop and crap with AI than before. But that was also the case when the medium changed from handwritten to printed books, and when paper and printing became cheap we had slop like those 10-cent Western or romance novellas.
We also still had Goethe, still had Kleist, still had Grass (sorry, very German centric here).
We also have Inception vs. the latest sequel of any Marvel franchise.
I have seen AI-written, but human-prompted, short stories that made people well up and find ideas presented in a light not seen before. And I have seen AI-generated stories that one wants to purge from one's brain.
It isn't the tool; it is the one wielding it.
Question: Did photoshop kill photography? Because honestly, this AI discussion to me sounds very much like the discussion back then.
I think the biggest beef I have with engineers is that for decades they more or less reduced the value of other people's lumps of clay, and now want to throw their arms up when it's theirs.
If you pardon the analogy, watch how Japanese make a utilitarian teapot which reliably pours a cup of tea.
It's more complicated and skill-intensive than it looks.
In both realms, making an artistic vase can be simpler than a simple utilitarian tool.
AI is good at making (poor quality, arguably) artistic vases via its stochastic output, not highly refined, reliable tools. Tolerances on these are tighter.
As someone that’s a bit of a fence-sitter on the matter of AI, I feel that using it in the way that OP did is one of the less harmful or intrusive uses.
Because everyone under him knows that a mistake big enough is a quick way to unemployment or legal actions. So the whole team is pretty much aligned. A developer using an LLM may as well try to herd cats.
I think there's just a class of people who think that you cannot get "MacBook" quality with an LLM. I don't know why I try to convince them; it's not in my benefit.
There is a difference between cooking and putting a ready meal into the microwave.
Both satisfy your hunger but only one can give some kind of pride.
"Write a gangster movie that I like", instead of "...a movie this other guy likes".
But because this is not the case, we appreciate Tarantino more than we appreciate gangster movies. It is about the process.
With LLMs and engineers often being forced by management to use them, everyone is pushed to become like the second group, even though it goes against their nature. The former group see the part as a means, whereas the latter view it as the end.
Some people love the craft itself and that is either taken away or hollowed out.
It killed an aspect of it: film processing in the darkroom. Even before digital cameras were ubiquitous, it was standard to get a scan before doing any processing digitally. Chemical processing was reduced to the minimum necessary.
Yes, some things are better when manufactured in highly automated ways (like computer chips), but their design has been thoroughly tested and before shipping the chips themselves go through lots of checks to make sure they are correct. LLM code is almost never treated that way today.
I think this makes a perfect counter-example. Because this structure is an important reason for YC to exist and what the HN crowd often rallies against.
Such large companies generally don't make good products this way. Most, today, just buy companies that built something in the GP's cited vein: a creative process, with pivots, learnings, more pivots, failures, or, when successful, success most often in an entirely different form or area than originally envisioned. Even the large tech monopolies of today originated like that. Zuckerberg never envisioned VR worlds, photo-sharing apps, or chat apps when he started the campus-photobook website. Bezos did not have some 5D-chess blueprint that included the largest internet-infrastructure-for-hire business when he started selling books online.
If anything, this only strengthens the point you are arguing against: a business that operates by a "head" "specifying what they want" and having "something" figure out how to build the parts, is historically a very bad and inefficient way to build things.
Doesn’t that prove the point? You could do that right now, and it would be absolute trash. Just like how right now we are nowhere close to being able to make great software with a single prompt.
I’ve been vibecoding a side project and it has been three months of ideating, iterating, refining and testing. It would have taken me immeasurably longer without these tools, but the end result is still 100% my vision, and it has been a tremendous amount of work.
The tools change, but the spirit only grows.
If it takes you more than a few seconds or so to understand code an agent generated you’re going to make mistakes. You should know exactly what it’s going to produce before it produces it.
So that Excel spreadsheet that manages the entire sales funnel?
Some commentators dismissed this trend towards photography as simply a beneficial weeding out of second-raters. For example, the writer Louis Figuier commented that photography did art a service by putting mediocre artists out of business, for their only goal was exact imitation. Similarly, Baudelaire described photography as the “refuge of failed painters with too little talent”. In his view, art was derived from imagination, judgment and feeling but photography was mere reproduction which cheapened the products of the beautiful [23].
https://www.artinsociety.com/pt-1-initial-impacts.html#:~:te...
I’m no less proud of what I built in the last three weeks using three terminal sessions - one with codex, one with Claude, and one testing everything from carefully designed specs - than I was when I first booted a computer, did “call -151” to get to the assembly language prompt on my Apple //e in 1986.
The goal then was to see my ideas come to life. The goal now is to keep my customers happy, get projects done on time, on budget and meets requirements and continue to have my employer put cash in my account twice a month - and formerly put AMZN stock in my brokerage account at vesting.
Artisans in Japan might go to incredible lengths to create utilitarian teapots. Artisans who graduated last week from a 4-week pottery workshop will produce a different kind of quality, albeit artisanal. $5.00 teapots from an East Asian mass-production factory will be very different from high-quality, mass-produced, upmarket teapots at a higher price. I have things in my house that fall into each of those categories (not all teapots, but different kinds of wares).
Sometimes commercial manufacturing produces worse tolerances than hand-crafting. Sometimes, commercial manufacturing is the only way to get humanly unachievable tolerances.
You can't simplify it into "always" and "never" absolutes. Artisan is not always nicer than commercial. Commercial is not always cheaper than artisan. _____ is not always _____ than ____.
If we bring it back to AI, I've seen it produce crap, and I've also seen it produce code that honestly impressed me (my opinion is based on 24 years of coding and engineering management experience). I am reluctant to make a call where it falls on that axis that we've sketched out in this message thread.
One must be conversant in abstractions that are themselves ephemeral and half-hallucinated. It's a question of what to cling to, what to elevate beyond possibly hallucinated rubbish. At some level it's a much faster version of the meatspace process, and it can be extremely emotionally uncomfortable and anarchic to many.
"Most developers don't know the assembly code of what they're creating. When you skip assembly you trade the very thing you could have learned to fully understand the application you were trying to make. The end result is a sad simulacrum of the memory efficiency you could have had."
This level of purity-testing is shallow and boring.
It's bleak out there.
Now, the only reason I code and have been since the week I graduated from college was to support my insatiable addictions to food and shelter.
While I like seeing my ideas come to fruition, over the last decade my ideas were a lot larger than I could reasonably do over 40 hours without having other people working on projects I lead. Until the last year and a half where I could do it myself using LLMs.
Seeing my carefully designed spec that includes all of the cloud architecture get done in a couple of days, with my hands on the wheel, when it would have taken at least a week with me doing some of the work while juggling a couple of other people, is life changing.
When you do this with an outsourced team, it can happen at most once per sprint, and with significant pushback, because there's a desire for them to get paid for their deliverable even if it's not what you wanted or suffers some other fundamental flaw.
Most of the OP article also resonated with me as I bounce back and forth between learning (consuming, thinking, pulling, integrating new information) to building (creating, planning, doing) every few weeks or months. I find that when I'm feeling distressed or unhappy, I've lingered in one mode or the other a little too long. Unlike the OP, I haven't found these modes to be disrupted by AI at all, in fact it feels like AI is supporting both in ways that I find exhilarating.
I'm not sure OP is missing anything because of AI per se, it might just be that they are ready to move their focus to broader or different problem domains that are separate from typing code into an IDE?
For me, AI has allowed me to probe into areas that I would have shied away from in the past. I feel like I'm being pulled upward into domains that were previously inaccessible.
I use Claude on a daily basis, but still find myself frequently hand-writing code as Claude just doesn't deliver the same results when creating out of whole cloth.
Claude does tend to make my coarse implementations tighter and more robust.
I admittedly did make the transition from software only to robotics ~6 years ago, so the breadth of my ignorance is still quite thrilling.
Which spec? Is there a spec that says if you use a particular set of libraries you’d get less than 10 millisecond response? You can’t even know that for sure if you roll your own code, with no 3rd party libraries.
Bugs are by definition issues that arise when developers expect their code to do one thing but it does another, because of an unforeseen combination of factors. Yet we are all OK with that. That's why we accept AI code: it works well enough.
Your work is influenced by the medium by which you work. I used to be able to tell very quickly if a website was developed in Ruby on Rails, because some approaches to solve a problem are easy and some contain dragons.
If you are coding in clay, the problem is getting turned into a problem solvable in clay.
The challenge if you are directing others (people or agents) to do the work is that you don't know if they are taking into account the properties of the clay. That may be the difference between clean code - and something which barely works and is unmaintainable.
I'd say in both cases of delegation, you are responsible for making sure the work is done correctly. And, in both cases, if you do not have personal experiences in the medium you may not be prepared to judge the work.
But at that point, will I even have the ability to distinguish a good solution from a bad one? How would I know, if I’ve been relying on AI to evaluate if ideas are good or not? I’d just be pushing mediocre solutions off as my own, without even realising that they’re mediocre.
Hence why a lot of software development is gluing libraries together these days.
Then I tried to push 50,000 documents through it, and it crashed and burned like I suspected. It took one day to go from my second, more complicated but more scalable spec, where I didn't depend on an AWS managed service, to working scalable code.
It would have taken me at least a week to do it myself.
I so severely doubt this to the point I'd say this statement is false.
The further back you go, the more expensive and rare art was. Better-quality landscapes and portraits were exceptionally rare and really only commissioned by those with money, which again was a smaller portion of the population in the time before cameras. It's likely there are more high-quality paintings per capita now than at any point in the past, and the issue is not production but exposure to the high-quality ones. Maybe this is what you mean by "miss out"?
In addition, the general increase in wealth, coupled with the cost of art supplies dropping, opens up massive room for lower-quality art to fill the gap. In the past canvas was typically more expensive, so sucky pictures would get painted over.
Just like people more, and have better meetings.
Life is what you make it.
Enjoy yourself while you can.
That said, the framing feels a bit too poetic for engineering. Software isn't only craft, it's also operations, risk, time, budget, compliance, incident response, and maintenance by people who weren't in the room for the "lump of clay" moment. Those constraints don't make the work less human; they just mean "authentic creation" isn't the goal by itself.
For me the takeaway is: pursue excellence, but treat learning as a means to reliability and outcomes. Tools (including LLMs) are fine with guardrails, clear constraints up front and rigorous review/testing after, so we ship systems we can reason about, operate, and evolve (not just artefacts that feel handcrafted).
They’re destroying the only thing I like about my job - figuring problems out. I have a fundamental impedance mismatch with my company’s desires, because if someone hands me a weird problem, I will happily spend all day or longer on that problem. Think, hypothesize, test, iterate. When I’m done, I write it up in great detail so others can learn. Generally, this is well-received by the engineer who handed the problem to me, but I suspect it’s mostly because I solved their problem, not because they enjoyed reading the accompanying document.
I wholeheartedly disagree but I tend to believe that's going to be highly dependent on what type of developer a person is. One who leans towards the craftsmanship side or one who leans towards the deliverables side. It will also be impacted by the type of development they are exposed to. Are they in an environment where they can even have a "lump of clay" moment or is all their time spent on systems that are too old/archaic/complex/whatever to ever really absorb the essence of the problem the code is addressing?
The OP's quote is exactly how I feel about software. I often don't know exactly what I'm going to build. I start with a general idea and it morphs towards excellence by the iteration. My idea changes, and is sharpened, as it repeatedly runs into reality. And by that I mean, it's sharpened as I write and refactor the code.
I personally don't have the same ability to do that with code review because the amount of time I spend reviewing/absorbing the solution isn't sufficient to really get to know the problem space or the code.
When I play sudoku with an app, I like to turn on auto-fill numbers, and auto-erase numbers, and highlighting of the current number. This is so that I can go directly to the crux of the puzzle and work on that. It helps me practice working on the hard part without having to slog through the stuff I know how to do, and generally speaking it helps me do harder puzzles than I was doing before. BTW, I’ve only found one good app so far that does this really well.
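The auto-erase mechanic those apps implement is tiny, for what it's worth: placing a value just clears it from the candidate sets of every peer cell. A sketch, assuming `candidates` is a 9x9 grid of Python sets:

```python
def auto_erase(candidates, row, col, value):
    # After placing `value` at (row, col), remove it as a candidate
    # from every peer cell in the same row, column, and 3x3 box.
    for r in range(9):
        candidates[r][col].discard(value)
    for c in range(9):
        candidates[row][c].discard(value)
    br, bc = row - row % 3, col - col % 3
    for r in range(br, br + 3):
        for c in range(bc, bc + 3):
            candidates[r][c].discard(value)
```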
With AI it’s easier to see there are a lot of problems that I don’t know how to solve, but others do. The question is whether it’s wasteful to spend time independently solving that problem. Personally I think it’s good for me to do it, and bad for my employer (at least in the short term). But I can completely understand the desire for higher-ups to get rid of 90% of wheel re-invention, and I do think many programmers spend a lot of time doing exactly that; independently solving problems that have already been solved.
With music this is much more pronounced, because most people are musically illiterate, so even basic mistakes made while dragging characteristics over diffs become invisible. It's an interesting phenomenon, I agree, but it says more about the lack of taste and the illiteracy of the common individual.
But on the point of "thinking hard": with music and artistic production in general, individuals (humans with souls, not NPCs) crave ideas and perspective. It is the play, the relationship between ideas, that is hard to vocalize and describe but can be provocative. Because we cannot describe or understand it, we have no choice other than to provoke in another a similar contemplation.
But make no mistake, nobody is enjoying LLM slop. They have fantasies that now they can produce something of value, or delegate that production. If this becomes true, they instantly lose, and everyone goes directly to the source.
Art is specifically about communicating the inconceivable; it cannot be delegated. If the tool is sufficient to produce art, then the expression is of the tool itself: now they ARE the artist.
You can imagine the artisans who made shirts saying the exact same thing as the first textile factories became operational.
Humans have been coders in the sense we mean for a matter of decades at most - a blip in our existence. We’re capable of far more, and this is yet another task we should cast into the machine of automation and let physical laws do the work for us.
We’re capable of manipulating the universe into doing our bidding, including making rocks we’ve converted into silicon think on our behalf. Making shirts and making code: we’re capable of so much more.
There can be. But you’d have to map the libraries to opcodes and then count the cycles. That’s what people do when they care about that particular optimization. They measure and make guarantees.
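The measuring half, at least, is cheap to do yourself. A minimal sketch (the workload and run count are placeholders; swap in the call you care about):

```python
import statistics, time

def measure_ms(fn, runs=100):
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    # The median resists outliers; the max is what a "10 ms" promise hangs on.
    return statistics.median(samples), max(samples)

median_ms, worst_ms = measure_ms(lambda: sum(range(10_000)))
print(f"median {median_ms:.3f} ms, worst {worst_ms:.3f} ms")
```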
Isn't this also an overstatement, with the real problem being worse? That is, the code being handed back is a great prototype, but it needs polishing/finishing and is ignorant of obvious implicit edge cases unless you explicitly enumerate all of them in your prompts?
For me, the state of things reminds me of a bad job I had years ago.
Worked with a well-regarded, long-tenured but truculent senior engineer who was immune to feedback due to his seniority. He committed code that either didn't run, didn't pass tests, or implemented only the most obvious, robotically literal happy-path interpretation of requirements.
He was, however, very very fast... underbidding teammates on time estimates by 10x.
He would hand back the broken prototype and we'd then spend 10x the time making his code into something you could actually run in production.
Management kept pushing this because he had a great reputation, promised great things, and every once in a while did actually deliver stuff fast. It took years for management to come around to the fact that this was not working.
But it is also true that most programming is tedious and hardly enriching for the mind. In those cases, LLMs can be a benefit. When you have identified the pattern or principle behind a tedious change, an LLM can work like a junior assistant, allowing you to focus on the essentials. You still need to issue detailed and clear instructions, and you still need to verify the work.
Of course, the utility of LLMs is a signal that either the industry is bad at abstracting, or that there's some practical limit.
If I wanted to work on electric power systems I would have become an electrician.
(The transition is happening.)
For instance, I think about the pervasive interstate overpass bridge. There was a time long ago when building bridges was a craft. But now I see like ten of these bridges every day, each of which is better - in the sense of how much load they can support and durability and reliability - than the best that those craftsmen of yore could make.
This doesn't mean I'm in any way immune to nostalgia. But I try to keep perspective, that things can be both sad and ultimately good.
I personally think that we're not done evolving, really, and to call it quits today would leave a lot of efficiency and productivity on the table.
The risk of LLMs laying more of these bricks isn't just loss of authenticity and less human elements of discovery and creation, it's further down the path of "there's only one instruction manual in the Lego box, and that's all the robots know and build for you". It's an increased commodification of a few legacy designers' worth of work over a larger creative space than at first seems apparent.
The hard problems should be solved with our own brains, and it behooves us to take that route so we can not only benefit from the learnings, but assemble something novel so the business can differentiate itself better in the market.
For all the other tedium, AI seems perfectly acceptable to use.
Where the sticking point comes in is when CEOs, product teams, or engineering leadership put too much pressure on using AI for "everything", in that all solutions to a problem should be AI-first, even if it isn't appropriate—because velocity is too often prioritized over innovation.
And worse: with few opportunities to grow their skills from rigorous thinking as this blog post describes. Tech workers will be relegated to cleaning up after sloppy AI codebases.
Granted, you would learn a lot more if you had pieced your ideas together manually, but it all depends on your own priorities. The difference is, you're not stuck cleaning up after someone else's bad AI code. That's the side to the AI coin that I think a lot of tech workers are struggling with, eventually leading to rampant burnout.
Software developers can use the exact same "lego block" abstractions ("this code just multiplies two numbers") and tell very different stories with it ("this code is the formula for force power", "this code computes a probability of two events occurring", "this code gives us our progress bar state as the combination of two sub-processes", etc).
LLMs have only so many "stories" they are trained on, and so many ways of thinking about the "why" of a piece of code rather than mechanical "what".
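Spelled out, the parent's multiplication example looks like this: an identical "what", three different "whys":

```python
def force(mass, acceleration):
    return mass * acceleration             # F = m * a: a physical law

def p_both(p_a, p_b):
    return p_a * p_b                       # chance of two independent events

def overall_progress(download_frac, install_frac):
    return download_frac * install_frac    # progress bar from two sub-processes
```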
Will a company pay me more for knowing those details? Will I be more effectively able to architect and design solutions that a company will pay my employer to contract me to do, and that my employer pays me for? They pay me decently not because I “codez real gud”. They pay me because I can go from an empty AWS account, an empty repo, and ambiguous customer requirements to a working solution (after spending time talking to the customer), to a full, well-thought-out architecture plus code, on time, on budget, and meeting requirements.
I am not bragging; I'm old, and those are table stakes for being able to stay in this game for three decades.
So, tackle other problems. You can now do things you couldn't even have contemplated before. You've been handed a near-godlike power, and all you can do is complain about it?
Do we? I don't think people appreciate Tarantino more than gangster movies. I don't think people appreciate Tarantino more than Pulp Fiction. Frankly, Tarantino doesn't factor in at all.
> It is about the process.
I never considered the process when watching Pulp Fiction. It's the finished product, not the process, that matters.
Put it this way: we know who Tarantino is because of Pulp Fiction, not the other way around.
Software engineering is all about making sure the what actually solves the why, and making the why visible enough in the what so that we can modify the latter when the former changes (it always does).
Current LLMs are not about transforming a why into a what. They're about transforming an underspecified what into some what that we hope fits the why. But as we all know from the 5 Whys method, whys are recursive structures, and most software engineering is about diving into the details of the why. The what is easy once that's done, because computers are simple mechanisms if you choose the correct level of abstraction for the project.
One of the reasons Barry Lyndon is over 50 years old and still looks like no other movie today is because Kubrick tracked down a few lenses originally designed for NASA and had custom mounts built for them to use with cinema cameras.
https://neiloseman.com/barry-lyndon-the-full-story-of-the-fa...
> Popular and highly acclaimed games are not crap because they didn't write their own physics engine (Zelda uses Havok)
Super Mario Bros is known for having a surprisingly subtle and complex physics system that made the game feel both challenging and fair, even for players very new to consoles. Celeste, a newer game also famous for being very difficult yet not feeling punishing, does something similar:
https://maddymakesgames.com/articles/celeste_and_towerfall_p...
> or their own game engine (Plenty of great games use Unreal or Unity)
And Minecraft doesn't, which is why few other games at the time of its release felt and played like it.
You're correct that no one builds everything from scratch all the time. However, if all you ever do is cobble a few pre-made things together, I think you'll discover that nothing you make is ever that interesting or enduring in value. Sure, it can be useful, and satisfying. But the kinds of things that really leave a mark on people, that affect them deeply, always have at least some aspect where the creator got obsessive and went off the deep end and did their own thing from scratch.
Further, you'll never learn what a transformative experience it can be to be that creator who gets obsessive about a thing. You'll miss out on discovering the weird parts of your own soul that are more fascinated by some corner of the universe than anyone else is.
I have a lot of regrets in my life, but I don't regret the various times I've decided I've deeply dug into some thing and doing it from scratch. Often, that has turned out later to be some of the most long-term useful things I've done even though it seemed like a selfish indulgence at the time.
Of course, it's your life. But consider that there may be a hidden cost to always skimming along across the tops of the stacks of things that already exist out there. There is growth in the depths.
If you are a cook wanting to open a restaurant, you will be delegating; it's the same thing with AI. If you are fine only doing what your own hands can do in the time allotted, go ahead and cook in your kitchen.
But I need to make money to be able to trade for the food I eat.
Harvard Business Review and probably hundreds of other online content providers provide some simple rules for meetings yet people don't even do these.
1. Have a purpose / objective for the meeting. I consider meetings to fall into one of three broad categories: information distribution, problem solving, decision making. Knowing which will let the meeting go a lot smoother, or even let it be moved to something like an email and be done with it.
2. Have an agenda for the meeting. Put the agenda in the meeting invite.
3. If there are any pieces of pre-reading or related material to be reviewed, attach it and call it out in the invite. (But it's very difficult to get people to spend the time preparing for a meeting.)
4. Take notes during the meeting and identify any action items and who will do them (preferably with an initial estimate). Review these action items and people responsible in the last couple of minutes of the meeting.
5. Send out the notes and action items.
Why aren't we doing these things? I don't know, but I think if everyone followed these for meetings of 3+ people, we'd probably see better meetings.
That’s the whole point. You become a customer of an AI service, you get what you want but it wasn’t done by you. You get money but not the feeling of accomplishment from cracking a problem. Like playing a video game following a solution or solving a crossword puzzle with google.
I'm trying my best to adapt to being a "centaur" in this world. (In Chess it has become statistically evident that Human and Bot players of Chess are generally "worse" than the hybrid "Centaur" players.) But even "centaurs" are going to be increasingly taken for granted by companies, and at least for me the sense is growing that as WOPR declared about tic-tac-toe (and thermo-nuclear warfare) "a curious game, the only way to win is not to play". I don't know how I'd bootstrap an entirely new career at this point in my life, but I keep feeling like I need to try to figure that out. I don't want to just be a janitor of other people's messes for the rest of my life.
It isn't all great; skills that feel important have already started atrophying, but other skills have been strengthened. The hardest part is pacing oneself, as well as figuring out how to start cracking certain problems.
AI assistance in programming is a service, not a tool. You are commissioning Anthropic, OpenAI, etc. to write the program for you.
This seems to be a common narrative, but TBH I don't really see it. Where is all the amazing output from this godlike power? It certainly doesn't seem like tech is suddenly improving at a faster pace. If anything, it seems to be regressing in a lot of cases.
I think the point is that the finished product depends on the process.
And don’t forget, it’s more likely they’ll find someone cheaper who can write the same prompts as you than someone with your kind of experience in cracking problems.
It's a carved wooden dragon that my dad got from Indonesia (probably about 50 years ago).
It's hard to appreciate, if you aren't holding it, but it weighs a lot, and is intricately carved, all over.
I guarantee that the carver used a Dremel.
I still have a huge amount of respect for their work. That wood is like rock. I would not want to carve it with hand tools.
There's just some heights we can't reach, without a ladder.
Aren't Legos known for their ability to enable creativity and endless possibilities? It doesn't feel that different from the clay analogy, except a bit coarser grained.
With an LLM, you put in a high-level description, and then check in the "machine code" (generated code).
Why not just use a library at that point? We already have support for abstractions in programming.
Those types of developers on the enterprise dev side - where most developers work - were becoming a commodity a decade ago and wages have been basically stagnant. Now those types of developers are finding it hard to stand out and get noticed.
The trick is to move “up the stack” and closer to the customer whether that be an internal customer or external customer and be able to work at a higher level of scope, impact and ambiguity.
https://www.levels.fyi/blog/swe-level-framework.html
It’s been well over a decade and 6 jobs ago that I had to do a coding interview to prove I was able “to codez real gud”, every job I’ve had since then has been more concerned with whether I was “smart and get things done”. That could mean coding, leading teams, working with “the business”, being on Zoom calls with customers, flying out to the customers site, or telling a PE backed company with low margins that they didn’t need a team of developers, they needed to outsource complete implementations to other companies.
I’ve always seen coding as grunt work. But the only way to go from requirements -> architectural vision -> result and therefore getting money in my pocket.
My vision was based on what I could do myself in the allotted time at first and then what I could do with myself + leading a team. Now it’s back to what I can do by myself + Claude Code and Codex.
As far as the first question, my “fun” during my adult life has come from teaching fitness classes until I was 35 and running with friends in charity races on the weekend, and just hanging out, spending time with my (now grown) stepsons after that and for the past few years just spending time with my wife and traveling, concerts, some “digital nomadding” etc
That's how I have been using AI the entire time. I do not use Claude Code or Codex. I just use AI to ask questions instead of parsing the increasingly poor Google search results.
I just use the chat options in the web applications with manual copy/pasting back and forth if/when necessary. It's been wonderful because I feel quite productive, and I do not really have much of an AI dependency. I am still doing all of my work, but I can get a quicker answer to simple questions than parsing through a handful of outdated blogs and StackOverflow answers.
If I have learned one thing about programming computers in my career, it is that not all documentation (even official documentation) was created equally.
I haven’t counted cycles since programming assembly on a 65C02, where you could save a clock cycle by accessing memory in the first page of memory (a two-byte LDA $02 instead of a three-byte LDA $0201).
I agree the info is out there about how to run effective meetings.
You get paid in the top 1% globally
You have benefits
Some hope or dreams for what to do with your future, life after work, retirement.
You get to work with other people, overseas.
Talk to those contractors sometimes. They are under tremendous pressure. They are mistreated. One wrong move, they're gone. They undergo tremendous prejudices, and soft racism everyday especially by us FTEs.
You find out that they struggle with the drudgery as well, looking for solutions, better understanding, etc.
We all feel disposable by our corporate masters, but they feel it even more so.
Be the change you want to see in the world.
The coding is the easy part.
With LLMs and advanced models, even more so.
We have lots of documentation. Arguably too much - it quickly fills much of the claude opus context window with relevant documentation alone, and even then repeatedly outputs things directly counter to the documentation it just ingested.
For OSes: POSIX, or the MSDN documentation for Windows.
Compiler bugs and OS bugs are extremely rare so we can rely on them to follow their spec.
AI bugs are very much expected when the "spec" (the prompt) is correct, and since the prompt is written using imprecise human language likely by people that are not used to writing precise specifications, the prompt is likely either mistaken or insufficiently specified.
Phillip G. Armour The Five Orders of Ignorance https://www.researchgate.net/publication/27293624_The_Five_O...
Gone are the days of hopeless Googling where 20 minutes of research becomes 3 hours with the possibility of having zero solutions. The sheer efficiency of having reliable, immediate answers is a tremendous improvement, even if you're choosing to write everything by hand using it as a reference.
Gladly! I think what I would choose is building on-shore teams exclusively. That's the change I'd like to see more of, while overseas teams build their own economies instead of ripping away jobs from domestic citizens in an already difficult job market.