I started out very sceptical. When Claude Code landed, I got completely seduced — borderline addicted, slot machine-style — by what initially felt like a superpower. Then I actually read the code. It was shockingly bad. I swung back hard to my earlier scepticism, probably even more entrenched than before.
Then something shifted. I started experimenting. I stopped giving it orders and began using it more like a virtual rubber duck. That made a huge difference.
It’s still absolute rubbish if you just let it run wild, which is why I think “vibe coding” is basically just “vibe debt” — because it just doesn’t do what most (possibly uninformed) people think it does.
But if you treat it as a collaborator — more like an idiot savant with a massive brain but no instinct or nous — or better yet, as a mech suit [0] that needs firm control — then something interesting happens.
I’m now at a point where working with Claude Code is not just productive, it actually produces pretty good code, with the right guidance. I’ve got tests, lots of them. I’ve also developed a way of getting Claude to document intent as we go, which helps me, any future human reader, and, crucially, the model itself when revisiting old code.
What fascinates me is how negative these comments are — how many people seem closed off to the possibility that this could be a net positive for software engineers rather than some kind of doomsday.
Did Photoshop kill graphic artists? Did film kill theatre? Not really. Things changed, sure. Was it “better”? There’s no counterfactual, so who knows? But change was inevitable.
What’s clear is this tech is here now, and complaining about it feels a bit like mourning the loss of punch cards when terminals showed up.
[0]: https://matthewsinclair.com/blog/0178-why-llm-powered-progra...
This is my experience in general. People seem to be impressed by the LLM output until they actually comprehend it.
The fastest way to have someone break out of this illusion is tell them to chat with the LLM about their own expertise. They will quickly start to notice errors in the output.
When it works it’s brilliant.
There is a threshold point as part of the learning curve where you realize you are in a pile of spaghetti code and think it actually saves no time to use LLM assistant.
But then you learn to avoid the bad parts - thus they don’t take your time anymore - and the good parts start paying back in heaps of the time spent learning.
They are not zero effort tools.
There is a non-trivial learning cost involved.
I just don't think the interest of the profession control. The travel agents had interests too!
Having plenty of initial discussion and distilling that into requirements documents aimed for modularized components which can all be easily tackled separately is key.
I tried the latest Claude for a very complex wrapper around the AWS Price APIs who are not easy to work with. Down a 2,000 line of code file, I found Claude faking some API returns by creating hard coded values. A pattern I have seen professional developers being caught on while under pressure to deliver.
This will be a boon to the human skilled developers, that will be hired at $900 dollars an hour to fix bugs of a subtlety never seen before.
"Artificial intelligence is profoundly — and probably unfairly — threatening to visual artists"
This feels asserted without any real evidence
Far more importantly, though, artists haven't spent the last quarter century working to eliminate protections for IPR. Software developers have.
Finally, though I'm not stuck on this: I simply don't agree with the case being made for LLMs violating IPR.
I have had the pleasure, many times over the last 16 years, of expressing my discomfort with nerd piracy culture and the coercive might-makes-right arguments underpinning it. I know how the argument goes over here (like a lead balloon). You can agree with me or disagree. But I've earned my bona fides here. The search bar will avail.
And so what? Tell it to the Graphviz diagram creators, entry level Javascript programmers, horse carriage drivers, etc. What's special?
> .. and does so by effectively counterfeiting creative expression
What does this actually mean, though? ChatGPT isn't claiming to have "creative expression" in this sense. Everybody knows that it's generating an image using mathematics executed on a GPU. It's creating images. Like an LLM creates text. It creates artwork in the same sense that it creates novels.
> Far more importantly, though, artists haven't spent the last quarter century working to eliminate protections for IPR. Software developers have.
Programmers are very particular about licenses in opposition to your theory. Copyleft licensing leans heavily on enforcing copyright. Besides, I hear artists complain about the duration of copyright frequently. Pointing to some subset of programmers that are against IPR is just nutpicking in any case.
I think the case we are making is there is no such thing as intellectual property to begin with and the whole thing is a scam created by duck taping a bunch of different concepts together when they should not be grouped together at all.
We need to understand what kind of guard rails to put these models on for optimal results.
We don’t even have a solid education program for software engineering - possibly for the same reason.
The industry loves to run on the bleeding edge, rather than just think for a minute :)
This is the only piece of human work left in the long run, and that’s providing training data on taste. Once we hook up a/b testing on ai creative outputs, the LLM will know how to be creative and not just duplicative. The ai will never have innate taste, but we can feed it taste.
We can also starve it of taste, but that’s impossible because humans can’t stop providing data. In other words, never tell the LLM what looks good and it will never know. A human in the most isolated part of the world can discern what creation is beautiful and what is not.
Things like this are expressions of preference. The discussion will typically devolve into restatements of the original preference and appeals to special circumstances.
I think graphic designers would be a lot less angry if AIs were trained on licensed work… thats how the system worked up until now after all.
That's a bit beside the point, which is that AI will not be just another tool, it will take ALL the jobs, one after another.
I do agree it's absolutely great though, and being against it is dumb, unless you want to actually ban it- which is impossible.
Quite the opposite, I'd say that it's what it has most. What are "hallucinations" if not just a display of immense creativity and intuition? "Here, I'll make up this API call that's I haven't read about anywhere but sounds right".
I have a lot of artist friends but I still appreciate that diffusion models are (and will be with further refinement) incredibly useful tools.
What we're seeing is just the commoditisation of an industry in the same way that we have many, many times before through the industrial era, etc.
I'm an engineer through and through. I can ask an LLM to generate images just fine, but for a given target audience for a certain purpose? I would have no clue. None what so ever. Ask me to generate an image to use in advertisement for Nuka Cola, targeting tired parents? I genuinely have no idea of where to even start. I have absolutely no understanding of the advertisement domain, and I don't know what tired parents find visually pleasing, or what they would "vibe" with.
My feeble attempts would be absolute trash compared to a professional artist who uses AI to express their vision. The artist would be able to prompt so much more effectively and correct the things that they know from experience will not work.
It's the exact same as with coding with an AI - it will be trash unless you understand the hows and the whys.
Busy code I need to generate is difficult to do with AI too. Because then you need to formalize the necessary context for an AI assistant, which is exhausting with an unsure result. So perhaps it is just simpler to write it yourself quickly.
I understand comments being negative, because there is so much AI hype without having to many practical applications yet. Or at least good practical applications. Some of that hype is justified, some of it is not. I enjoyed the image/video/audio synthesis hype more tbh.
Test cases are quite helpful and comments are decent too. But often prompting is more complex than programming something. And you can never be sure if any answer is usable.
How is creative expression required for such things?
Also, I believe that we're just monkey meat bags and not magical beings and so the whole human creativity thing can easily be reproduced with enough data + a sprinkle of randomness. This is why you see trends in supposedly thought provoking art across many artists.
Artists draw from imagination which is drawn from lived experience and most humans have roughly the same lives on average, cultural/country barriers probably produce more of a difference.
Many of the flourishes any artist may use in their work is also likely used by many other artists.
If I commission "draw a mad scientist, use creative license" from several human artists I'm telling you now that they'll all mostly look the same.
Although I've seen a little American TV ads before, that shit's basically radioactively coloured, same as your fizzy drinks.
I believe you, did you try asking ChatGPT or Claude though?
You can ask them a list of highest-level themes and requirements and further refine from there.
This might be how one looks at it in the beginning, when having no experience or no idea about coding. With time one will realize it's more about creating the correct mental model of the problem at hand, rather than the activity of coding itself.
Once this realized, AI can't "save" you days of work, as coding is the least time consuming part of creating software.
C++, Linux: write an audio processing loop for ALSA
reading audio input, processing it, and then outputting
audio on ALSA devices. Include code to open and close
the ALSA devices. Wrap the code up in a class. Use
Camelcase naming for C++ methods.
Skip the explanations.
```
Run it through grok: https://grok.com/
When I ACTUALLY wrote that code the first time, it took me about two weeks to get it right. (horrifying documentation set, with inadequate sample code).Typically, I'll edit code like this from top to bottom in order to get it to conform to my preferred coding idioms. And I will, of course, submit the code to the same sort of review that I would give my own first-cut code. And the way initialization parameters are passed in needs work. (A follow-on prompt would probably fix that). This is not a fire and forget sort of activity. Hard to say whether that code is right or not; but even if it's not, it would have saved me at least 12 days of effort.
Why did I choose that prompt? Because I have learned through use that AIs do will well with these sorts of coding tasks. I'm still learning, and making new discoveries every day. Today's discovery: it is SO easy to implement SQLLite database in C++ using an AI when you go at it the right way!
Whatever can be replaced by AI will, cause it is easier for business people to deal with than real people.
It changed the skill set but it didn’t “kill the graphic arts”
Rotoscoping in photoshop is rotoscoping. Superimposing an image on another in photoshop is the same as with film, it’s just faster and cheaper to try again. Digital painting is painting.
AI doesn’t require an artist to make “art”. It doesn’t require skill. It’s different than other tools
Saas just seems very much like a terminator seed situation in the end.
e.g: MUI, typescript:
// make the checkbox label appear before the checkbox.
Tab. Done. Delete the comment.vs. about 2 minutes wading through the perfectly excellent but very verbose online documentation to find that I need to set the "labelPlacement" attribute to "start".
Or the tedious minutia that I am perfectly capable of doing, but it's time consuming and error-prone:
// execute a SQL update
Tab tab tab tab .... Done, with all bindings and fields done, based on the structure that's passed as a parameter to the method, and the tables and fieldnames that were created in source code above the current line. (love that one).As to creativity, that's something I know too little about to define it, but it seems reasonable that it's even more "fuzzy" than intuition. On the opposite, causal relationships are closer to hard logic, which is what LLMs struggle with- as humans do, too.
Is the matrix a ripoff of the Truman show? Is Oldboy derivative of Oedipus?
Saying everything is derivative is reductive.
> vector art pigeonholes art into something that can be used for machine learning
Look around, AI companies are doing just fine with raster art.
The only thing we agree on is that this will hurt workers
AI currently can’t reliably make 3d objects so AI can’t make you a sculptor.
If I can reduce this even by 10% for 20 dollars it’s a bargain.
Sure it was easier to do it myself. But putting in the time to train, give context, develop guardrails, learn how to monitor etc ultimately taught me the skills needed to delegate effectively and multiply the teams output massively as we added people.
It's early days but I'm getting the same feeling with LLMs. It's as exhausting as training an overconfident but talented intern, but if you can work through it and somehow get it to produce something as good as you would do yourself, it's a massive multiplier.
it might be ok since what you were thinking about is probably not a good idea in the first place for various reasons, but once in a while stars align to produce the unicorn, which you want to be if you're thinking about building something.
caveat: maybe you just want to build in a niche, it's fine to think hard in such places. usually.
"Claude gives up and hardcodes the answer as a solution" - https://www.reddit.com/r/ClaudeAI/comments/1j7tiw1/claude_gi...
Institution scale lack of deep thinking is the main issue.
I haven't tried Claud code yet however. Maybe that approach is more on point.
I'd challenge this one; is it more complex, or is all the thinking and decision making concentrated into a single sentence or paragraph? For me, programming something is taking a big high over problem and breaking it down into smaller and smaller sections until it's a line of code; the lines of code are relatively low effort / cost little brain power. But in my experience, the problem itself and its nuances are only defined once all code is written. If you have to prompt an AI to write it, you need to define the problem beforehand.
It's more design and more thinking upfront, which is something the development community has moved away from in the past ~20 years with the rise of agile development and open source. Techniques like TDD have shifted more of the problem definition forwards as you have to think about your desired outcomes before writing code, but I'm pretty sure (I have no figures) it's only a minority of developers that have the self-discipline to practice test-driven development consistently.
(disclaimer: I don't use AI much, and my employer isn't yet looking into or paying for agentic coding, so it's chat style or inline code suggestions)
You probably don't have those views. But I think Thomas' point is that the profession as a whole has been crying "information wants to be free" for so many years, when what they meant was "information I don't want to pay for wants to be free" - and the hostile response to AI training on private data underlines that.
I have an older Mediawiki install that's been overrun by spam. It's on a server I have root access on. With Claude, I was able to rapidly get some Python scripts that work against the wiki database directly and can clean spam in various ways, by article ID, title regex, certain other patterns. Then I wanted to delete all spam users - defined here as users registered after a certain date whose only edit is to their own user page - and Claude made a script for that very quickly. It even deployed with scp when I told it where to.
Looking at the SQL that ended up in the code, there's non-obvious things such as user pages being pages where page_namespace = 2. The query involves the user, page, actor and revision tables. I checked afterwards, MediaWiki has good documentation for its database tables. Sure, I could have written the SQL myself based on that documentation, but certainly not have the query wrapped in Python and ready to run in under a minute.
But you're not training LLMs as you use them really - do you mean that it's best to develop your own skill using LLMs in an area you already understand well?
I'm finding it a bit hard to square your comment about it being exhausting to catherd the LLM with it being a force multiplier.
You just explained how your work was affected by a big multiplier. At the end of training an intern you get a trained intern -- potentially a huge multiplier. ChatGPT is like an intern you can never train and will never get much better.
These are the same people who would no longer create or participate deeply in OSS (+100x multipler) bragging about the +2x multiplier they got in exchange.
Copilot was what i was looking for, thank you. I have it installed in Webstorm already but I haven't messed with this side of it.
3D models can be generated quite well already. Good enough for a sculpture.
If you know what you're doing you can still "teach" them though, but it's on you to do that - you need to keep on iterating on things like the system prompt you are using and the context you feed in to the model.
In what way are these two not the same? It isn't like icons or ui panels are more original than the code that runs the app.
Or are you saying only artists are creating things of value and it is fine to steal all the work of programmers?
Core to Ptacek's point is that everything has changed in the last 6 months. As you and I presume he agree, the use of off-the-shelf LLMs in code was kinda garbage. And I expect the skepticism he's knocking here ("stochastic parrots") was in fact accurate then.
But it did get a lot of people (and money) to rush in and start trying to make something useful. Like the stone soup story, a lot of other technology has been added to the pot, and now we're moving in the direction of something solid, a proper meal. But given the excitement and investment, it'll be at least a few years before things stabilize. Only at that point can we be sure about how much the stone really added to the soup.
Another counterfactual that we'll never know is what kinds of tooling we would have gotten if people had dumped a few billion dollars into code tool improvement without LLMs, but with, say, a lot of more conventional ML tooling. Would the tools we get be much better? Much worse? About the same but different in strengths and weaknesses? Impossible to say.
So I'm still skeptical of the hype. After all, the hype is basically the same as 6 months ago, even though now the boosters can admit the products of 6 months ago sucked. But I can believe we're in the middle of a revolution of developer tooling. Even so, I'm content to wait. We don't know the long term effects on a code base. We don't know what these tools will look like in 6 months. I'm happy to check in again then, where I fully expect to be again told: "If you were trying and failing to use an LLM for code 6 months ago †, you’re not doing what most serious LLM-assisted coders are doing." At least until then, I'm renewing my membership in the Boring Technology Club: https://boringtechnology.club/
Humans really like to anthropomorphize things. Loud rumbles in the clouds? There must be a dude on top of a mountain somewhere who's in charge of it. Impressed by that tree? It must have a spirit that's like our spirits.
I think a lot of the reason LLMs are enjoying such a huge hype wave is that they invite that sort of anthropomorphization. It can be really hard to think about them in terms of what they actually are, because both our head-meat and our culture has so much support for casting things as other people.
Combine that with when you’re reading the code it’s often much easier to develop a prototype solution as you go and you end up with prompting feeling like using 4 men to carry a wheelbarrow instead of having 1 push it.
Edit: grammar
You mean these?
I use AI everyday but you’ve got hundreds of billions of dollars and Scam Altman (known for having no morals and playing dirty) et al on “your” side. The only thing AI skeptics have is anecdotes and time. Having a principled argument isn’t really possible.
The AI skeptics are mostly correctly reacting to the AI hypists, who are usually shitty linkedin influencer type dudes crowing about how they never have to pay anyone again. its very natural, even intelligent to not trust this now that its filling the same bubble as NFTs a few years ago. I think its okay to stay skeptical and see where the chips fall in a few years at this point.
Different people have different weird tendencies in different directions. Some people irrationally assume that things aren’t going to change much. Others see a trend and irrationally assume that it will continue on a trend line.
Synthesis is hard.
Understanding causality is even harder.
Savvy people know that we’re just operating with a bag of models and trying to choose the right combination for the right situation.
This misunderstanding is one reason why doomers, accelerations, and “normies” talk past each other or (worse) look down on each other. (I’m not trying to claim epistemic equivalence here; some perspectives are based on better information, some are better calibrated than others! I’m just not laying out my personal claims at this point. Instead, I’m focusing on how we talk to each other.)
Another big source of misunderstanding is about differing loci of control. People in positions of influence are naturally inclined to think about what they can do, who they know, and where they want to be. People farther removed feel relatively powerless and tend to hold onto their notions of stability, such as the status quo or their deepest values.
Historically, programmers have been quite willing to learn new technologies, but now we’re seeing widespread examples where people’s plasticity has limits. Many developers cannot (or are unwilling to) wrap their minds around the changing world. So instead of confronting the reality they find ways to deny it, consciously or subconsciously. Our perception itself is shaped by our beliefs, and some people won’t even perceive the threat because it is too strange or disconcerting. Such is human nature: we all do it. Sometimes we’re lucky enough to admit it.
Getting AI to hallucinate its way into secure and better quality code seems like the antithesis of this. Why don't we have AI and robots working for humanity with the boring menial tasks - mowing laws, filing taxes, washing dishes, driving cars - instead of attempting to take on our more critical and creative outputs - image generation, movie generation, book writing and even website building.
Of course, in aggregate AI makes me capable in a far broader set of problem domains. It would be tough to live without it at this stage, but needs to be used for what it is actually good at, not what we hope it will be good at.
> Employment of travel agents is projected to grow 3 percent from 2023 to 2033, about as fast as the average for all occupations.
The last year there is data for claims 68,800 people employed as travel agents in the US. It's not a boom industry by any means, but it doesn't appear they experienced the apocalypse that Hacker News believes they did, either.
I don't know how to easily find historical data, unfortunately. BLS publishes the excel sheets, but pulling out the specific category would have to be done manually as far as I can tell. There's this, I guess: https://www.travelagewest.com/Industry-Insight/Business-Feat...
It appears at least that what happened is, though it may be easier than ever to plan your own travel, there are so many more people traveling these days than in the past that the demand for travel agents hasn't crashed.
Ultimately the thing that impresses me is that LLMs have replaced google search. The thing that disappoints me is that their code is often convincing but wrong.
Coming from a hard-engineering background, anything that is unreliable is categorized as bad. If you come from the move-fast-break-things world of tech, then your tolerance for mistakes is probably a lot higher.
- Split things into small files, today’s model harnesses struggle with massive files
- Write lots of tests. When the language model messes up the code (it will), it can use the tests to climb out. Tests are the best way to communicate behavior.
- Write guides and documentation for complex tasks in complex codebases. Use a language model for the first pass if you’re too lazy. Useful for both humans and LLMs
It’s really: make your codebase welcoming for junior engineers
Has some stats. It seems pretty clear the interests of travel agents did not count for much in the face of technological change.
With LLMs the better I get at the scaffolding and prompting, the less it feels like catherding (so far at least). Hence the comparison.
Or not. I watched Copilot's agent mode get stuck in a loop for most of an hour (to be fair, I was letting it continue to see how it handles this failure case) trying to make a test pass.
Technical? Yes. Hardcore expert premium technical, no. The people who want the service can pay someone with basic to moderate skills a few hundred bucks to spend a day working on it, and that's all good.
Could I get an LLM to do much of the work? Yes, but I could also do much of the work without an LLM. Someone who doesn't understand the first principles of domains, Wordpress, hosting and so on, not so much.
This was actually the only point in the essay with which I disagree, and it weakens the overall argument. Even 2 years ago, before agents or reasoning models, these LLMs were extremely powerful. The catch was, you needed to figure out what worked for you.
I wrote this comment elsewhere: >>44164846 -- Upshot: It took me months to figure out what worked for me, but AI enabled me to produce innovative (probably cutting edge) work in domains I had little prior background in. Yes, the hype should trigger your suspicions, but if respectable people with no stake in selling AI like @tptacek or @kentonv in the other AI thread are saying similar things, you should probably take a closer look.
Yes, but you're expensive.
And these models are getting better at solving a lot of business-relevant problems.
Soon all business-relevant problems will be bent to the shape of the LLM because it's cost-effective.
I agree, but even smaller than thinking in agile is just a tight iteration loop when i'm exploring a design. My ADHD makes upfront design a challenge for me and I am personally much more effective starting with a sketch of what needs to be done and then iterating on it until I get a good result.
The loop of prompt->study->prompt->study... is disruptive to my inner loop for several reasons, but a big one is that the machine doesn't "think" like i do. So the solutions it scaffolds commonly make me say "huh?" and i have to change my thought process to interpet them and then study them for mistakes. My intution and iteration is, for the time being, more effective than this machine assited loop for the really "interesting" code i have to write.
But i will say that AI has been a big time saver for more mundane tasks, especially when I can say "use this example and apply it to the rest of this code/abstraction".
That is why some people don't find AI that essential, if you have the knowledge, you already know how to find a specific part in the documentation to refresh your semantics and the time saved is minuscule.
It’s very unlikely simply training an LLM on “unlicensed” work constitutes infringement. It could possibly be that the model itself, when published, would represent a derivative work, but it’s unlikely that most output would be unless specifically prompted to be.
I’m impressed with this latest generation of models: they reward hack a lot less. Previously they’d change a failing unit test, but now they just look for reasonable but easy ways out in the code.
I call it reward hacking, and laziness is not the right word, but “knowing what needs to be done and not doing it” is the general issue here. I see it in junior engineers occasionally, too.
This is how I use it mostly. I also use it for boilerplate, like "What would a database model look like that handles the following" you never want it to do everything, though there are tools that can and will and they're impressive, but then when you have a true production issue, your inability to quickly respond will be a barrier.
Write an audio processing loop for pipewire. Wrap the code up in a
C++ class. Read audio data, process it and output through an output
port. Skip the explanations. Use CamelCase names for methods.
Bundle all the configuration options up into a single
structure.
Run it through grok. I'd actually use VSCode Copilot Claude Sonnet 4. Grok is being used so that people who do not have access to a coding AI can see what they would get if they did.I'd use that code as a starting point despite having zero knowledge of pipewire. And probably fill in other bits using AI as the need arises. "Read the audio data, process it, output it" is hardly deep domain knowledge.
I picked coding again a couple of days back and I’m blown away by how much things have changed
It was all manual work until a few months back. Suddenly, its all agents
A 5 second search on DDG ("easyeffects") and a 10 second navigation on github.
https://github.com/wwmm/easyeffects/blob/master/src/plugin_b...
But that is GPL 3.0 and a lot of people want to use the license laundering LLM machine.
N.B. I already know about easyeffects from when I was seeking for a software equalizer
EDIT
Another 30 seconds exploration ("pipewire" on DDG, finding the main site, then goes on the documentation page, and the tutorial section).
https://docs.pipewire.org/audio-dsp-filter_8c-example.html
There's a lot of way to find truthful information without playing Russian roulette with an LLM.
To a first approximation, the answer to both of these is "yes".
There is still a lot of graphic design work out there (though generative AI will be sucking the marrow out of it soon), but far less than there used to be before the desktop publishing revolution. And the kind of work changed. If "graphic design" to you meant sitting at a drafting table with pencil and paper, those jobs largely evaporated. If that was a kind of work that was rewarding and meaningful to you, that option was removed for you.
Theatre even more so. Yes, there are still some theatres. But the number of people who get to work in theatrical acting, set design, costuming, etc. is a tiny tiny fraction of what it used to be. And those people are barely scraping together a living, and usually working side jobs just to pay their bills.
> it feels a bit like mourning the loss of punch cards when terminals showed up.
I think people deserve the right to mourn the loss of experiences that are meaningful and enjoyable to them, even if those experiences turn out to no longer be maximally economically efficient according to the Great Capitalistic Moral Code.
Does it mean that we should preserve antiquated jobs and suffer the societal effects of inefficiency without bound? Probably not.
But we should remember that the ultimate goal of the economic system is to enable people to live with meaning and dignity. Efficiency is a means to that end.
"Create a video of a girl running through a field in the style of Studio Ghibli."
There, someone has specifically prompted the AI to create something visually similar to X.
But would you still consider it a derivative work if you replaced the words "Studio Ghibli" with a few sentences describing their style that ultimately produces the same output?
I can only imagine what this technology will be like in 10 years. But I do know that it's not going anywhere and it's best to get familiar with it now.
I think these days coding is 20% of my job, maybe less. But HN is a diverse audience. You have the full range of web programmers and data scientists all the way to systems engineers and people writing for bare metal. Someone cranking out one-off Python and Javascript is going to have a different opinion on AI coding vs a C/C++ systems engineer and they're going to yell at each other in comments until they realize they don't have the same job, the same goals or the same experiences.
I’d add that Excel didn’t kill the engineering field. It made them more effective and maybe companies will need less of them. But it also means more startups and smaller shops can make use of an engineer. The change is hard and an equilibrium will be reached.
Sure, but I would argue that the UX is the product, and that has radically improved in the past 6-12 months.
Yes, you could have produced similar results before, manually prompting the model each time, copy and pasting code, re-prompting the model as needed. I would strenuously argue that the structuring and automation of these tasks is what has made these models broadly usable and powerful.
In the same way that Apple didn't event mobile phones nor touchscreens nor OSes, but the specific combination of these things resulted in a product that was different in kind than what came before, and took over the world.
Likewise, the "putting the LLM into a structured box of validation and automated re-prompting" is huge! It changed the product radically, even if its constituent pieces existed already.
[edit] More generally I would argue that 95% of the useful applications of LLMs aren't about advancing the SOTA model capabilities and more about what kind of structured interaction environment we shove them into.
I think we'll find that over the next few years the first really big win will be AI tearing down the mountain of tech & documentation debt. Bringing efficiency to corporate knowledge is likely a key element to AI working within them.
I think this ends up being recency bias and terminology hairsplitting, in the end. The number of people working in theatre mask design went to nearly zero quite a while back but we still call the stuff in the centuries after that 'theatre' and 'acting'.
One thing I wish he would have talked about though is maintenance. My only real qualm with my LLM agent buddy is the tendency to just keep adding code if the first pass didn't work. Eventually, it works, sometimes with my manual help. But the resulting code is harder to read and reason about, which makes maintenance and adding features or behavior changes harder. Until you're ready to just hand off the code to the LLM and not do your own changes to it, it's definitely something to keep in mind at minimum.
The fastest way I can transcribe a design is with code or pseudocode. Converting it into English can be hard.
It reminds me a bit of the discussion of if you have an inner monologue. I don't and turning thoughts into English takes work, especially if you need to be specific with what you want.
Feed it to an llm and it implements it. Ideally it can also verify it's solution with your specification code. If LLMs don't gain significantly more general capabilities I could see this happening in the longer term. But it's too early to say.
In a sense the llm turns into a compiler.
Relatively speaking, I would say that film and TV did kill theater
40% of all travel agent jobs lost between 2001 and 2025. Glad I'm not a travel agent.
The main lesson has been that it's actually not much of an enabler and the people doing it end up being specialised and rather expensive consultants.
I think "theatre" is a fairly well-defined term to refer to live performances of works that are not strictly musical. Gather up all of the professions necessary to put those productions on together.
The number of opportunities for those professions today is much smaller than it was a hundred years ago before film ate the world.
There are only so many audience members and a night they spend watching a film or watching TV or playing videogames is a night they don't spend going to a play. The result is much smaller audiences. And with fewer audiences, there are fewer plays.
Maybe I should have been clearer that I'm not including film and video production here. Yes, there are definitely opportunities there, though acting for a camera is not at all the same experience as acting for a live audience.
I notice, because the amount of text has been increased tenfold while the amount of information has stayed exactly the same.
This is a torrent of shit coming down on us, that we are all going to have to deal with it. The vibe coders will be gleefully putting up PRs with 12 paragraphs of "descriptive" text. Thanks no thanks!
But the analysis doesn't stop there, because after the raw quality wash, we have to consider things LLMs can do profoundly better than human coders can. Codebase instrumentation, static analysis, type system tuning, formal analysis: all things humans can do, spottily, on a good day but that empirically across most codebases they do not do. An LLM can just be told to spend an afternoon doing them.
I'm a security professional before I am anything else (vulnerability research, software security consulting) and my take on LLM codegen is that they're likely to be a profound win for security.
But I think my other point still stands: people will need to figure out for themselves how to fully exploit this technology. What worked for me, for instance, was structuring my code to be essentially functional in nature. This allows for tightly focused contexts which drastically reduces error rates. This is probably orthogonal to the better UX of current AI tooling. Unfortunately, the vast majority of existing code is not functional, and people will have to figure out how to make AI work with that.
A lot of that likely plays into your point about the work required to make useful LLM-based applications. To expand a bit more:
* AI is technology that behaves like people. This makes it confusing to reason about and work with. Products will need to solve for this cognitive dissonance to be successful, which will entail a combination of UX and guardrails.
* Context still seems to be king. My (possibly outdated) experience has been the "right" context trumps larger context windows. With code, for instance, this probably entails standard techniques like static analysis to find relevant bits of code, which some tools have been attempting. For data, this might require eliminating overfetching.
* Data engineering will be critical. Not only does it need to be very clean for good results, giving models unfettered access to the data needs the right access controls which, despite regulations like GDPR, are largely non-existent.
* Security in general will need to be upleveled everywhere. Not only can models be tricked, they can trick you into getting compromised, and so there need to even more guardrails.
A lot of these are regular engineering work that is being done even today. Only it often isn't prioritized because there are always higher priorities... like increasing shareholder value ;-) But if folks want to leverage the capabilities of AI in their businesses, they'll have to solve all these problems for themselves. This is a ton of work. Good thing we have AI to help out!
I mean, we do have automation for literally all of those things, to varying degrees of effectiveness.
There's an increasing number of little "roomba" style mowers around my neighborhood. I file taxes every year with FreeTaxUSA and while it's still annoying, a lot of menial "form-filling" labor has been taken away from me there. My dishwasher does a better job cleaning my dishes than I would by hand. And though there's been a huge amount of hype-driven BS around 'self-driving', we've undeniably made advances in that direction over the last decade.
"Garbage in, garbage out", is still the rule for LLM's. If you don't spend billions training them or if you let them feed on their own tail too much they produce nonsense. e.g. Some LLM's currently produce better general search results than google. This is mainly a product of many billions being spent on expert trainers for those LLM's, while google neglects (or actively enshitifies) their search algorithms shamefully. It's humans, not LLM's, producing these results. How good will LLM's be at search once the money has moved somewhere else and neglect sets in?
LLM's aren't going to take everyone's jobs and trigger a singularity precisely because they fall apart if they try to feed on their own output. They need human input at every stage. They are going to take some people's jobs and create new ones for others, although it will probably be more of the former than the latter, or billionaires wouldn't be betting on them.
Doesn't it mean cinema too? edit: Even though it was clear from context you meant live theatre.
Things are different now.
My obligatory comment how analogies are not good for arguments: there is already discussion here that film (etc.) may have killed theatre.
Disillusionment seems to spring from expecting the model to be a god or a genie instead of a code generator. Some people are always going to be better at using tools than other people are. I don't see that changing, even though the tools themselves are changing radically.
You'll not only never know this, it's IMHO not very useful to think about at all, except as an intellectual exercise.
I wish i could impress this upon more people.
A friend similarly used to lament/complain that Kotlin sucked in part because we could have probably accomplished it's major features in Java, and maybe without tons of work, or migration cost.
This is maybe even true!
as an intellectual exercise, both are interesting to think about. But outside of that, people get caught up in this as if it matters, but it doesn't.
Basically nothing is driven by pure technical merit alone, not just in CS, but in any field. So my point to him was the lesson to take away from this is not "we could have been more effective or done it cheaper or whatever" but "my definition of effectiveness doesn't match how reality decides effectiveness, so i should adjust my definition".
As much as people want the definition to be a meritocracy, it just isn't and honestly, seems unlikely to ever be.
So while it's 100% true that billions of dollars dumped into other tools or approaches or whatever may have have generated good, better, maybe even amazing results, they weren't, and more importantly, never would have been. Unknown but maybe infinite ROI is often much more likely to see investment than more known but maybe only 2x ROI.
and like i said, this is not just true in CS, but in lots of fields.
That is arguably quite bad, but also seems unlikely to change.
This is why all the lobby now pushes the govs to not allow any regulation of AI even if courts disagree.
IMHO what will happen anyway is that at some point the companies will "solve" the licensing by training models purely on older synthetic LLM output that will be "public research" (which of course will have the "human" weights but they will claim it doesnt matter).
Makes me wonder if there had been equal investment into specialized tools which used more fine-tuned statistical methods (like supervised learning), that we would have something much better then LLMs.
I keep thinking about spell checkers and auto-translators, which have been using machine learning for a while, with pretty impressive results (unless I’m mistaken I think most of those use supervised learning models). I have no doubt we will start seeing companies replacing these proven models with an LLM and a noticeable reduction in quality.
Building a mental model of a new domain by creating a logical model that interfaces with a domain I'm familiar with lets me test my assumptions and understanding in real time. I can apply previous experience by analogy and verify usefulness/accuracy instantly.
> Upshot: It took me months to figure out what worked for me, but AI enabled me to produce innovative (probably cutting edge) work in domains I had little prior background in. Yes, the hype should trigger your suspicions[...]
Part of the hype problem is that describing my experience sounds like bullshit to anyone who hasn't gone through the same process. The rate that I pick up concepts well enough to do verifiable work with them is literally unbelievable.
Which brings me to your comment. The comparison to Uber drivers is apt, and to use a fashionable word these days, the threat to people and startups alike is "enshittification." These tools are not sold, they are rented. Should a few behemoths gain effective control of the market, we know from history that we won't see these tools become commodities and nearly free, we'll see the users of these tools (again, both people and businesses) squeezed until their margins are paper-thin.
Back when articles by Joel Spolsky regularly hit the top page of Hacker News, he wrote "Strategy Letter V:" https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/
The relevant takeaway was that companies try to commoditize their complements, and for LLM vendors, every startup is a complement. A brick-and-mortar metaphor is that of a retailer in a mall. If you as a retailer are paying more in rent than you're making, you are "working for the landlord," just as if you are making less than 30% of profit on everything you sell or rent through Apple's App Store, you're working for Apple.
I once described that as "Sharecropping in Apple's Orchard," and if I'm hesitant about the direction we're going, it's not anything about clinging to punch cards and ferromagnetic RAM, it's more the worry that it's not just a question of programmers becoming enshittified by their tools, it's also the entire notion of a software business "Sharecropping the LLM vendor's fields."
We spend way too much time talking about programming itself and not enough about whither the software business if its leverage is bound to tools that can only be rented on terms set by vendors.
--------
I don't know for certain where things will go or how we'll get there. I actually like the idea that a solo founder could create a billion-dollar company with no employees in my lifetime. And I have always liked the idea of software being "Wheels for the Mind," and we could be on a path to that, rather than turning humans into "reverse centaurs" that labour for the software rather than the other way around.
Once upon a time, VCs would always ask a startup, "What is your Plan B should you start getting traction and then Microsoft decides to compete with you/commoditize you by giving the same thing away?" That era passed, and Paul Graham celebrated it: https://paulgraham.com/microsoft.html
Then when startups became cheap to launch—thank you increased tech leverage and cheap money and YCombinator industrializing early-stage venture capital—the question became, "What is your moat against three smart kids launching a competitor?"
Now I wonder if the key question will bifurcate:
1. What is your moat against somebody launching competition even more cheaply than smart kids with YCombinator's backing, and;
2. How are you insulated against the cost of load-bearing tooling for everything in your business becoming arbitrarily more expensive?
If an LLM learned something when you gave it commands, it would probably be reflected in some adjusted weights in some of its operational matrix. This is true of human learning, we strengthen some neural connection, and when we receive a similar stimuli in a similar situation sometime in the future, the new stimuli will follow a slightly different path along its neural pathway and result in a altered behavior (or at least have a greater probability of an altered behavior). For an LLM to “learn” I would like to see something similar.
The models are trained primarily on copyrighted material and code written by the very professionals who now must "upskill" to remain relevant. This raises complex questions about compensation and ownership that didn't exist with traditional tools. Even if current laws permit it, the ethical implications are different from Photoshop-like tools.
Previous innovations created new mediums and opportunities. Photoshop didn't replace artists, because it enabled new art forms. Film reduced theater jobs but created an entirely new industry where skills could mostly transfer. Manufacturing automation made products like cars accessible to everyone.
AI is fundamentally different. It's designed to produce identical output to human workers, just more cheaply and/or faster. Instead of creating new possibilities, it's primarily focused on substitution. Say AI could eliminate 20% of coding jobs and reduce wages by 30%:
* Unlike previous innovations, this won't make software more accessible
* Software already scales essentially for free (build once, used by many)
* Most consumer software is already free (ad-supported)
The primary outcome appears to be increased profit margins rather than societal advancement. While previous technological revolutions created new industries and democratized access, AI seems focused on optimizing existing processes without providing comparable societal benefits.This isn't an argument against progress, but we should be clear-eyed about how this transition differs from historical parallels, and why it might not repeat the same historical outcomes. I'm not claiming this will be the case, but that you can see some pretty significant differences for why you might be skeptical that the same creation of new jobs, or improvement to human lifestyle/capabilities will emerge as with say Film or Photoshop.
AI can also be used to achieve things we could not do without, that's the good use of AI, things like Cancer detection, self-driving cars, and so on. I'm speaking specifically of the use of AI to automate and reduce the cost/speed of white collar work like software development.
There's also an intangible benefit of having someone to "bounce off". If I'm using an LLM, I am tweaking the system prompt to slow it down, make it ask questions and bug me before making changes. Even without that, writing out the idea displays quickly potential logic or approach flaws - much fast than writing pseudo in my experience.
They're still much cheaper where I am. But regardless, why not take the Uber while it's cheaper?
There's the argument of the taxi industry collapsing (it hasn't yet). Is your concern some sort of long term knowledge loss from programmers and a rug pull? There are many good LLM options out there, they're getting cheaper and the knowledge loss wouldn't be impactful (and rug pull-able) for at least a decade or so.
I'm saying an artform that is meaningful to its participants and allows them to make a living wage while enriching the lives' of others should not be thoughtlessly discarded in slave to the almighty god of economic efficiency. It's not special pleading because I'd apply this to all artforms and all sorts of work that bring people dignity and joy.
I'm not a reactionary luddite saying that we should still be using oil streetlamps so we don't put the lamplighters out of work. But at the same time I don't think we should automatically and carelessly accept the decimation of human meaning and dignity at the altar of shareholder value.
I wrote a bit about that here - I've turned it off: https://simonwillison.net/2025/May/21/chatgpt-new-memory/
That said, this particular argument you are advancing isn't getting so much heat here because of an unfriendly audience that just doesn't want to hear what you have to say. Or that is defensive because of hypocrisy and past copyright transgressions. It is being torn apart because this argument that artists deserve protection, but software engineers don't is unsound special pleading of the kind you criticize in your post.
Firstly, the idea that programmers are uniquely hypocritical about IPR is hyperbole unsupported by any evidence you've offered. It is little more than a vibe. As I recall, when Photoshop was sold with a perpetual license, it was widely pirated. By artists.
Secondly, the idea -- that you dance around but don't state outright -- that programmers should be singled out for punishment since "we" put others out of work is absurd and naive. "We" didn't do that. It isn't the capital owners over at Travelocity that are going to pay the price for LLM displacement of software engineers, it is the junior engineer making $140k/year with a mortgage.
Thirdly, if you don't buy into LLM usage as violating IPR, then what exactly is your argument against LLM use for the arts? Just a policy edict that thou shalt not use LLMs to create images because it puts some working artists out of business? Is there a threshold of job destruction that has to occur for you to think we should ban LLMs use case by use case? Are there any other outlaws/scarlet-letter-bearers in addition to programmers that will never receive any policy protection in this area because of real or perceived past transgressions?
Its why it is impacting so many people, but also having very small changes to everyday "quality of life" kind of metrics (e.g. ability to eat, communicate, live somewhere, etc). It arguably is more about enabling greater inequality and gatekeeping of wealth to capital - where intelligence and merit matters less in the future world. For most people its hard to see where the positives are for them long term in this story; most everyday folks don't believe the utopia story is in anyway probable.
Maybe? Social proof doesn't mean much to me during a hype cycle. You could say the same thing about tulip bulbs or any other famous bubble. Lots of smart people with no stake get sucked in. People are extremely good at fooling themselves. There are a lot of extremely smart people following all of the world's major religions, for example, and they can't all be right. And whatever else is going on here, there are a lot of very talented people whose fortunes and futures depend on convincing everybody that something extraordinary is happening here.
I'm glad you have found something that works for you. But I talk with a lot of people who are totally convinced they've found something that makes a huge difference, from essential oils to functional programming. Maybe it does for them. But personally, what works for me is waiting out the hype cycle until we get to the plateau of productivity. Those months that you spent figuring out what worked are months I'd rather spend on using what I've already found to work.
I do of course agree that some people are just refusing to "wrap their minds around the changing world". But anybody with enough experience in tech can count a lot more instances of "the world is about to change" than "the world really changed". The most recent obvious example being cryptocurrencies, but there are plenty of others. [1] So I think there's plenty of room here for legitimate skepticism. And for just waiting until things settle down to see where we ended up.
I think it's very useful if one wants to properly weigh the value of LLMs in a way that gets beyond the hype. Which I do.
Again, the argument I'm making regarding artists is that LLMs are counterfeiting human art. I don't accept the premise that structurally identical solutions in software counterfeit their originals.
They were not rotting platforms when they evaporated jobs at that particular moment, about 10-15 years ago. There's no universe where people are making money making websites. One could easily collect multi thousand dollars per month just making websites awhile ago before twitter/fb pages just on the side. There is a long history to web development.
Also, the day of the website has been over for quite awhile so I don't even buy the claim that social media is a rotting platform.
Most importantly, I'll embrace the change and hope for the possible abundance.
It’s important that copyright applies to copying/publishing/distributing - you can do whatever you to copyrighted works by yourself.
No doubt. A few years ago there was some HN post with a video of the completely preposterous process of making diagrams for Crafting Interpreters. I didn't particularly need the book nor do I have room for it but I bought it there and then to support the spirit of all-consuming wankery. So I'm not here from Mitch & Murray & Dark Satanic Mills, Inc either. At the same time, I'm not sold on the idea niche art is the source of human dignity that needs societal protection, not because I'm some ogre but because I'm not convinced that's how actual art actually arts or provides meaning or evolves.
Like another Thomas put it
Not for the proud man apart
From the raging moon I write
On these spindrift pages
Nor for the towering dead
With their nightingales and psalms
But for the lovers, their arms
Round the griefs of the ages,
Who pay no praise or wages
Nor heed my craft or art.
Haha, a good way to describe it. :)
> the idea niche art is the source of human dignity that needs societal protection
I mean... have you looked around at the world today? We've got pick at least some sources of human dignity to protect because there seem to be fewer and fewer left.
Trust me on this, at least: I don't need the typing practice.
Now he can't - it's too closed and complicated
Yet, modern cars are way better and almost never breakdown
Don't see how LLMs are any different than any other tech advancement that obfuscates and abstracts the "fundamentals".
[1]: >>26061935
There's an entire field called computer science. ACM provides curricular recommendations that it updates every few years. People spend years learning it. The same can't be said about the field of, prompting.
Or maybe shouldn't enthusiastically repeat the destruction of the open web in favor of billionaire-controlled platforms for surveillance and manipulation.
It was just 2 weeks ago when the utter incompetence of these robots were in full public display [1]. But none of that will matter to greedy corporate executives, who will prioritize short-term cost savings. They will hop from company to company, personally reaping the benefits while undermining essential systems that users and society rely on with robot slop. That's part of the reason why the C-suites are overhyping the technology. After all, no rich executive has faced consequences for behaving this way.
It's not just software engineering jobs that will take a hit. Society as a whole will suffer from the greedy recklessness.
[1]: >>44050152
One big problem with Claude Code vs Cursor is that you have to pay for the cost of getting over the learning curve. With Cursor I could eat the subscription fee and then goof off for a long time trying to figure out how to prompt it well. With Claude Code a bad prompt can easily cost me $5 a pop, which (irrationally, but measurably) hurts more than the one-time monthly fee for Cursor.
Generally speaking, I find it suspect when someone points to failed predictions of disruptive changes without acknowledging successful predictions. That is selection bias. Many predicted disruptive changes do occur.
Most importantly, if one wants to be intellectually honest, one has to engage against a set of plausible arguments and scenarios. Debunking one particular company’s hyperbolic vision for the future might be easy, but it probably doesn’t generalize.
It is telling to see how many predictions can seem obvious in retrospect from the right frame of reference. In a sense (or more than that under certain views of physics), the future already exists, the patterns already exist. We just have to find the patterns — find the lens or model that will help the messy world make sense to us.
I do my best to put the hype to the side. I try to pay attention to the fundamentals such as scaling laws, performance over time, etc while noting how people keep moving the goalposts.
Also wrt the cognitive bias aspect: Cryptocurrencies didn’t threaten to apply significant (if any) downward pressure on the software development labor market.
Also, even cryptocurrency proponents knew deep down that it was a chicken and the egg problem: boosters might have said adoption was happening and maybe even inevitable, but the assumption was right out there in the open. It also had the warning signs of obvious financial fraud, money laundering, currency speculation, and ponzi scheming.
Adoption of artificial intelligence is different in many notable ways. Most saliently, it is not a chicken and egg problem: it does not require collective action. Anyone who does it well has a competitive advantage. It is a race.
(Like Max Tegmark and others, I view racing towards superintelligence as a suicide race, not an arms race. This is a predictive claim that can be debated by assessing scenarios, understanding human nature, and assigning probabilities.)
For some of the free-er licenses this might mostly be just a lack-of-attribution issue, but in the case of some stronger licenses like GPL/AGPL, I'd argue that training a commercial AI codegen tool (which is then used to generate commercial closed-source code) on licensed code is against the spirit of the license, even if it's not against the letter of the license (probably mostly because the license authors didn't predict this future we live in).
Learning how to use a tool once is easy, relearning how to use a tool every six months because of the rapid pace of change is a pain.
While I agree with the skepticism, what specifically is the stake here? Most code assists have usable plans in the $10-$20 range. The investors are apparently taking a much bigger risk than the consumer would be in a case like this.
Aside from the horror stories about people spending $100 in one day of API tokens for at best meh results, of course.
Of course, that still won’t make artists happy, because they think things like styles can be copyrighted, which isn’t true.
Anyway, if you've tried it and it doesn't work for you, fair enough. I'm not going to tell you you're wrong. I'm just bothered by all the people who are out here posting about AI being bad while refusing to actually try it. (To be fair, I was one of them, six months ago...)
How do we know a software engineer is competent? We can’t tell, and damned if we trust that msc he holds.
Computer science, while fundamental, is very little of help in the emergent large scale problems which ”software engineering” tries to tackle.
The key problem is converting capital investment to a working software with given requirements and this is quite unpredictable.
We don’t know how to effectively train software engineers so that software projects would be predictable.
We don’t know how to train software engineers so that employers would trust their degrees as a strong signal of competence.
If there is a university program that, for example FAANGM (or what ever letters forms the pinnacle of markets) companies respect as a clear signal of obvious competence as a software engineer I would like to know what that is.
I tried running the idea on a programming task I did yesterday. "Create a dialog to edit the contents of THIS data structure." It did actually produce a dialog that worked the first time. Admitedly a very ugly dialog. But all the fields and labels and controls were there in the right order with the right labels, and were all properly bound to props of a react control, that was grudgingly fit for purpose. I suspect I could have corrected some of the layout issues with supplementary prompts. But it worked. I will do it again, with supplementary prompts next time.
Anyway. I next thought about how I would specify the behavior I wanted. The informal specification would be "Open the Looping dialog. Set Start to 1:00, then open the Timebase dialog. Select "Beats", set the tempo to 120, and press the back button. Verify that the Start text edit now contains "30:1" (the same time expressed in bars and beats). Set it to 10:1,press the back button, and verify that the corresponding "Loop" <description of storage for that data omited for clarity> for the currently selected plugin contains 20.0. I can actually see that working (and I plan to see if I can convince an AI to turn that into test code for me).
Any imaginable formal specification for that would be just grim. In fact, I can't imagine a "formal" specification for that. But a natural language specification seems eminently doable. And even if there were such a formal specification, I am 100% positive that I would be using natural language AI prompts to generate the specifications. Which makes me wonder why anyone needs a formal language for that.
And I can't help thinking that "Write test code for the specifications given in the previous prompt" is something I need to try. How to give my AI tooling to get access to UI controls though....
Admittedly, you have to wrap LLMs to with stuff to get them to do that. If you want to rewrite the rules to excluded that then I will have to revise my statement that it is "mostly, but not completely true".
:-P
A thing being great doesn’t mean it’s going to generate outsized levels of hype forever. Nobody gets hyped about “The Internet” anymore, because novel use cases aren’t being discovered at a rapid clip, and it has well and throughly integrated into the general milieu of society. Same with GPS, vaccines, docker containers, Rust, etc., but I mentioned the Internet first since it’s probably on a similar level of societal shift as is AI in the maximalist version of AI hype.
Once a thing becomes widespread and standardized, it becomes just another part of the world we live in, regardless of how incredible it is. It’s only exciting to be a hype man when you’ve got the weight of broad non-adoption to rail against.
Which brings me to the point I was originally trying to make, with a more well-defined set of terms: who cares if someone waits until the tooling is more widely adopted, easy to use, and somewhat standardized prior to jumping on the bandwagon? Not everyone needs to undergo the pain of being an early adopter, and if the tools become as good as everyone says they will, they will succeed on their merits, and not due to strident hype pieces.
I think some of the frustration the AI camp is dealing with right now is because y’all are the new Rust Evangelism Strike Force, just instead of “you’re a bad software engineer if you use a memory unsafe languages,” it’s “you’re a bad software engineer if you don’t use AI.”
AI has helped me pick up my pencil and paper again and realize my flawed knowledge, skills, and even flawed approach to AI.
Now i instructed it to never give me code :). not because the code is bad, but my attempts to extract code from it are more based in laziness than efficiency. they are easy to confuse afterall ;(....
I have tons of fun learning with AI, exploring. going on adventures into new topics. Then when i want to really do something, i try to use it for the things i know i am bad at due to laziness, not lack of knowledge. the thing i fell for first...
it helps me explore a space, then i think or am inspired for some creation, and it helps me structure and plan. when i ask it from laziness to give me the code, it helps me overcome my laziness by explaining what i need to do to be able to see why asking for the code was the wrong approach in the first place.
now, that might be different for you. but i have learned i am not some god tier hacker from the spawl, so i realized i need to learn and get better. perhaps you are at the level you can ask it for code and it just works. hats off in that case ;k (i do hope you tested well!)
People have all these feelings about AI hype, and they just have nothing at all to do with what I'm saying. How well the tools work have not much at all to do with the hype level. Usually when someone says that, they mean "the tools don't really work". Not this time.
I think SSR schedulers are a good example of a Machine Learning algorithms that learns from it’s previous interactions. If you run the optimizer you will end up with a different weight matrix, and flashcards will be schedule differently. It has learned how well you retain these cards. But an LLM that is simply following orders has not learned anything, unless you feed the previous interaction back into the system to alter future outcomes, regardless of whether it “remembers” the original interactions. With the SSR, your review history is completely forgotten about. You could delete it, but the weight matrix keeps the optimized weights. If you delete your chat history with ChatGPT, it will not behave any differently based on the previous interaction.
If we believe that authors should be able decide how their work is used then they can for sure say no machine learning. If we dont believe in intelectual property then anything is for grabs. I am ok with it but the corps are not.
But there is a reason why nobody cares about Adobe AI and everybody uses midjourney…
> As a mid-late career coder, I’ve come to appreciate mediocrity.
Then there's also the embracement of anti-intellectualism. "But I don't want to spend time learning X!" is a surprisingly common comment on, er, Hacker News.
So yeah, no surprise that formal education is looked down on. Doesn't make it right though.
If this is true, can you share your initial draft that you asked the AI to rewrite. Am I not right that the initial draft is more concise and better conveys your actual thought, even though it's not as much convincing.
I guess that makes it ok then for artists to pirate Adobe's product. Also, I live in a music industry hub -- Nashville -- you'll have to forgive me if I don't take RIAA at their word that the music industry is in shambles, what with my lying eyes and all.
> Again, the argument I'm making regarding artists is that LLMs are counterfeiting human art. I don't accept the premise that structurally identical solutions in software counterfeit their originals.
I'm aware of the argument you are making. I imagine most of the people here understand the argument you are making. Its just a really asinine argument and is propped up by all manner of special pleading (but art is different, programmers are all naughty pirates that deserve to be punished) and appeals to authority (check my post history - I've established my bona fides.)
There simply is no serious argument to be made that LLMs reproducing one work product and displacing labor is better or worse than an LLM reproducing a different work product and displacing labor. Nobody is going to display some ad graphic from the local botanical garden's flyer for their spring gala at The Met. That's what is getting displaced by LLM. Banksy isn't being put out of business by stable diffusion. The person making the ad for the botanical garden's flyer has market value because they know how to draw things that people like to see in ads. A programmer has value because they know how to write software that a business is willing to pay for. It is as elitist as it is incoherent to say that one person's work product deserves to be protected but another person's does not because of "creativity."
Your argument holds no more water and deserves to be taken no more seriously than some knucklehead on Mastodon or Bluesky harping about how LLMs are going to cause global warming to triple and that no output LLMs produce has any value.
I also think hype cycles and actual progress can have a variety of relationships. After Bubble 1.0 burst, there were years of exciting progress without a lot of hype. Maybe we'll get something similar here, as reasonable observers are already seeing the hype cycle falter. E.g.: https://www.economist.com/business/2025/05/21/welcome-to-the...
And of course, it all hinges on you being right. Which I get you are convinced of, but if you want to be thorough, you have to look at the other side of it.
But even if we look at your notion of stake, you're missing huge chunks of it. Code bases are extremely expensive assets, and programmers are extremely expensive resources. $10 a month is nothing compared to the costs of a major cleanup or rewrite.
I specifically said: "But anybody with enough experience in tech can count a lot more instances of 'the world is about to change' than 'the world really changed'. I pretty clearly understand that sometimes the world does change.
Funnily, I find it suspect when people accuse me of failing to do things I did in the very post they're responding to. So I think this is a fine time for us both to find better ways to spend our time.
I think it's very useful if one wants to properly weigh the value of LLMs in a way that gets beyond the hype. Which I do.
I strive to not criticize people indirectly: my style is usually closer to say New York than San Francisco. If I disagree with something in particular, I try to make that clear without beating around the bush.
But none of that really matters; I'm not so much engaging on the question of whether you are sold on LLM coding (come over next weekend though for the grilling thing we're doing and make your case then!). The only thing I'm engaging on here is the distinction between the hype cycle, which is bad and will get worse over the coming years, and the utility of the tools.
Relative to what came after, which noone could predict would be guaranteed?
The Model T was in fact pretty bad relative to what came after...
> because they now have something else to promote
something else which is better?
i don't understand the inherent cynicism here.
I think that is one interesting question that I'll want to answer before adoption on my projects, but it definitely isn't the only one.
And maybe the hype cycle will get worse and maybe it won't. Like The Economist, I'm starting to see a turn. The amount of money going into LLMs generally is unsustainable, and I think OpenAI's recent raise is a good example: round 11, $40 billion dollar goal, which they're taking in tranches. Already the largest funding round in history, and it's not the last one they'll need before they're in the black. I could easily see a trough of disillusionment coming in the next 18 months. I agree programming tools could well have a lot of innovation over the next few years, but if that happens against a backdrop of "AI" disillusionment, it'll be a lot easier to see what they're actually delivering.
I have no reason to care whether you use AI or not. I'm giving you this advice just for your sake: Consider whether you are taking a big career risk by avoiding learning about the latest tools of your profession.
Desktop publication software killed many jobs. I worked for a publication where I had colleagues that used to typeset, place images, and use a camera to build pages by hand. That required a team of people. Once Quark Xpress and the like hit the scene, one person could do it all, faster.
In terms of illustration, the tools moved from pen and paper to Adobe Illustrator and Aldus / Macromedia Freehand. Which I'd argue was more of a sideways move. You still needed an illustrators skillset to use these tools.
The difference between what I just described and LLM image generation is the tooling changed to streamline an existing skillset. LLM's replace all of it. Just type something and here's your picture. No art / design skill necessary. Obviously, there's no guarantee that the LLM generated image will be any good. So, I'm not sure the Photoshop analogy works here.
I wish you all the best waiting for a future where the legislature and courts decide that LLM output is violative of copyright law only in the visual arts.
> I just don't want to hear any of this from developers.
Well, you seem to have posted about the wrong topic in the wrong forum then. But you’ve heard what you’ve wanted to hear in the discussion related to this post, so maybe that doesn’t really matter.
This is the thing that worries me the most about AI.
The author's ramblings dovetails with this a bit in their "but the craft" section. They vaguely attack the idea of code-golfing and focusing on coding for the craft as essentially incompatible with the corporate model of programming work. And perhaps they're right. If they are, though, this AI wave/hype being mostly about process-streamlining and such seems to be a distillation of that fact.
My thoughts exactly as an ADHD dev.
Was having trouble describing my main issue with LLM-assisted development...
Thank you for giving me the words!
EDIT to add, I said this more completely a while ago: >>34381996
AI company execs also pretty clearly have a politico-economic idea that they are advancing. The tools may stand on their own but what is the broader effect of supporting them?
Just stop with this, it's bulshitty. There's nothing related between LLMs and the migration from punch cards to terminals, nor to photoshop compared to film theatre, literally [nothing]. This is a pretty underwelming way of trying to say people that are critiques of this are akin to nostalgic people that "miss the old good days", when there are more than enough pertinent reasons to disagree with this tech in this case. Basically calling opposing people irrational.
I'm not talking about doom or software dev dying or any bullshit like that, I'm just saying this kind of point you make in the end is not reasonable.
The problem of anti-intellectualism in SE is just the consequence of the field being more "democratized". Or, to put in other words, the mass is stupid and the mass-man is stupidier and primitive.