* People using it as a tool, aware of its limitations and treating it basically as an intern/boring-task executor (whether it's some code boilerplate, or pooping out/shortening some corporate email), or as a tool to give themselves a summary of a topic they can then bite into more deeply.
* People outsourcing their thinking and entire skillset to it - they usually have very little clue about the topic, are interested only in results, and aren't interested in learning more about the topic or honing their skills in it.
The second group is the one that thinks talking to a chatbot will replace a senior developer.
And this may be fine in certain cases.
I'm learning German and my listening comprehension is marginal. I took a practice test and one of the exercises was listening to 15-30 seconds of audio followed by questions. I did terribly, but it seemed like a good way to practice. I used Claude Code to create a small app to generate short audio dialogs (via ElevenLabs) and a set of questions. I ran the results by my German teacher and he was impressed.
I'm aware of the limitations: sometimes the audio isn't great (it tends to mess up phone numbers), it can only be a small part of my work learning German, etc.
The key part: I could have coded it, but I have other, more important projects. I don't care that I didn't learn about the code. What I care about is that I'm improving my German.
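For anyone curious what the core of such a tool looks like, here's a minimal sketch, assuming the ElevenLabs text-to-speech REST endpoint; the voice ID is a placeholder, and the dialog and questions are hard-coded here, whereas in the actual app they'd presumably be generated by an LLM:

```python
# Minimal sketch: turn a short German dialog into an MP3 via the ElevenLabs
# text-to-speech REST endpoint, then print comprehension questions.
# VOICE_ID and the hard-coded dialog/questions are placeholders.
import os
import requests

API_KEY = os.environ["ELEVENLABS_API_KEY"]
VOICE_ID = "YOUR_VOICE_ID"  # placeholder: any German-capable voice

dialog = (
    "A: Entschuldigung, wann faehrt der naechste Zug nach Koeln? "
    "B: Um 14:35 Uhr von Gleis 7."
)
questions = [
    "Wohin moechte Person A fahren?",
    "Von welchem Gleis faehrt der Zug ab?",
]

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={"text": dialog, "model_id": "eleven_multilingual_v2"},
    timeout=60,
)
resp.raise_for_status()

with open("dialog.mp3", "wb") as f:
    f.write(resp.content)  # MP3 audio bytes returned by the endpoint

for i, q in enumerate(questions, 1):
    print(f"{i}. {q}")
```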
> Group 1: intern/boring task executor
Yup, that makes sense: I'm in group 1.
> Group 2: "outsourcing thinking and entire skillset to it - they usually have very little clue in the topic, are interested only in results"
Also me (in this case), as I'm outsourcing the software development part and just want the final app.
Soo... I've probably thought too much about the originally proposed groups. I'm not sure they are as clear-cut as the original suggests.
And the first group thinks that these tools will enable them to replace a whole team of developers.
From my perspective the distinction is more on the supply side, and we have two generations of AI tools. The first generation was simply talking to a chatbot in a web UI, and it still has its uses: you chat and build up context with it, it relies heavily on its training data, and maybe it reads one file.
The second generation leans into RAG and agentic capabilities (if you can glob and grep or otherwise run a search, congrats you have v1 of your RAG strategy). This is where Gemini actually scans all the docs in our Google Workspace and produces a proposal similar to ones we've written before. (Do we even need document templates anymore?) Or where you start a new programming project and Claude can write all the boilerplate, deploy and set up a barebones test suite within a couple of minutes. There's no doubt that these types of tools give us new capabilities and in some cases save a lot more time than just babbling into chatgpt.com.
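To make the "glob and grep is v1 of your RAG strategy" point concrete, here's a minimal sketch; the file layout, query, and prompt are made up, and the point is just that retrieval can start as a plain recursive text search whose hits get pasted into the model's context:

```python
# "Glob and grep as v1 RAG": retrieval is just a recursive keyword search;
# the matching lines become the context handed to the model.
import glob
import pathlib

def grep_context(root: str, query: str, max_snippets: int = 5) -> str:
    """Return up to max_snippets matching lines, tagged with file and line number."""
    snippets = []
    for path in glob.glob(f"{root}/**/*.md", recursive=True):
        text = pathlib.Path(path).read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if query.lower() in line.lower():
                snippets.append(f"{path}:{lineno}: {line.strip()}")
                if len(snippets) >= max_snippets:
                    return "\n".join(snippets)
    return "\n".join(snippets)

# The retrieved snippets get pasted into the prompt, e.g.:
context = grep_context("docs", "retention policy")
prompt = (
    "Using only the excerpts below, answer the question.\n\n"
    f"{context}\n\nQ: What is our retention policy?"
)
```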
I think this accounts for a lot of the differences in reported productivity among the sane users. I was way less enthusiastic about AI productivity gains before I discovered the "gen 2" applications.
The word "thinking" can be a bit nebulous in these conversations, and critical thinking perhaps even more ambiguously defined, so before we discuss that, we need to define it. I go with the Merriam-Webster definition: the act or practice of thinking critically (as by applying reason and questioning assumptions) in order to solve problems, evaluate information, discern biases, etc.
LLMs seem to be able to mimic this, particularly to those who have no clue what it means when we call an LLM a "stochastic parrot" or some equally esoteric term. At first I was baffled that anyone really thought that LLMs could somehow apply reason or discern their own biases, but I had to take a step back and look at how that public perception was shaped to see what these people were seeing. LLMs, generative AI, ML, etc. are all extremely complex things. Couple that with the pervasive notion that thinking is hard and you have a massive pool of consumers who are only too happy to offload some of that thinking onto something they may not fully understand but were promised would do what they wanted, which is make their daily lives a bit easier.
We always get snagged by things that promise us convenience or offer to help us do less work. It's pretty human to desire both of those things, but it's proving to be an Achilles' heel for many. How we characterize AI determines our expectations of it; so do you think of it as a bag of tools you can use to complete tasks? Or is it the whole factory assembly line, where you push a few buttons and a pseudo-finished product comes out the other side?
* people who use it instead of search engines.
* people who use it as a doctor/therapist/confidant. Not to research. But as a practitioner.
There are others:
* people who use it instead of man pages or documentation.
* people who use it for short scripts in a language they don't quite understand but "sorta kinda".
I'm a subject matter expert with 45 years in programming and data, aware of the tools' limitations, but I still use it all day, every day to implement non-trivial code, all the while using other tools for voice transcription, internal blog posts about new tools, agents gathering information while I sleep, various classifiers, automated OCR, email scanning, recipe creation, electronics design, and many, many other daily tasks.
- Peer reviews. Not the only peer review of code, but a "first pass" to point out anything that I might have missed
- Implementing relatively simple changes; ones where the "how" doesn't require a lot of insight into long term planning
- Smart auto-complete (and this one is huge)
- Searching custom knowledge bases (I use Obsidian and have an AI tied into it to search through my decade+ of notes)
- Smart search of the internet; describing the problem I'm trying to solve and then asking it to find places that discuss that type of thing
- I rarely use it to clean up emails, but it does happen sometimes. My emails tend to be very technical, and "cleaning them up" usually requires I spend time figuring out what information not to include
No one is going to replace senior developers. But senior developer pay WILL decrease relative to its historical values.
It is, though. The app is using AI underneath to generate the audio snippets. That's literally its purpose.
This is actually the greatest use case I see, and interact with.
Compared to the mess created by Node.js npm amateur engineers, it really shows who is 10x or 100x.
Outsourcing critical thinking to pattern matching and statistical prediction will make the haystacks even more unmanageable.
A few weeks ago a critical bug came in on a part of the app I’d never touched. I had Claude research the relevant code while I reproduced the bug locally, then had it check the logs. That confirmed where the error was, but not why. This was code that ran constantly without incident.
So I had Claude look at the Excel doc the support person provided. Turns out there was a hidden worksheet throwing off the indices. You couldn’t even see the sheet inside Excel. I had Claude move it to the end where our indices wouldn’t be affected, ran it locally, and it worked. I handed the fixed document back to the support person and she confirmed it worked on her end too.
Total time to resolution: 15 minutes, on a tricky bug in code I’d never seen before. That hidden sheet would have been maddening to find normally. I think we might be strongly overestimating the benefits of knowing a codebase these days.
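For the curious, the actual fix is small once you know what's wrong. A minimal sketch, assuming the file is an .xlsx and using openpyxl (the commenter's actual stack and filenames aren't stated):

```python
# Find non-visible worksheets and move them to the end so code that indexes
# sheets by position isn't thrown off. Filenames here are placeholders.
from openpyxl import load_workbook

wb = load_workbook("support_upload.xlsx")

# Sheets can be 'visible', 'hidden', or 'veryHidden'; the last one doesn't
# even appear in Excel's "Unhide" dialog, matching the symptom described.
hidden = [ws for ws in wb.worksheets if ws.sheet_state != "visible"]
for ws in hidden:
    print(f"found non-visible sheet: {ws.title!r} ({ws.sheet_state})")
    wb.move_sheet(ws, offset=len(wb.worksheets) - 1 - wb.worksheets.index(ws))

wb.save("support_upload_fixed.xlsx")
```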
I’ve been programming professionally for about 20 years. I know this is a period of rapid change and we’re all adjusting. But I think getting overly precious about code in the age of coding agents is a coping mechanism, not a forward-looking stance. Code is cheap now. Write it and delete it.
Make high leverage decisions and let the agent handle the rest. Make sure you’ve got decent tests. Review for security. Make peace with the fact that it’s cheaper to cut three times and measure once than it used to be to measure twice and cut once.
LLMs make me think out loud way better.
Best rubber duck ever.
Junior devs: those with limited experience or depth of knowledge. They are unable to analyze the output of AI coding agents sufficiently to determine the long-term viability of the code. I think this is the entirety of who you're speaking of.
Senior devs: those who use it as more than a basic task executor. They have a decade+ of experience and can quickly understand whether what the AI coding agent suggests is viable long term or not. When it's not, they understand how to steer it in a more appropriate direction.
Because having a job that's somewhat satisfying and not just a grind is great for one's own well-being. It's also not a bad deal for the employer, because an engaged employee delivers better results than one who doesn't give a shit.
Could be for good reasons (e.g. they're security features that are important to the business but add friction for the user) or just because management is disconnected from the reality of their employees. Either way, not necessarily the wrong decision by the PM - sometimes you've gotta build features fast because the buyer demands them in a certain timeframe in order to get the contract signed. Even if they never get used, the revenue still pays the bills.
The problem, as I see it, is that the changes that bug me [1] seem systemic throughout the economy, "best practices" promulgated by consultants and other influencers. I'm actually under the impression my workplace was a bit behind the curve, and a lot of other places are worse.
[1] Not sure if they're the "actions" you're talking about. I'm talking about offshoring & AI (IMHO part of the same thrust), and a general increase in pressure/decrease in autonomy.
Sometimes I just want the thing and really don't care about any details. Sometimes I want a very specific thing built in a very specific way. Sometimes I care about some details and not others.
How I use the tools at my disposal depends on what I want to get out of the effort.
Don't care about code quality; never seen the code. I care if the tools do the things I want them to do, and they verifiably do.
Devs are hired goons at worst and skilled craftspeople at best, but never professionals.
Once they realize that it doesn't replace seniors but can easily replace juniors, junior devs will have a bigger problem, and the industry at large will have a huge problem in 8 years, because the concept of "senior" will have vanished.
Right now AI agents are cheap, but they generate a lot of slop and potential minefields that might be costly to clean up. The ROI will show up eventually, and people in the second group will find out their jobs might be in danger. Hopefully a third group will come to save them.
Think wider. You, sharperguy, are not and will not be the only person with access to these tools. Therefore, your productivity increase will likely be the same as everyone else's. If you are as good as everyone else, why would YOU get paid more? Have you ever seen a significant number of companies outside FAANG permanently boost everyone's salary just because they did well in a given year?
A company's obligation is to its shareholders, not to you. Your value exists relative to that of others.
Not necessarily; there are many factors at play here which get downplayed. The first one is education: LLMs are going to significantly improve skill training. Arguably, it is already happening. So the gap between you and a mid-level dev will get narrower. At the same time, the number of candidates who can be as good as you will increase.
While you can argue that you possess specialised skills that not many do, you are unlikely to prove that under pressure within a couple of hours, and certainly not to the level where you have late-2010s negotiating power, imo.
At the end of the day, the market can stay irrational longer than you can keep refusing to accept a lower offer, imo. I believe there will be winners. But pure technical skill isn't the moat you think it is. Not anymore.
Nah. It's been like this at least since 2009 (the GFC), if not longer.
It started happening with the advent of applicant tracking systems (making hiring a nightmare, which it still is) and the fact that most companies stopped investing in training juniors and started focusing more on the short-term bottom line.
If the company is going to make it annoying to get hired and won't invest anything in you as a professional, there's 0 reason for loyalty besides giving your time for the paycheck. And 0 reason to go 120% so you burn out.
I consider 8 years of real experience to be the bar for a senior dev.
If, from now on, the number of juniors is drastically reduced, this will lead to a lack of seniors in 8 years, because seniors will keep leaving at the same rate as before.
In a situation where they replace juniors with agents, yes, we'll still be seniors, but just like people capable of programming a VHS recorder, our numbers will dwindle.
I can see the day when all of these folks who completely replace their thinking skills with AI are unable to find a job because they can no longer troubleshoot anything without it.
I use AI as a replacement for a search engine. I spent 3 nights using ChatGPT to assist me in deploying a Proxmox LXC container running 4 network services, with all traffic routed to Proton VPN via WireGuard. If the VPN goes down, the whole container's network stops without exposing my real IP. Everything was done via Ansible, which I use to manage my homelab, and I was able to identify mistakes and fix them myself. Dude, I have learned a ton about LXC and am sort of moving away from VMs.
The most fun one is this, which creates listing images for my products: https://theautomatedoperator.substack.com/p/opus-45-codes-ge...
More recently, I'm using Claude Code to handle my inventory management by having it act as an analyst while coding itself tools to access my Amazon Seller accounts to retrieve the necessary info: https://theautomatedoperator.substack.com/p/trading-my-vibe-...
> People using it as a tool, aware of its limitations
You can't know the limitations of these tools. It is literally unknowable. Depending on the context, the model and the task, it can be brilliant or useless. It might do the task adequately first time, then fail ten times in a row.
> People outsourcing thinking and entire skillset to it
You can't get out of it something that you can't conceive of. There are real life consequences to not knowing what AI produced. What you wrote basically assumes that there is a group who consistently hit themselves on the head with a hammer not knowing what hurt them.
Let's say I have a 5 person company and I vibe-engineer an application to manage shifts and equipment. I "verify" it by seeing with my own eyes that everyone has the tools they need and every shift is covered.
Before, I either used an expensive SaaS piece of crap for it or did it with Excel. I didn't "verify" the Excel either, and couldn't control when the SaaS provider updated their end, sometimes breaking features, sometimes adding or changing them.
So I learned that you can definitely glean some insights from it. One insight I have is: I'm a "talk out loud thinker". I don't really value that as an identity thing but it is definitely something I notice that I do. I also think a lot of things in my mind, but I tend to think out loud more than the average person.
So yea, that's how pseudo science can sometimes still lead to useful insights about one particular individual. Same thing with philosophy really, usually also not empirically tested (I do think it has a stronger academic grounding but to call philosophy a science is... a bit... tricky... in many cases. I think the common theme is that it's also usually not empirically grounded but still really useful).
On the verification front, a few examples:
1. I built an app that generates listing images and whitebox photos for my products. Results there are verifiable for obvious reasons.
2. I use Claude Code to do inventory management - it has a bunch of scripts to pull the relevant data from Amazon, then a set of instructions on how to project future sales and determine when I should reorder (roughly the arithmetic sketched after this list). It prints the data that it pulls from Amazon to the terminal, so that's verifiable. In terms of following the instructions on coming up with reorder dates, if it's way off, I'm going to know, because I'm very familiar with the brands that I own. This is pretty standard manager/subordinate stuff - I put some trust in Claude to get it right, but I have enough context to know if the results are clearly bad. And if they're only off by a little, then the result is I incur some small financial penalty (either I reorder too late and temporarily stock out, or I reorder too early and pay extra storage fees). But that's fine - I'm choosing to make that tradeoff, as one always does when one hands off work.
3. I gave Claude Code a QuickBooks API key and use it to do my books. This one gets people horrified, but again, I have enough context to know if anything's clearly wrong, and if things are only slightly off then I will potentially pay a little too much in taxes. (Though to be fair it's also possible it screws up the other way, I underpay in taxes and in that case the likeliest outcome is I just saved money because audits are so rare.)
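A minimal sketch of the kind of reorder-date arithmetic item 2 describes; all numbers and field names are hypothetical, and the real setup pulls its inputs from Amazon Seller reports via scripts:

```python
# Hypothetical reorder-point math: order when remaining days of stock
# drops below supplier lead time plus a safety buffer.
from datetime import date, timedelta

def reorder_date(units_on_hand: int, daily_sales: float,
                 lead_time_days: int, safety_days: int = 14) -> date:
    """Date by which a new purchase order should be placed."""
    days_of_cover = units_on_hand / daily_sales            # how long current stock lasts
    slack = days_of_cover - lead_time_days - safety_days   # days left before we must order
    return date.today() + timedelta(days=max(0, int(slack)))

# e.g. 900 units on hand, selling ~12/day, 30-day supplier lead time:
print(reorder_date(units_on_hand=900, daily_sales=12, lead_time_days=30))
```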
It's nice to brainstorm with too, but you have to know what you're doing.
It gets stuck on certain things for sure. But all in all it's a great productivity tool. I treat it like an advanced auto complete. That's basically how people need to treat it. You have to spend a lot of time setting up context and detailing what you want.
So does it save time? Yea, it can. It may not in every task, but it can. It's simply another way of coding. It's a great assistant, but it's not replacing a person.
I don't know if this describes your situation, but I know many people who are dealing with positions where they have no technical mentorship, no real engineering culture to grow in, and a lot of deadlines and work pressure. Coupled with this, they often don't have a large social group within programming/tech, because they've only been in it for a few years and have been heads down grinding to get a good job the whole time. They're experiencing a weird mixture of isolation, directionless-ness, and intense pressure. The work is joyless for them, and they don't see a future.
If I can offer any advice, be selfish for a bit. Outsource as much as you want to LLMs, but use whatever time savings you get out of this to spend time on programming-related things you enjoy. Maybe work the tickets you find mildly interesting without LLMs, even if they aren't mission critical. Find something interesting to tinker with. Learn a niche language. Or slack off in a discord group/make friends in programming circles that aren't strictly about career advancement and networking.
I think it's basically impossible to get better past a certain level if you can't enjoy programming, LLM-assisted or otherwise. There's such a focus on "up-skilling" and grinding through study materials in the culture right now, and that's all well and good if you're trying to pass an interview in 6 weeks, but all of that stuff is pretty useless when you're burned out and overwhelmed.
Security features that add friction for the user are usually forced, aren't they?
Contract requirements do make sense, but I get the idea that this user would know that.
What are you imagining that would be actual value but not used for six months?
If every coal miner could suddenly produce 10x the amount of coal, do people say "well, now we can just hire one coal miner instead of 10"? Or do they say "now thousands of new projects which were not economically viable due to the high price of coal are now viable, meaning we actually need to increase our total output beyond even 10x of what it was previously"?
I also learned that I absolutely hate most programmers. No offense. But most I've been talking to have a complete lack of ethics. I really love programming but I have a massive issue with how industry scale programming is performed (just offloading infra to AWS, just using random JS libs for everything, buying design templates instead of actually building components yourself, 99% of apps being simple CRUD and I am so incredibly tired of building http based apps, web forms and whatnot...)
I love tech, but the industry does not have a soul. The whole joy of learning new things is diminishing the more I learn about the industry.
Plus, look at the job market. Every single tech company out there has been laying off devs in the last 3 years. If maximising productivity above expenses were so valuable, every tech company out there would be hiring like crazy, because senior devs are cheap as chips nowadays. But they aren't: devs might be cheap, but money itself isn't right now, so they are prioritising lower expenses over increased productivity. Because that makes shareholders happy. And that's what every company aims for.
Maximising productivity is only an absolute goal in the minds of devs not in the minds of executives.