> Does an intern cost $20/month? Because that’s what Cursor.ai costs.
> Part of being a senior developer is making less-able coders productive, be they fleshly or algebraic.
But do you know what another part of being a senior developer is? Not just making them more productive, but also guiding junior developers into becoming better, independent, self-tasking senior coders. And that feedback loop doesn't exist here.
We're robbing ourselves of good future developers, because we aren't even thinking about the fact that the junior devs are actively learning from the small tasks we give them.
Will AI completely replace devs before we all retire? Maybe. Maybe not.
But long before that, we'll lose the future coders who aren't being hired and trained because a senior dev doesn't understand that junior devs become senior devs (an important pipeline) and would rather pay $20/month for an LLM. That's going to be a major domestic loss and brain drain.
Cursor is a heck of a lot more than $20/month if you actually want it working for a full work day, every day.
I had a rather depressing experience this semester in my office hours with two students who had painted themselves into a corner with code that was clearly generated. They came to me for help, but were incapable of explaining why they had written what was on their screens. I decided to find where they had lost the thread of the class and discovered that they were essentially unable to write a hello-world program. In other words, they lost the thread on day one. Up until this point, both students had nearly perfect homework grades while failing every in-class quiz.
From one perspective I understand the business case for pushing these technologies. But from another perspective, the long-term health of the profession, it’s pretty shortsighted. Who knows, in the end maybe this will kill off the group of students who enroll in CS courses “because mom and dad think it’s a good job,” and maybe that will leave me with the group that really wants to be there. In the meantime, I will remind students that there is a difference between programming and computer science, and that you really need a strong grasp of the latter to be an effective coder. Especially if you use AI tools.
I see this so much. “Data science major” became the 2020s version of law school. It’s such a double-edged sword. It’s led to a huge increase in enrollment and the creation of multiple professional master’s programs, so the college loves us. We hire every year and there’s always money for just about anything. On the other hand, class sizes are huge, which is not fun, and, worse, a large fraction of the students appear to have minimal intrinsic interest in coding or analyzing data. They’re there because it’s where the jobs are. I totally get that; in some sense college has always been that way, but it does make me look back fondly on the days when classes were 1/4 as big and filled with people who were genuinely interested in the subject.
Unfortunately I think I may get my wish. AI is going to eliminate a lot of those jobs and so the future of our field looks a bit bleak. Worse, it’s the very students who are going to become redundant the quickest that are the least willing to learn. I’d be happy to teach them basic analysis and coding skills, but they are dead set on punching everything into ChatGPT.
Is there any interpretation that makes sense _other_ than this?
Eventually you will get a memory leak even in a GC'd language. Eventually there will be some incredibly obscure, unreported bug in a library. Eventually you will find an issue in unmaintained code you depend on. Eventually there will be performance problems caused by too many layers of abstraction.
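To make the first point concrete, here's a minimal sketch (Python assumed, all names hypothetical) of how you still leak memory under a garbage collector: the GC only frees objects that nothing references, so a long-lived structure that is never pruned keeps every entry reachable forever.

    # Hypothetical example: a process-lifetime cache with no eviction policy.
    # The garbage collector cannot reclaim these entries because the dict
    # still references them, so memory grows without bound even though the
    # language is garbage-collected.
    _cache = {}

    def handle_request(request_id, payload):
        _cache[request_id] = payload  # stored forever, never evicted
        return len(payload)

The fix is usually boring (a bounded cache, weak references, explicit eviction), but you only spot it if you understand what the collector can and cannot do.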
You either need to know, roughly, how your dependencies work, which means occasionally digging into their code or reading the documentation, or you need the intuition to guess how they probably work; and you usually build that intuition by actually writing and reading code.
From a student's perspective: I think it was the same with Stack Overflow. While LLMs make copy-and-paste even easier, they also have the upside of lowering the bar on more complex topics and projects. Nowadays, the average person doesn't touch assembly, but we still had a course where we used it and learned its principles. Software engineering courses will follow suit.
So while I fully agree with you, this is not a concern for a single decision-maker in the private-company world. And the state, such as the US, doesn't pick up this work instead; it quietly accepts the situation.
Well, think for a second about who sets similar budgets and long-term spending priorities: rich lawyers who chose to become much richer politicians, rarely anybody else, and almost never anyone from a more moral profession.
This issue manifests a bit differently in people, but I've definitely worked with people (not only juniors) who only have a few productive hours a month in them. And for what it's worth, some of those people were sufficiently productive in those few hours that it was rational for the company to keep them.
Almost every senior developer I know is spending that time making LLMs more productive and useful instead.
Whatever you think the job is of the senior developer, it will not be "coding".
I think people need to stop thinking of themselves as computer programmers and start thinking of themselves as _engineers_. Your job isn't writing programs, your job is _using the technology you have available to solve problems_. Maybe that is through writing code, but maybe it's orchestrating LLMs to write code for you. The important part is solving the problem.
You call it robbing ourselves of good future developers, I call it hourly consultancy rate increase.
He didn't last long.
You could probably hammer the most expensive Cursor API all day, every day, and it would still be a fraction of the cost of a junior dev.
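As a purely illustrative back-of-envelope (every figure below is an assumed placeholder, not real Cursor pricing or a real salary; plug in your own numbers):

    # All figures are made-up round numbers for illustration only.
    api_spend_per_workday = 50           # assumed $/day of heavy agent usage
    workdays_per_year = 260
    junior_dev_total_cost = 90_000       # assumed salary + overhead, $/year
    ratio = (api_spend_per_workday * workdays_per_year) / junior_dev_total_cost
    print(round(ratio, 2))               # 0.14 under these assumptions

Under those assumptions the API bill is a fraction of the junior's cost; with different assumptions the comparison obviously shifts.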
Talking to colleagues at work is a chore, and a huge risk! Not an opportunity! At least AI respects my privacy, and will not get me fired!
This is nothing new. In a computer graphics class I took over 20 years ago, the median score on the assignments before the midterm was >100% (thanks to bonus questions), yet in midterm prep other students in the class were demonstrating that they didn't even have a firm grasp on the basic concept of a matrix.
That is: they were in a 4th year undergrad course, while doubting material from a senior year high school course where they had to have gotten high marks in order to get into the program.
And the midterm grading was heavily curved as a result (though not as much as in some other courses I took).
Students will do what they need to do for the grade. It seems a great many of them have internalized that none of this is about actually learning anything, even if they would never say so aloud. (I learned things - where I didn't already know them - because it was actually interesting. My resulting grades were pretty good overall, but certainly not top of class.)
> Who knows, in the end maybe this will kill off the group of students who enroll in CS courses “because mom and dad think it’s a good job,”
Why would it? It's becoming easier than ever to fake understanding, and to choose anything else they would need both the opportunity and social permission. I only see the problem getting worse.
The problem is the lack of analysis that goes into producing a useful question for others that fits in with the rest of the site.
True, proper analysis of the homework rarely yields such questions, and even less so in 2025. But the point was always to have a question that's about a clear, specific problem, not about a task that was assigned. Because the latter can only possibly help people who were assigned the same task.
Also, I spend a lot of time mentoring, and I'd like to think A will grow to be more like B over time. But now it feels like it's wasted effort to try mentoring those skills if it won't be valued.
My view is that right now, because of the willingness of corporations and other investors to swallow short term (but massive) losses on this, we're basically in AI fiscal fantasy land.
The question we should be asking is how we get access to these local models in the first place. It's all built on the work of those hyper-expensive base models, as the best small models are quantisations and distillations of them. None of this works as soon as the profit motive comes into play and companies start gatekeeping effectively, which they will.
LLMs may become more productive, accurate, and useful, but they're not self-tasking or independent.
> I think people need to stop thinking of themselves as computer programmers and start thinking of themselves as _engineers_. Your job isn't writing programs, your job is _using the technology you have available to solve problems_.
There is a progression of skill required to master any profession: you start with the fundamentals and keep developing until you are an expert, a senior practitioner of that profession. How is a senior software dev supposed to become that without writing code? Just reading LLM code and bug-fixing isn't the same level or kind of experience. You're going to have devs who can't code by themselves, and that's a bad place to be in.
There are already too many people in IT using tools that they don't understand the workings of (and thus can't troubleshoot, can't replace, can't customize to their env, etc), and this will just exacerbate that x100.
Mark my words: there is going to be a very bad skill deficit in IT in 20 years, which is going to cause an innovation deficit.
There's definitely a cohort of, on average, much lower-quality CS graduates though, between COVID and universities not knowing how to deal with AI (I say this as part of that cohort).
One which is reeling from layoffs caused by short-term profit-maximizing bandwagoning in the tech sector. CEO see, CEO do.
We didn't just magically have tens of thousands of new programmers and IT pros pop out of nowhere; they were laid off. There is no situation in which this brain drain will benefit them. They can't wait around for 10 years for it to get really bad, in the hopes that at that point they'll be back in high demand.