There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.
That's not an upside unique to LLM-written versus human-written code, though. When writing it yourself, you also need to make it crystal clear; you just do that in the language of implementation.
If you use LLMs at very high temperature with samplers that keep the output coherent (e.g. min-p, or better options like top-h or p-less decoding), then "regression to the mean" literally does not happen.
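For a concrete picture of what such a sampler does, here is a rough sketch of min-p filtering over a toy distribution (plain numpy, not any particular library's API; the logits are made up for illustration):

```python
# Sketch of min-p sampling: apply temperature, then drop any token whose
# probability falls below a fraction (min_p) of the most likely token's
# probability, renormalize, and sample from what remains.
import numpy as np

def sample_min_p(logits, temperature=1.5, min_p=0.1):
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    mask = probs >= min_p * probs.max()   # keep only plausible tokens
    filtered = np.where(mask, probs, 0.0)
    filtered /= filtered.sum()
    return int(np.random.choice(len(filtered), p=filtered))

# Hypothetical scores for five candidate tokens: even at a high temperature,
# the clearly implausible tail never gets sampled.
logits = np.array([4.0, 3.5, 1.0, -2.0, -5.0])
print(sample_min_p(logits))
```

The point is that temperature controls how adventurous the model is among the plausible tokens, while the filter keeps the genuinely implausible ones off the table.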
Image generation has the same problem (lack of support for different SDE solvers, the image-side equivalent of LLM sampling), but at least it has its own "coomer" tools, e.g. ComfyUI or Automatic1111.
I thought "on-shoring" was already commonly used for the process that undoes off-shoring.
LLMs don’t “reason” the way humans do; they predict text based on statistical relevance. So raising the temperature is more likely to produce unexecutable pseudocode than a valid but more esoteric implementation of the problem.
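To make that concrete, here's a tiny illustration (made-up logits, plain numpy) of how temperature reshapes a next-token distribution; the extra probability mass at high temperature goes to the unlikely tokens, which is where invalid continuations live:

```python
# Temperature scaling of a softmax distribution over four candidate tokens.
import numpy as np

logits = np.array([5.0, 2.0, 0.0, -1.0])  # hypothetical model scores

for t in (0.2, 1.0, 2.0):
    probs = np.exp(logits / t)
    probs /= probs.sum()
    print(f"T={t}: {np.round(probs, 3)}")
# At T=0.2 the top token dominates; at T=2.0 the low-scoring tail
# receives substantial probability.
```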
There are tips and tricks on how to manage them, and not knowing them will bite you later on. Like the basic thing of never asking yes-or-no questions, because in some cultures saying "no" isn't a thing. They'd rather just default to yes and effectively lie than admit failure.
Frustrated rants about deliverables aside, I don't think that's the case.
Code that fails to execute or compile is the default expectation for me. That's why we feed compile and runtime errors back into the model each time it proposes something.
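In practice that loop can be as simple as the sketch below (assuming a hypothetical ask_llm() helper standing in for whatever model API you call; the rest is plain Python):

```python
# Minimal compile-and-fix loop: run the model's code, and if it fails,
# feed the error output back into the next prompt.
import os
import subprocess
import tempfile

def ask_llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for your model API of choice

def generate_working_script(task: str, max_rounds: int = 5) -> str:
    prompt = task
    for _ in range(max_rounds):
        code = ask_llm(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(["python", path], capture_output=True, text=True)
        os.unlink(path)
        if result.returncode == 0:
            return code  # it ran cleanly
        # Otherwise, hand the error back to the model and ask for a fix.
        prompt = f"{task}\n\nYour last attempt failed with:\n{result.stderr}\nPlease fix it."
    raise RuntimeError("model never produced runnable code")
```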
I'd much rather the code sometimes not work than get stuck in infinite tool-calling loops.
If you just chuck ideas at the external coding team/tool you often get rubbish back.
If you're good at managing the requirements and defining things well you can achieve very good things with much less cost.
The humane and the machinic need to meet halfway - any computing endeavor involves not only specifying something clearly enough for a computer to execute it, but also communicating to humans how to benefit from the process thus specified. And that's the proper domain not only of software engineering, but the set of related disciplines (such as the various non-coding roles you'd have in a project team - if you have any luck, that is).
But considering the incentive misalignments that easily come to dominate this space even when multiple supposedly conscious humans are ostensibly keeping their eyes on the ball, no matter how good the language machines get at doing the job of any of those roles, I will still intuitively mistrust them, exactly as I mistrust any human or organization to responsibly wield the kind of pre-LLM power required to coordinate humans well enough to produce industrial-scale LLMs in the first place.
What's said upthread about the wordbox continually trying to revert you to the mean as you're trying to prod it with the cowtool of English into outputting something novel, rings very true to me. It's not an LLM-specific selection pressure, but one that LLMs are very likely to have 10x-1000xed as the culmination of a multigenerational gambit of sorts; one whose outset I'd place with the ever-improving immersive simulations that got the GPU supply chain going.
So you can just, like, tweak it when it's working against your intent in either direction?
If you are outsourcing to an LLM in this case YOU are still in charge of the creative thought. You can just judge the output and tune the prompts or go deep in more technical details and tradeoffs. You are "just" not writing the actual code anymore, because another layer of abstraction has been added.
I think there needs to be a sea change in current LLM tech to make that no longer the case: either massively increased context sizes, so they can hold nearly a career's worth of learning (without the tendency to start ignoring that context, as happens at the larger end of today's still-way-too-small-for-this context windows), or continuous training passes that integrate these "learnings" directly into the weights themselves - which might be theoretically possible today, but requires many orders of magnitude more compute than is available, even ignoring cost.
Now, the only reason I code, and have been coding since the week I graduated from college, is to support my insatiable addictions to food and shelter.
While I like seeing my ideas come to fruition, over the last decade my ideas were a lot larger than I could reasonably realize in 40 hours a week without having other people working on projects I led. Until the last year and a half, that is, when I could do it myself using LLMs.
Seeing my carefully designed spec, including all of the cloud architecture, get done in a couple of days - with my hands on the wheel - when it would have taken at least a week of me doing some of the work while juggling a couple of other people, is life changing.
When you do this with an outsourced team, it can happen at most once per sprint, and with significant pushback, because there's a desire for them to get paid for their deliverable even if it's not what you wanted or suffers some other fundamental flaw.
Then I tried to push 50,000 documents through it, and it crashed and burned like I suspected. It took one day to go from my second, more complicated but more scalable spec - where I didn’t depend on an AWS managed service - to working, scalable code.
It would have taken me at least a week to do it myself.
Just like people more, and have better meetings.
Life is what you make it.
Enjoy yourself while you can.
They’re destroying the only thing I like about my job - figuring problems out. I have a fundamental impedance mismatch with my company’s desires, because if someone hands me a weird problem, I will happily spend all day or longer on that problem. Think, hypothesize, test, iterate. When I’m done, I write it up in great detail so others can learn. Generally, this is well-received by the engineer who handed the problem to me, but I suspect it’s mostly because I solved their problem, not because they enjoyed reading the accompanying document.
When I play sudoku with an app, I like to turn on auto-fill numbers, auto-erase numbers, and highlighting of the current number. This is so that I can go directly to the crux of the puzzle and work on that. It helps me practice working on the hard part without having to slog through the stuff I know how to do, and generally speaking it helps me do harder puzzles than I was doing before. BTW, I’ve only found one good app so far that does this really well.
With AI it’s easier to see there are a lot of problems that I don’t know how to solve, but others do. The question is whether it’s wasteful to spend time independently solving that problem. Personally I think it’s good for me to do it, and bad for my employer (at least in the short term). But I can completely understand the desire for higher-ups to get rid of 90% of wheel re-invention, and I do think many programmers spend a lot of time doing exactly that; independently solving problems that have already been solved.
But it is also true that most programming is tedious and hardly enriching for the mind. In those cases, LLMs can be a benefit. When you have identified the pattern or principle behind a tedious change, an LLM can work like a junior assistant, allowing you to focus on the essentials. You still need to issue detailed and clear instructions, and you still need to verify the work.
Of course, the utility of LLMs is a signal that either the industry is bad at abstracting, or that there's some practical limit.
If I wanted to work on electric power systems I would have become an electrician.
(The transition is happening.)
The hard problems should be solved with our own brains, and it behooves us to take that route so we can not only benefit from the learnings, but assemble something novel so the business can differentiate itself better in the market.
For all the other tedium, AI seems perfectly acceptable to use.
Where the sticking point comes in is when CEOs, product teams, or engineering leadership put too much pressure on using AI for "everything", in that all solutions to a problem should be AI-first, even if it isn't appropriate—because velocity is too often prioritized over innovation.
And worse: with few opportunities to grow their skills through the kind of rigorous thinking this blog post describes, tech workers will be relegated to cleaning up after sloppy AI codebases.
Granted, you would learn a lot more if you had pieced your ideas together manually, but it all depends on your own priorities. The difference is, you're not stuck cleaning up after someone else's bad AI code. That's the side to the AI coin that I think a lot of tech workers are struggling with, eventually leading to rampant burnout.
Will a company pay me more for knowing those details? Will I be more effectively able to architect and design solutions that a company will pay my employer to contract me to do, and that my company pays me for? They pay me decently not because I “codez real gud”. They pay me because I can go from an empty AWS account, an empty repo, and ambiguous customer requirements to a working solution (after spending time talking to the customer), with a full, well-thought-out architecture plus code, on time, on budget, and meeting requirements.
I’m not bragging; I’m old, and those are table stakes for being able to stay in this game for three decades.
So, tackle other problems. You can now do things you couldn't even have contemplated before. You've been handed a near-godlike power, and all you can do is complain about it?
Harvard Business Review and probably hundreds of other online content providers publish some simple rules for meetings, yet people don't even follow them.
1. Have a purpose / objective for the meeting. I consider meetings to fall into one of three broad categories: information distribution, problem solving, and decision making. Knowing this allows the meeting to go a lot smoother, or even be replaced by something like an email and be done with it.
2. Have an agenda for the meeting. Put the agenda in the meeting invite.
3. If there are any pieces of pre-reading or related material to be reviewed, attach it and call it out in the invite. (But it's very difficult to get people to spend the time preparing for a meeting.)
4. Take notes during the meeting and identify any action items and who will do them (preferably with an initial estimate). Review these action items and people responsible in the last couple of minutes of the meeting.
5. Send out the notes and action items.
Why aren't we doing these things? I don't know, but I think if everyone followed these for meetings of 3+ people, we'd probably see better meetings.
I'm trying my best to adapt to being a "centaur" in this world. (In chess it has become statistically evident that human and bot players are generally "worse" than hybrid "centaur" players.) But even centaurs are going to be increasingly taken for granted by companies, and at least for me the sense is growing that, as WOPR declared about tic-tac-toe (and thermonuclear warfare), "the only winning move is not to play". I don't know how I'd bootstrap an entirely new career at this point in my life, but I keep feeling like I need to try to figure that out. I don't want to just be a janitor of other people's messes for the rest of my life.
It isn't all great; skills that feel important have already started atrophying, but other skills have been strengthened. The hardest part is pacing oneself and figuring out how to start cracking certain problems.
AI assistance in programming is a service, not a tool. You are commissioning Anthropic, OpenAI, etc. to write the program for you.
This seems to be a common narrative, but TBH I don't really see it. Where is all the amazing output from this godlike power? It certainly doesn't seem like tech is suddenly improving at a faster pace. If anything, it seems to be regressing in a lot of cases.
That's how I have been using AI the entire time. I do not use Claude Code or Codex. I just use AI to ask questions instead of parsing the increasingly poor Google search results.
I just use the chat options in the web applications with manual copy/pasting back and forth if/when necessary. It's been wonderful because I feel quite productive, and I do not really have much of an AI dependency. I am still doing all of my work, but I can get a quicker answer to simple questions than parsing through a handful of outdated blogs and StackOverflow answers.
If I have learned one thing about programming computers in my career, it is that not all documentation (even official documentation) is created equal.
I agree the info is out there about how to run effective meetings.
You get paid in the top 1% globally.
You have benefits.
You have some hope or dreams for what to do with your future: life after work, retirement.
You get to work with other people overseas.
Talk to those contractors sometimes. They are under tremendous pressure. They are mistreated. One wrong move and they're gone. They face tremendous prejudice, and soft racism, every day, especially from us FTEs.
You find out that they struggle with the drudgery as well, looking for solutions, better understanding, etc.
We all feel disposable to our corporate masters, but they feel it even more.
Be the change you want to see in the world.
The coding is the easy part.
With LLMs and advanced models, even more so.
We have lots of documentation. Arguably too much - the relevant documentation alone quickly fills much of the Claude Opus context window, and even then the model repeatedly outputs things directly counter to the documentation it just ingested.
Gone are the days of hopeless Googling, where 20 minutes of research becomes 3 hours and might still leave you with zero solutions. The sheer efficiency of having reliable, immediate answers is a tremendous improvement, even if you're choosing to write everything by hand using it as a reference.
Gladly! I think what I would choose is building on-shore teams exclusively. That's the change I'd like to see more of, while overseas teams build their own economies instead of ripping away jobs from domestic citizens in an already difficult job market.