My personal theory is that getting a significant productivity boost from LLM assistance and AI tools has a much steeper learning curve than most people expect.
This study had 16 participants, with a mix of previous exposure to AI tools - 56% of them had never used Cursor before, and the study was mainly about Cursor.
They then had those 16 participants work on issues (about 15 each), where each issue was randomly assigned a "you can use AI" vs. "you can't use AI" rule.
So each developer worked on a mix of AI-tasks and no-AI-tasks during the study.
A quarter of the participants saw increased performance, 3/4 saw reduced performance.
One of the top performers with AI was also the developer with the most previous Cursor experience. The paper acknowledges that here:
> However, we see positive speedup for the one developer who has more than 50 hours of Cursor experience, so it's plausible that there is a high skill ceiling for using Cursor, such that developers with significant experience see positive speedup.
My intuition here is that this study mainly demonstrated that the learning curve on AI-assisted development is high enough that asking developers to bake it into their existing workflows reduces their performance while they climb that learning curve.
Definitely. Effective LLM usage is not as straightforward as people believe. Two big things I see a lot of developers do when they share chats:
1. Talk to the LLM like a human. Remember when internet search first came out, and people were literally "Asking Jeeves" in full natural language? Eventually people learned that you don't need to type, "What is the current weather in San Francisco?" because "san francisco weather" gave you the same, or better, results. Now we've come full circle and people talk to LLMs like humans again; not out of any advanced prompt engineering, but just because it's so anthropomorphized it feels natural. But I can assure you that "pandas count unique values column 'Foo'" is just as effective an LLM prompt as "Using pandas, how do I get the count of unique values in the column named 'Foo'?" (see the sketch after this list for what either phrasing gets you). The LLM is also not insulted by you talking to it like this.
2. Don't know when to stop using the LLM. Rather than let the LLM take you 80% of the way there and then handle the remaining 20% "manually", they'll keep trying to prompt to get the LLM to generate what they want. Sometimes this works, but often it's just a waste of time and it's far more efficient to just take the LLM output and adjust it manually.
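To make point 1 concrete: here's roughly what either phrasing tends to get you back, sketched assuming a pandas DataFrame `df` with a column named 'Foo'.

```python
import pandas as pd

# Toy DataFrame standing in for whatever you'd actually be working with.
df = pd.DataFrame({"Foo": ["a", "b", "a", "c", "b", "a"]})

# "pandas count unique values column 'Foo'" and the full-sentence prompt
# both tend to land on one of these:
print(df["Foo"].nunique())        # number of distinct values -> 3
print(df["Foo"].value_counts())   # count per distinct value
```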
Much like so-called Google-fu, LLM usage is a skill and people who don't know what they're doing are going to get substandard results.
It is not as straightforward as people are told to believe!
Noting a few important points here:
1. Some prior studies that find speedup do so with developers that have similar (or less!) experience with the tools they use. In other words, the "steep learning curve" theory doesn't differentially explain our results vs. other results.
2. Prior to the study, 90+% of developers had reasonable experience prompting LLMs. Before we found slowdown, the only concern most external reviewers had about experience was about prompting -- as prompting was considered the primary skill. In general, the standard wisdom was/is that Cursor is very easy to pick up if you're used to VSCode, which most developers used prior to the study.
3. Imagine all these developers had a TON of AI experience. One thing this might do is make them worse programmers when not using AI (relatable, at least for me), which in turn would raise the speedup we find (not because AI got better, but because their performance without AI got worse). In other words, we're sorta in between a rock and a hard place here -- it's just plain hard to figure out what the right baseline should be!
4. We shared information on developer prior experience with expert forecasters. Even with this information, forecasters were still dramatically over-optimistic about speedup.
5. As you say, it's totally possible that there is a long-tail of skills to using these tools -- things you only pick up and realize after hundreds of hours of usage. Our study doesn't really speak to this. I'd be excited for future literature to explore this more.
In general, these results being surprising makes it easy to read the paper, find one factor that resonates, and conclude "ah, this one factor probably just explains slowdown." My guess: there is no one factor -- there's a bunch of factors that contribute to this result -- at least 5 seem likely, and at least 9 we can't rule out (see the factors table on page 11).
I'll also note that one really important takeaway -- that developer self-reports after using AI are overoptimistic to the point of being on the wrong side of speedup/slowdown -- isn't a function of which tool they use. The need for robust, on-the-ground measurements to accurately judge productivity gains is a key takeaway here for me!
(You can see a lot more detail in section C.2.7 of the paper ("Below-average use of AI tools") -- where we explore the points here in more detail.)
Maybe the LLM doesn't strictly need it, but typing it out does bring some clarity for the asker. I've found it helps a lot to catch myself - what am I even wanting from this?
My working hypothesis is that people who are fast at scanning lots of text (or code for that matter) have a serious advantage. Being able to dismiss unhelpful suggestions quickly and then iterating to get to helpful assistance is key.
Being fast at scanning code correlates with seniority, but there are also senior developers who can write at a solid pace but prefer to take their time to read and understand code thoroughly. I wouldn't assume that this kind of developer gains little benefit from typical AI coding assistance. There are also juniors who can read text quickly, and they possibly have an advantage.
A similar effect has been around with being able to quickly "Google" something. I wouldn't be surprised if this is the same trait at work.
I don't have any studies, but it seems to me a reasonable assumption.
(Unlike google, where presumably it actually used keywords anyway)
I've found AI to be quite helpful in pointing me in the right direction when navigating an entirely new code-base.
When it's code I already know like the back of my hand, it's not super helpful, other than maybe doing a few automated tasks like refactoring, where there have already been some good tools for a while.
I totally agree with this. Although also, you can end up in a bad spot even after you've gotten pretty good at getting the AI tools to give you good output, because you fail to learn the code you're producing well.
A developer gets better at the code they're working on over time. An LLM gets worse.
You can use an LLM to write a lot of code fast, but if you don't pay enough attention, you aren't getting any better at the code while the LLM is getting worse. This is why you can get like two months of greenfield work done in a weekend but then hit a brick wall - you didn't learn anything about the code that was written, and while the LLM started out producing reasonable code, it got worse until you have a ball of mud that neither the LLM nor you can effectively work on.
So a really difficult skill in my mind is continually avoiding the temptation to vibe. Take a whole week to do a month's worth of features, not a weekend to do two months' worth, and put in the effort to guide the LLM to keep producing clean code, and to be sure you know the code. You do want to know the code, and you can't do that without putting in work yourself.
In practice I have not had any issues getting information out of an LLM when speaking to them like a computer, rather than a human. At least not for factual or code-related information; I'm not sure how it impacts responses for e.g. creative writing, but that's not what I'm using them for anyway.
How can you be so sure? Did you compare in a systematic way or read papers by people who did it?
Now, I surely get results giving the LLM only snippets and keywords, but for anything complex I do notice differences in the way I articulate things. I'm not claiming there is a significant difference, but it seems that way to me.
No, but I didn't need to read scientific papers to figure how to use Google effectively, either. I'm just using a results-based analysis after a lot of LLM usage.
You hit the nail on the head here.
I feel like I’ve seen a lot of people trying to make strong arguments that AI coding assistants aren’t useful. As someone who uses and enjoys AI coding assistants, I don’t find this research angle to be… uh… very grounded in reality?
Like, if you’re using these things, the fact that they are useful is pretty irrefutable. If one thinks there’s some sort of “productivity mirage” going on here, well OK, but to demonstrate that it might be better to start by acknowledging areas where they are useful, and show that your method explains the reality we’re seeing before using that method to show areas where we might be fooling ourselves.
I can maybe buy that AI might not be useful for certain kinds of tasks or contexts. But I keep pushing their boundaries and they keep surprising me with how capable they are, so it feels like it’ll be difficult to prove otherwise in a durable fashion.
The over-optimism is indeed a really important takeaway, and agreed that it's not tool-dependent.
LLMs have a v. steep and long learning curve as you posit (though note the points from the paper authors in the other reply).
Current LLMs just are not as good as they are sold to be as a programming assistant and people consistently predict and self-report in the wrong direction on how useful they are.
Most people who subscribe to that narrative have some connection to "AI" money, but there might be some misguided believers as well.
You’ve been given a dubiously capable genie that can write code without you having to do it! If this thing can build first drafts of those side projects you always think about and never get around to, that in and of itself is useful! If it can do the yak-shaving required to set up those e2e tests you know you should have but never have time for it is useful!
Have it try out all the dumb ideas you have that might be cool but don’t feel worth your time to boilerplate out!
I like to think we’re a bunch of creative people here! Stop thinking about how it can make you money and use it for fun!
How do we get beyond that?
I've found that there are a couple of things you need to do to be very efficient.
- Maintain an architecture.md file (with AI assistance) that answers many of the questions and clarifies a lot of the ambiguity in the design and structure of the code. (A rough sketch of what I mean follows this list.)
- A bootstrap.md file(s) is also useful for a lot of tasks.. having the AI read it and start with a correct idea about the subject is useful and a time saver for a variety of kinds of tasks.
- Regularly asking the AI to refactor code, simplify it, modularize it - this is what the experienced dev is for. VIBE coding generally doesn't work as AI's tend to write messy non-modular code unless you tell them otherwise. But if you review code, ask for specific changes.. they happily comply.
- Read the code produced, and carefully review it. And notice and address areas where there are issues, have the AI fix all of these.
- Take over when there are editing tasks you can do more efficiently.
- Structure the solution/architecture in ways that you know the AI will work well with.. things it knows about.. it's general sweet spots.
- Know when to stop using the AI and code it yourself.. particularly when the AI has entered the confusion doom loop. The time wasted trying to get the AI to figure out what it never will is better spent just fixing it yourself.
- Know when to just not ever try to use AI. Intuitively you know there's just certain code you can't trust the AI to safely work on. Don't be a fool and break your software.
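As a concrete illustration of the architecture.md idea from the first bullet, here's a made-up sketch - the section names and module layout are hypothetical, not from any particular project or tool - just to show the level of detail that seems to help:

```markdown
# architecture.md (sketch)

## Overview
One paragraph on what the system does, for whom, and the main runtime pieces.

## Modules
- api/      HTTP handlers only; no business logic here
- core/     domain logic; keep functions pure where practical
- storage/  all database access goes through this layer

## Conventions
- Errors are returned, never thrown across module boundaries
- Every new endpoint gets a test under tests/api/
- Run the linter and test suite before committing

## Known pitfalls for the AI
- Do not add new dependencies without asking
- Configuration loading lives in core/config, not in api/
```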
----
I've found there's no guarantee that AI assistance will speed up any one project (and in some cases it slows it down).. but measured across all tasks and projects, the benefits are pretty substantial. That's probably others' experience at this point too.
I agree. I have found that I can use agents most effectively by letting them write code in small steps. After each step I review the changes and polish them up (either by doing the fixups myself or by prompting). I have found that this helps me understand the code, and it also prevents the model from getting into a bad solution space or producing unmaintainable code.
I also think this kind of closed loop is necessary. Like yesterday I let an LLM write a relatively complex data structure. It got the implementation nearly correct, but was stuck, unable to find an off-by-one comparison. In this case it was easy to catch because I let it write property-based tests (which I had to fix up to work properly), but it's easy for things to slip through the cracks if you don't review carefully.
(This is all using Cursor + Claude 4.)
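For anyone unfamiliar with the property-based approach mentioned above: the trick is to compare the LLM-written structure against a trivially correct reference over random inputs, so off-by-ones surface on their own. A minimal sketch using Python's hypothesis library; `LlmSet` here is a hypothetical stand-in for whatever structure the LLM actually produced:

```python
from hypothesis import given, strategies as st

# Hypothetical stand-in for the LLM-written data structure; in reality you'd
# import the real thing and keep its interface.
class LlmSet:
    def __init__(self):
        self._items = set()

    def add(self, x):
        self._items.add(x)

    def contains(self, x):
        return x in self._items

@given(st.lists(st.integers()), st.integers())
def test_contains_matches_reference(values, probe):
    # A plain Python set is the trivially correct reference model.
    under_test, reference = LlmSet(), set()
    for v in values:
        under_test.add(v)
        reference.add(v)
    # Off-by-one errors in lookup/comparison logic show up as a mismatch here.
    assert under_test.contains(probe) == (probe in reference)
```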
Everything else in your post is so reasonable and then you still somehow ended up suggesting that LLMs should be quadrupling our output
LLMs have made the distinction ambiguous because their capabilities are so poorly understood. When I say "you should talk to an LLM like it's a computer", that's a workflow statement; it's a more efficient way to accomplish the same goal. You can try it for yourself and see if you agree. I personally liken people who talk to LLMs in full, proper English, capitalization and all, to boomers who still type in full sentences when running a Google query. Is there anything strictly wrong with it? Not really. Do I believe it's a more efficient workflow to just type the keywords that will give you the same result? Yes.
Workflow efficiencies can't really be scientifically evaluated. Some people still prefer to have desktop icons for programs on Windows; my workflow is pressing winkey -> typing the first few characters of the program -> enter. Is one of these methods scientifically more correct? Not really.
So, yeah -- eventually you'll either find your own workflow or copy the workflow of someone you see who is using LLMs effectively. It really is "just trust me, bro."
TLDR: over the first 8 issues, developers do not appear to get majorly less slowed down.
[1] https://metr.org/Early_2025_AI_Experienced_OS_Devs_Study.pdf
IMO 80% is way too much. LLMs are probably good for things that are outside your domain knowledge and where you can afford to not be 100% correct, like rendering the Mandelbrot set and similar simple functions.
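To illustrate the kind of task I mean (small, self-contained, easy to eyeball for correctness), here's a rough sketch of the sort of Mandelbrot renderer an LLM will usually get right on the first try; the viewport bounds and output characters are arbitrary choices:

```python
# ASCII-render the Mandelbrot set: no external state, trivial to sanity-check visually.
def mandelbrot_rows(width=60, height=24, max_iter=40):
    rows = []
    for row in range(height):
        line = ""
        for col in range(width):
            # Map pixel coordinates to the complex plane: real in [-2, 1], imag in [-1.2, 1.2].
            c = complex(-2.0 + 3.0 * col / width, -1.2 + 2.4 * row / height)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2:      # escaped -> not in the set
                    line += " "
                    break
            else:                   # never escaped -> treat as in the set
                line += "*"
        rows.append(line)
    return rows

if __name__ == "__main__":
    print("\n".join(mandelbrot_rows()))
```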
LLMs are not deterministic: sometimes they produce correct code and other times they produce wrong code. This means one has to audit LLM-generated code, and auditing code takes more effort than writing it, especially if you are not the original author of the code being audited.
Code has to be 100% deterministic. As programmers we write code, detailed instructions for the computer (CPU), and we have developed a lot of tools, such as unit tests, to make sure the computer does exactly what we wrote.
A codebase has a lot of context that you gain by writing the code; some things just look wrong and you know exactly why, because you wrote the code. There is also a lot of context that you should keep in your head as you write the code, context that you miss by simply prompting an LLM.
Are we still selling the "you are an expert senior developer" meme? I can completely see how, once you are working on a mature codebase, LLMs would only slow you down. Especially one that was not created by an LLM and where you are the expert.
I recall an adage about work-estimation: As chunks get too big, people unconsciously substitute "how possible does the final outcome feel" with "how long will the work take to do."
People asked "how long did it take" could be substituting something else, such as "how alone did I feel while working on it."
One thing that happened here is that they aren't using current LLMs:
> Most issues were completed in February and March 2025, before models like Claude 4 Opus or Gemini 2.5 Pro were released.
That doesn't mean this study is bad! In fact, I'd be very curious to see it done again, but with newer models, to see if that has an impact.
4% more idle time, 20% more AI interaction time.
The 28% less coding/testing/research is why developers reported 20% less work. You might be spending 20% more time overall "working" while you are really idle 5% more time and feel like you've worked less because you were drinking coffee and eating a sandwich between waiting for the AI and reading AI output.
I think the AI skill boost comes from having workflows that let you shave half that git-ops time, cut an extra 5% off coding, and cut the idle/waiting so you can do more prompting of parallel agents and a bit more testing; then you really are a 2x dev.
AI has a lot of potential but it's way over-hyped right now. Listen to the people on the ground who are doing real work and building real projects; none of them are over-hyping it. It's mostly those who have only tangentially used LLMs who do.
It's also not surprising that many in this thread are clinging to a basic premise that it's 3 steps backwards to go 5 steps forward. Perhaps that is true but I'll take the study at face value, it seems very plausible to me.
I think LLMs shine when you need to write a higher volume of code that extends a proven pattern, quickly explore experiments that require a lot of boilerplate, or have multiple smaller tasks that you can set multiple agents upon to parallelize. I've also had success in using LLMs to do a lot of external documentation research in order to integrate findings into code.
If you are fine-tuning an algorithm or doing domain-expert-level tweaks that require a lot of contextual input-output expert analysis, then you're probably better off just coding on your own.
Context engineering has been mentioned a lot lately, but it's not a meme. It's the real trick to successful LLM agent usage. Good context documentation, guides, and well-defined processes (just like with a human intern) will mean the difference between success and failure.
I've been hearing this for 2 years now
the previous model retroactively becomes total dogshit the moment a new one is released
convenient, isn't it?
Poor stack overflow, it looks like they are the ones really hurting from all this.
Yes, it might make a difference, but it is a little tiresome that there's always a “this is based on a model that is x months old!” comment, because it will always be true: an academic study does not get funded, executed, written up, and published in less time.
Even then though, “technology gets better over time” shouldn’t be surprising, as it’s pretty common.
More generally, the phenomenon here is quite simply explained and nothing surprising: new things improve, quickly. That does not mean that something is good or valuable, but it's how new tech gets introduced every single time, and it readily explains changing sentiment.
Sure you may end up missing out on a good thing and then having to come late to the party, but coming early to the party too many times and the beer is watered down and the food has grubs is apt to make you cynical the next time a party announcement comes your way.
It'll also apply to isolated-enough features, which is still a small amount of someone's work (not often something you'd work on for a full month straight), but more people will have experience with this.
If you pay attention to who says it, you'll find that people have different personal thresholds for finding llms useful, not that any given person like steveklabnik above keeps flip-flopping on their view.
This is a variant on the goomba fallacy: https://englishinprogress.net/gen-z-slang/goomba-fallacy-exp...
For context, I've been using AI, a mix of OpenAi + Claude, mainly for bashing out quick React stuff. For over a year now. Anything else it's generally rubbish and slower than working without. Though I still use it to rubber duck, so I'm still seeing the level of quality for backend.
I'd say they're only marginally better today than they were even 2 years ago.
Every time a new model comes out you get a bunch of people raving how great the new one is and I honestly can't really tell the difference. The only real difference is reasoning models actually slowed everything down, but now I see its reasoning. It's only useful because I often spot it leaving out important stuff from the final answer.
"No, the 2.8 release is the first good one. It massively improves workflows"
Then, 6 months later, the study comes out.
"Ah man, 2.8 was useless, 3.0 really crossed the threshold on value add"
At some point, you roll your eyes and assume it is just snake oil sales
Like the boy who cried wolf, it'll eventually be true with enough time... But we should stop giving them the benefit of the doubt.
_____
Jan 2025: "Ignore last month's models, they aren't good enough to show a marked increase in human productivity, test with this month's models and the benefits are obvious."
Feb 2025: "Ignore last month's models, they aren't good enough to show a marked increase in human productivity, test with this month's models and the benefits are obvious."
Mar 2025: "Ignore last month's models, they aren't good enough to show a marked increase in human productivity, test with this month's models and the benefits are obvious."
Apr 2025: [Ad nauseam, you get the idea]
Just two years ago, this failed.
> Me: What language is this: "esto está escrito en inglés"
> LLM: English
Gemini and Opus have solved questions that took me weeks to solve myself. And I'll feed some complex code into each new iteration and it will catch a race condition I missed even with testing and line by line scrutiny.
Consider how many more years of experience you need as a software engineer to catch hard race conditions just from reading code than someone who couldn't do it after trying 100 times. We take it for granted already since we see it as "it caught it or it didn't", but these are massive jumps in capability.
Yes, and I'll add that there is likely no single "golden workflow" that works for everybody, and everybody needs to figure it out for themselves. It took me months to figure out how to be effective with these tools, and I doubt my approach will transfer over to others' situations.
For instance, I'm working solo on smallish, research-y projects and I had the freedom to structure my code and workflows in a way that works best for me and the AI. Briefly: I follow an ad-hoc, pair-programming paradigm, fluidly switching between manual coding and AI-codegen depending on an instinctive evaluation of whether a prompt would be faster. This rapid manual-vs-prompt assessment is second nature to me now, but it took me a while to build that muscle.
I've not worked with coding agents, but I doubt this approach will transfer over well to them.
I've said it before, but this is technology that behaves like people, and so you have to approach it like working with a colleague, with all their quirks and fallibilities and potentially-unbound capabilities, rather than a deterministic, single-purpose tool.
I'd love to see a follow-up of the study where they let the same developers get more familiar with AI-assisted coding for a few months and repeat the experiment.
Of course it's possible that at some point you get to a model that really works, irrespective of the history of false claims from the zealots, but it does mean you should take their comments with a grain of salt.
I would argue you don't need the "as a programming assistant" phrase as right now from my experience over the past 2 years, literally every single AI tool is massively oversold as to its utility. I've literally not seen a single one that delivers on what it's billed as capable of.
They're useful, but right now they need a lot of handholding and I don't have time for that. Too much fact checking. If I want a tool I always have to double check, I was born with a memory so I'm already good there. I don't want to have to fact check my fact checker.
LLMs are great at small tasks. The larger the single task is, or the more tasks you try to cram into one session, the worse they fall apart.
As with anything, your mileage may vary: I'm not here to tell anyone who thinks they still suck that their experience is invalid, but to me it's been a pretty big swing.
Sure they may get even more useful in the future but that doesn’t change my present.
The developer who has experience using cursor saw a productivity increase not because he became better at using cursor, but because he became worse at not using it.
Every hype cycle feels like this, and some of them are nonsense and some of them are real. We’ll see.
I assume that many large companies have tested efficiency gains and losses of their programmers much more extensively than the authors of this tiny study.
A survey of companies and their evaluations and conclusions would carry more weight -- excluding companies selling AI products, of course.
(Unless one believes the most grandiose prophecies of a technological-singularity apocalypse, that is.)
But: if all developers did 136 AI-assisted issues, why only analyze excluding the 1st 8, rather than, say, the first 68 (half)?
I’ve also noticed that, generally, nobody likes maintaining old systems.
so where does this leave us as software engineers? Should I be excited that it’s easy to spin up a bunch of code that I don’t deeply understand at the beginning of my project, while removing the fun parts of the project?
I’m still grappling with what this means for our industry in 5-10 years…
* the release of agentic workflow tools
* the release of MCPs
* the release of new models, Claude 4 and Gemini 2.5 in particular
* subagents
* asynchronous agents
All or any of these could have made for a big or small impact. For example, I’m big on agentic tools, skeptical of MCPs, and don’t think we yet understand subagents. That’s different from those who, for example, think MCPs are the future.
> At some point, you roll your eyes and assume it is just snake oil sales
No, you have to realize you’re talking to a population of people, and not necessarily the same person. Opinions are going to vary, they’re not literally the same person each time.
There are surely snake oil salesman, but you can’t buy anything from me.
Right.
> except that that is the same thing the same people say for every model release,
I did not say that, no.
I am sure you can find someone who is in a Groundhog Day about this, but it’s just simpler than that: as tools improve, more people find them useful than before. You’re not talking to the same people, you are talking to new people each time who now have had their threshold crossed.
Same. For me the turning point was VS Code’s Copilot Agent mode in April. That changed everything about how I work, though it had a lot of drawbacks due to its glitches (many of these were fixed within 6 or so weeks).
When Claude Sonnet 4 came out in May, I could immediately tell it was a step-function increase in capability. It was the first time an AI, faced with ambiguous and complicated situations, would be willing to answer a question with a definitive and confident “No”.
After a few weeks, it became clear that VS Code’s interface and usage limits were becoming the bottleneck. I went to my boss, bullet points in hand, and easily got approval for the Claude Max $200 plan. Boom, another step-function increase.
We’re living in an incredibly exciting time to be a skilled developer. I understand the need to stay skeptical and measure the real benefits, but I feel like a lot of people are getting caught up in the culture war aspect and are missing out on something truly wonderful.
It’s been a majority of my projects for the past two months. Not because work changed, but because I’ve written a dozen tiny, personalised tools that I wouldn’t have written at all if I didn’t have Claude to do it.
Most of them were completed in less than an hour, to give you an idea of the size. Though it would have easily been a day on my own.
This is visible under the extreme time pressure of producing a working game in 72 hours (our team consistently scores top 100 in Ludum Dare, which is a somewhat high standard).
We use the popular Unity game engine, which all LLMs have a wealth of experience with (as with game development in general), but the output is so strangely "almost correct but not usable" about 80% of the time that I can't afford the luxury of letting it figure things out, and I use it as fancy autocomplete instead. And I also still check docs and Stack Overflow-style forums a lot, because of stuff it plainly makes up.
One of the reasons is maybe that our game mechanics are often a bit off the beaten path, though the last game we made was literally a platformer with rope physics (the LLM could not produce a good idea for stable and simple rope physics, codeable in 3 hours under our constraints).
This is my intuition as well. I had a teammate use a pretty good analogy today. He likened vibe coding to vacuuming up a string in four tries when it only takes one try to reach down and pick it up. I thought that aligned well with my experience with LLM assisted coding. We have to vacuum the floor while exercising the "difficult skill [of] continually avoiding temptation to vibe"
Actually, it works well so long as you tell them when you’ve made a change. Claude gets confused if things randomly change underneath it, but it has no trouble so long as you give it a short explanation.
> My personal theory is that getting a significant productivity boost from LLM assistance and AI tools has a much steeper learning curve than most people expect.
This is what I heard about strong type systems (especially Haskell's) about 15-20 years ago. "History does not repeat, but it rhymes."
If we rhyme "strong types will change the world" with "agentic LLMs will change the world," what do we get?
My personal theory is that we will get the same: some people will get modest-to-substantial benefits there, but changes in the world will be small if noticeable at all.
no, it's the same names, again and again
The study used 246 tasks across 16 developers, for an average of 15 tasks per developer. Divide that further in half because tasks were assigned as AI or not-AI assisted, and the sample size per developer is still relatively small. Someone would have to take the time to review the statistics, but I don’t think this is a case where you can start inferring that the developers who benefited from AI were just better at using AI tools than those who were not.
I do agree that it would be interesting to repeat a similar test on developers who have more AI tool assistance, but then there is a potential confounding effect that AI-enthusiastic developers could actually lose some of their practice in writing code without the tools.
In contrast, what do I care if you believe in code generation AI? If you do, you are probably driving up pricing. I mean, I am sure that there are people that care very much, but there is little inherent value for me in you doing so, as long as the people who are building the AI are making enough profit to keep it running.
With regards to the VCs, well, how many VCs are there in the world? How many of the people who have something good to say about AI are likely VCs? I might be off by an order of magnitude, but even then it would really not be driving the discussion.
Generally, I do a couple of edits for clarity after posting and reading again. Sometimes that involves removing something that I feel could have been said better. If it does not work, I will just delete the comment. Whatever it was must not have been a super huge deal (to me).
A much simpler explanation is what your parent offered. And to many behavioralists it is actually the same explanation, as to a true scotsm... [cough] behavioralist personality is simply learned habits, so—by Occam’s razor—you should omit personality from your model.
Also, my long experience is that even in PoC phase, using a type system adds almost zero extra time… of course if you know the type system, which should be trivial in any case after you’ve seen a few.
We're in a hype cycle, and it means we should be extra critical when evaluating the tech so we don't get taken in by exaggerated claims.
An LLM that can test the code it is writing and then iterate to fix the bugs turns out to be a huge step forward from LLMs that just write code without trying to then exercise it.
That sounds like a claim you could back up with a little bit of time spent using Hacker News search or similar.
(I might try to get a tool like o3 to run those searches for me.)
The short version is that devs want to give instructions instead of ask for what outcome they want. When it doesn’t follow the instructions, they double down by being more precise, the worst thing you can do. When non devs don’t get what they want, they add more detail to the description of the desired outcome.
Once you get past the control problem, then you have a second set of issues for devs where the things that should be easy or hard don’t necessarily map to their mental model of what is easy or hard, so they get frustrated with the LLM when it can’t do something “easy.”
Lastly, devs keep a shit load of context in their head - the project, what they are working on, application state, etc. - and they need to do that for LLMs too, but you have to repeat yourself often and “be” the external memory for the LLM. Most devs I have taught hate that; they would actually rather have it the other way around, where they get help with context and state but instruct the computer on their own.
Interestingly, the best AI assisted devs have often moved to management/solution architecture, and they find the AI code tools brought back some of the love of coding. I have a hypothesis they’re wired a bit differently and their role with AI tools is actually closer to management than it is development in a number of ways.
So are you using Claude Code via the max plan, Cursor, or what?
I think I'd definitely hit AI news exhaustion and was viewing people raving about this agentic stuff as yet more AI fanbois. I'd just continued using the AI separately, as setting up a new IDE seemed like too much work for the fractional gains I'd been seeing.
Nobody is denying that people have personalities btw. Not even true behavioralists do that, they simply argue from reductionism that personality can be explained with learning contingencies and the reinforcement history. Very few people are true behavioralists these days though, but within the behavior sciences, scientists are much more likely to borrow missing factors (i.e. things that learning contingencies fail to explain) from fields such as cognitive science (or even further to neuroscience) and (less often) social science.
What I am arguing here, however, is that the appeal to personality is unnecessary when explaining behavior.
As for figuring out what personality is, that is still within the realm of philosophy. Maybe cognitive science will do a better job at explaining it than psychometricians have done for the past century. I certainly hope so, it would be nice to have a better model of human behavior. But I think even if we could explain personality, it still wouldn’t help us here. At best we would be in a similar situation as physics, where one model can explain things traveling at the speed of light, while another model can explain things at the sub-atomic scale, but the two models cannot be applied together.
There is a skill gap, like, I think of it like vim: at first it slows you down, but then as you learn it, you end up speeding up. So you may also find that it doesn't really vibe with the way you work, even if I am having a good time with it. I know people who are great engineers who still don't like this stuff, just like I know ones that do too.
I do not program for my day job and I vibe coded two different web projects. One in twenty mins as a test with cloudflare deployment having never used cloudflare and one in a week over vacation (and then fixed a deep safari bug two weeks later by hammering the LLM). These tools massively raise the capabilities for sub-average people like me and decrease the time / brain requirements significantly.
I had to make a little update to reset the KV store on cloudflare and the LLM did it in 20s after failing the syntax twice. I would’ve spent at least a few minutes looking it up otherwise.
Developers' own skills might atrophy, when they don't write that much code themselves, relying on AI instead.
And now when comparing with/without AI they're faster with. But a year ago they might have been that fast or faster without an AI.
I'm not saying that that's how things are. Just pointing out another way to interpret what GP said
The people not buying into the hype, on the other hand, are actually the ones that have a very good reason to be invested, because if they turn out to be wrong they might face some very uncomfortable adjustments in the job landscape, and a lot of the skills that they worked so hard to gain and believed to be valuable might lose that value.
As always, be wary of any claims, but the tension here is very much the reverse of crypto, and I don't think that's widely appreciated.
The jump has been massive.
It's been a very noticeable uptick in power, and although there have been some nice increases with past model releases, this has been both the largest and the one that has unlocked the most real value since I've been following the tech.
The CTO and VPEng at my company (very small, they still do technical work occasionally) both love the agent stuff so much. Part of it for them is that it gives them the opportunity to do technical work again with the limited time they have. Without having to distract an actual dev, or spend a long time reading through the codebase, they can quickly get context for and build small items themselves.
I think an easy measure to help identify why a slowdown is happening would be to measure how much refactoring happened on the AI-generated code. Often it seems to be missing stuff like error handling, or it adds in unnecessary stuff. Of course this assumes it even had a working solution in the first place.
This suggests to me, though, that they are bad at coding, otherwise they would have stayed longer. And I can't find anything in your comment that would corroborate the opposite. So what gives?
I am not saying what you say is untrue, but you didn't give us any convincing arguments to believe otherwise.
Also, you didn't define the criteria of getting better. Getting better in terms of what exactly???
It's completely normal in development. How many years of programming experience you need for almost any language? How many days/weeks you need to use debuggers effectively? How long from the first contact with version control until you get git?
I think it's the opposite actually - it's common that new classes of tools in tech need experience to use well. Much less if you're moving to something different within the same class.
This is going to be interesting long-term. Realistically people don't spend anywhere close to 100% of time working and they take breaks after intense periods of work. So the real benefit calculation needs to include: outcome itself, time spent interacting with the app, overlap of tasks while agents are running, time spent doing work over a long period of time, any skill degradation, LLM skills, etc. It's going to take a long time before we have real answers to most of those, much less their interactions.
Is that perhaps because of the nature of the category of 'tech product'? In other domains, this certainly isn't the case. Especially if the goal is to get the best result instead of the optimum output/effort balance.
Musical instruments are a clear case where the best results are down to the user. Most crafts are similar. There is the proverb "A bad craftsman blames his tools" that highlights that there are entire fields where the skill of the user is considered to be the most important thing.
When a product is aimed at as many people as the marketers can find, that focus on individual ability is lost and the product targets the lowest common denominator.
They are easier to use, but less capable at their peak. I think of the state of LLMs analogous to home computing at a stage of development somewhere around Altair to TRS-80 level. These are the first ones on the scene, people are exploring what they are good for, how they work, and sometimes putting them to effective use in new and interesting ways. It's not unreasonable to expect a degree of expertise at this stage.
The LLM equivalent of a Mac will come, plenty of people will attempt to make one before it's ready. There will be a few Apple Newtons along the way that will lead people to say the entire notion was foolhardy. Then someone will make it work. That's when you can expect to use something without expertise. We're not there yet.
Nothing new this time except for people who have no vision and no ability to work hard not “getting it” because they don’t have the cognitive capacity to learn
[0]: https://marketplace.visualstudio.com/items?itemName=anthropi...
The most useful thing of all would have been to have screen recordings of those 16 developers working on their assigned issues, so they could be reviewed for varying approaches to AI-assisted dev, and we could be done with this absurd debate once and for all.
Maybe, but it isn't hard to think of developer tools where this is the case. This is the entire history of editor and IDE wars.
Imagine running this same study design with vim. How well would you expect the not-previously-experienced developers to perform in such a study?
If my phone keeps crashing or if the browser is slow or clunky then yes, it’s not on me, it’s the phone, but an LLM is a lot more open ended in what it can do. Unlike the phone example above where I expect it to work from a simple input (turning it on) or action (open browser, punch in a url), what an LLM does is more complex and nuanced.
Even the same prompt from different users might result in different output - so there is more onus on the user to craft the right input.
Perhaps that’s why AI is exempt for now.
Took me a week to build those tools. It's much more reliable (and flexible) than any LLM and cost me nothing.
It comes with secure auth, email, admin, etc., etc. Doesn't cost me a dime and almost never has a common vulnerability.
Best part about it. I know how my side project runs.
All take quite an effort to master, until then they might slow one down or outright kill.
Or they care about producing value, not just the code, and realized they had more leverage and impact in other roles.
> And I can't find anything in your comment that would corroborate the opposite.
I didn’t try and corroborate the opposite.
Honestly, I don’t care about the “best coders.” I care about people who do their job well, sometimes that is writing amazing code but most of the time it isn’t. I don’t have any devs in my company who work in a magical vacuum where they are handed perfectly written tasks, they complete them, and then they do the next one.
If I did, I could replace them with AI faster.
> Also, you didn't define the criteria of getting better. Getting better in terms of what exactly?
Delivery velocity - bug fixes, features, etc. that pass testing/QA and goes to prod.
I don't think this is a confounding effect
This is something that we definitely need to measure and be aware of, if there is a risk of it
While the results are going to be similar, typing a question in full can help you think about it yourself too, as if the LLM is a rubber duck that can respond back.
I've found myself adjusting and rewriting prompts during the process of writing them, before I ask the LLM anything, because as I write the prompt I'm thinking about the problem simultaneously.
Of course for simple queries like "write me a function in C that calculates the length of a 3d vector using vec3 for type" you can write it like "c function vec3 length 3d" or something like that instead and the LLM will give more or less the same response (tried it with Devstral).
But TBH to me that sounds like programmers using Vim claiming they're more productive than users of other editors because they have to use less keystrokes.
For nasty, legacy codebases there is only so much you can do IMO. With green field (in certain domains), I become more confident every day that coding will be reduced to an AI task. I’m learning how to be a product manager / ideas guy in response
I very much think that these things are going to wind up being massive amplifiers for people who were already extremely sophisticated and then put massive effort into optimizing them and combining them with other advanced techniques (formal methods, top-to-bottom performance orientation).
I don't think this stuff is going to democratize software engineering at all, I think it's going to take the difficulty level so high that it's like back when Dijkstra or Tony Hoare was a fairly typical computer programmer.
> Interestingly, the best AI assisted devs have often moved to management/solution architecture
Is it just me? Or does it seem to others as well that you pretty much rank these people even at the moment and your first comment contradicts your second comment? Especially when you admit that you rank them based on velocity.
I am not saying you shouldn't do that, but it feels to me like rating road construction workers on the number of potholes fixed, even though it's very possible that the potholes are caused by the sloppy work to begin with.
Not what I would want to do.
Apple's Response to iPhone 4 Antenna Problem: You're Holding It Wrong https://www.wired.com/2010/06/iphone-4-holding-it-wrong/
The OP qualifies how the marketing cycle for this product is beyond extreme, and in a category of its own.
Normal people are being told to worry about AI ending the world, or all jobs disappearing.
Simply saying “the problem is the user”, without acknowledging the degree of hype and expectation setting, is irresponsible.
A good debugger is very easy to use. I remember the Visual Studio debugger or the C++ debugger on Windows were a piece of cake 20 years ago, while gdb is still painful today. Java and .NET had excellent integrated debuggers while golang had a crap debugging story for so long that I don’t even use a debugger with it. In fact I almost never use debuggers any more.
Version control - same story. CVS for all its problems I had learned to use almost immediately and it had a GUI that was straightforward. git I still have to look up commands for in some cases. Literally all the good git UIs cost a non-trivial amount of money.
Programming languages are notoriously full of unnecessary complexity. Personal pet peeve: Rust lifetime management. If this is what it takes, just use GC (and I am - golang).
That said, if the language has GC and other helpers, it makes it easier to scan.
Code and architecture review is an important part of my role and I catch issues that others miss because I spend more time. I did use AI for review (GPT 4.1), but only as an addition, since not reliable enough.
LLMs have given me a whole new love of coding, getting rid of the dull grind and letting me write code an order of magnitude quicker than before.
That's not a tradeoff that I like
https://www.businessinsider.com/apple-antennagate-scandal-ti...
It’s just a fun geeky thing to use with a lot of zany customizations. And after two hellish years of memory muscling enough keyboard bindings to finally be productive, you earned it! It’s a badge of pride!
But we all know you’re still fat fingering ggdG on occasion and silently cursing to yourself.
how much did that uproar and settlement matter?
I think you are reading what you want to read and not what I said, so yes it is you. The most productive, valuable people with developer titles in my organizations are not the ones who write the cleanest, most beautiful, most perfect code. They do all of the other parts of the job well and write solid code.
Following the introduction of AI tools, many of the people in my organization who most effectively learned to use those tools are people who previously chose to move to manager and SA roles.
Not only are these not contradictory, they fit quite well together. People who do the things around coding well, but maybe have to work hard at writing the actual code, are better at using the AI tools than exceptional coders. For my organization, the former are generally more valuable than the latter without AI, and that is increasing as a result of AI.
> I am not saying you shouldn't do that, but it feels to me like rating road construction workers on the number of potholes fixed, even though it's very possible that the potholes are caused by the sloppy work to begin with.
Not if your measurement includes quality testing the pothole repairs, which mine does, as I explicitly called out. I work in industries with extensive, long testing cycles, we are (imperfectly, of course) able to measure productivity based on things which make it through those cycles.
You are trying very hard to find ways to ignore what I am saying. It is fine if you don’t want to believe me, but these things have been true based on our observations:
A. Great “coders” have a much harder time picking up AI dev tools and using them effectively, and when they see how others use them they will admit that isn’t how they use them. They will revert to their previous habits and give up on the tools.
B. The productivity gains for the people who are good at using the tools, as measured by velocity with a minimum bar for quality (with substantial QA), are very high.
C. We have measured these things to thoroughly understand the ROI and we are accelerating our investment in AI coding tools as a result.
Some caveats I am absolutely willing to make - we are not working on bleeding edge tech doing things no one has ever done before.
We failed to effectively use AI many times before we started to get it right.
There are developers who are slower with the AI code tools than without it.
im using claude + vscode's cline extension for the most part, but where it tends to excel is helping you write documentation, and then using that documentation to write reasonable code.
if you're 3/4 of the way done, a lot of the docs it wants in order to work well are gonna be missing, and so a lot of your intentions about why you did or didn't make certain choices will be missing. if you've got good docs, make sure to feed those in as context.
the agentic tool on its own is still kinda meh, if you only try to write code directly from it. definitely better than the non-agentic stuff, but if you start with trying to get it to document stuff, and ask you questions about what it should know in order to make the change its pretty good.
even if you don't get perfect code, or it spins in a feedback loop where it's lost the plot, those questions it asks can be super handy in terms of code patterns that you haven't thought about that apply to your code, and things that would usually be undefined behaviour.
my raving is that i get to leave behind useful docs in my code packages, and my team members get access to and use those docs, without the usual discoverability problems, and i get those docs for... somewhat slower than i could have written the code myself, but much much faster than if i also had to write those docs
e.g., Nokia 1600 user guide from 2005 (page 16) [0]
[0] https://www.instructionsmanuals.com/sites/default/files/2019...
I believe that this is okay. One does not need to know the details about every specific git command in order to be able to use it efficiently most of the time.
It is the same with a programming language. Most people are unfamiliar with every peculiarity of every standard library function that the language offers. And that is okay. It does not prevent them from using language efficiently most of the time.
Also in other aspects of life, it is unnecessary to know everything by memory. For example, one does not need to know how to e.g. replace a blade on a lawn mower. But that is okay. It does not prevent them from using it efficiently most of the time.
The point is that if something is done less often, it is unnecessary to remember the specifics of it. It is fine to look it up when needed.
If what you write was true, then the rate of bugs of those incredible devs would simply fall to zero at one point, and at that point they would become a legend who we all would have heard of by now. So the whole story sounds too fishy to my taste.
It's OK if you want to manage your team this way. Everyone needs some external feedback to confirm their own bias. It seems you found yours and it works for you.
It's just not a good argument in support of AI or AI assisted development.
It's too anecdotal.
And since you are the one who is telling me that you are right, and not others, it makes me even more skeptical about the whole story.
Sure they are - or at least were, until the last couple years. Same thing with Emacs.
It's hard to claim this now, because the entire industry shifted towards webshit and cloud-based practices across the board, and the classical editors just can't keep up with VS Code. Despite the latter introducing LSP, which leveled the playing field wrt. code intelligence itself, the surrounding development process and the ecosystem increasingly demands you use web-based or web-derived tools and practices, which all see a browser engine as a basic building block. Classical editors can't match the UX/DX on that, plus the whole thing breaks basic assumptions about UI that were the source of the "10x perf gains" in vim and Emacs.
Ironically, a lot of the perf gains from AI come from letting you avoid dealing with the brokenness of the current tools and processes, that vim and Emacs are not equipped to handle.
Therefore, classical marketing is less dominant, although more present at down-stream sellers.
You brought up Rust, it is fascinating.
Rust's type system differs from typical Hindley-Milner by having operations that can remove definitions from the environment of the scope.
Rust was conceived in 2006.
In 2006 there already were HList papers by Oleg Kiselyov [1] that had shown how to keep type level key-value lists with addition, removal and lookup, and type-level stateful operations like in [2] were already possible, albeit, most probably, not with nice monadic syntax support.
[1] https://okmij.org/ftp/Haskell/HList-ext.pdf
[2] http://blog.sigfpe.com/2009/02/beyond-monads.html
It was entirely possible to have a prototype Rust embedded into Haskell and have the borrow checker implemented as type-level manipulation over a doubly-parameterized state monad. But it was not; Rust was not embedded into Haskell, and now it will never get effects (even ones as weak as monad transformers) and, as a consequence, will never get proper high-performance software transactional memory.
So here we are: everything in Haskell's strong type system world that would make Rust better was there at the very beginning of the Rust journey, but had no impact on Rust.
Rhyme that with LLM.
Sorry to be pedantic but this is really common in tech products: vim, emacs, any second-brain app, effectiveness of IDEs depending on learning its features, git, and more.
My original point was about history and about how can we extract possible outcome from it.
My other comment tries to amplify that too. Type systems have been strong enough for several decades now; they had everything Rust needed and more, years before Rust began, yet they have little penetration into the real world, the example being that fancy-dandy Rust language.
I understand your point, but would counter with: gdb isn't marketed as a cuddly tool that can let anyone do anything.
Also, I guess you're saying I'm a paid shill, or have otherwise been brainwashed by marketing of the vendors, and therefore my positive experiences with LLMs are a lie? :).
I mean, you probably didn't mean that, but part of my point is that you see those positive reports here on HN too, from real people who've been in this community for a while and are not anonymous Internet users - you can't just dismiss that as "grassroot marketing".
I don't like it. I know it is the way it is because it's supposed to support all the cursed weird stuff you can do in JS, but to me as a fullstack developer who's never really taken the time to deep dive and learn TS properly it often feels more like an obstacle. For my own code it's fine, but when I have to work with third party libraries it can be really confusing. It's definitely a skill issue though.
I can appreciate that it’s other users who are saying it’s wrong, but that doesn’t escape the point about ignoring the context.
Moreover, it’s unhelpful communication. It gives up on acknowledging a mutually shared context, the natural confusion that would arise from the ambiguous, high-level hype, and the actual down-to-earth reality.
Even if you have found a way to make it work, having someone understand your workflow can’t happen without connecting the dots between their frame of reference and yours.
That's pretty extreme.
We have 2 sibling teams, one the genAI devs and the other the regular GPU product devs. It is entirely unsurprising to me that the genAI developers are successfully using coding agents with long-running plans, while the GPU developers are still more at the level of chat-style back-and-forth.
At the same time, everyone sees the potential, and just like other automation movements, are investing in themselves and the code base.
With more coaching from me, which I might end up doing, I think he would get further. But I expected the chatbot to get him further through the process than this. My conclusion so far is that this technology won't meaningfully shift the balance of programmers to non-programmers in the general population.
It’s just not comparable to the LLM crazy hype train.
And to belabor your other point, I have treesitter, lsp, and GitHub Copilot agent all working flawlessly in neovim. Ts and lsp are neovim builtins now. And it’s custom built for exactly how I want it to be, and none of that blinking shit or nagging dialog boxes all over VSCode.
I have VScode and vim open to the same files all day quite literally side by side, because I work at Microsoft, share my screen often, and there are still people that have violent allergic reactions to a terminal and vim. Vim can do everything VSCode does and it’s not dogshit slow.
For example, I whipped together a Steam API-based tool that gets my game library and enriches it with available data, in maybe 30 minutes of active work.
The LLM (Cursor with Gemini Pro + Claude 3.7 at the time IIRC) spent maybe 2-3 hours on it while I watched some shows on my main display and it worked on my second screen with me directing it.
Could I have done it myself from scratch like a proper artisan? Most definitely. Would I have bothered? Nope.
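For flavor, here is a minimal sketch of the kind of call such a tool starts from; it assumes the standard Steam Web API IPlayerService/GetOwnedGames endpoint and uses hypothetical environment variables for the key and account id, so treat it as an illustration rather than the actual tool described above.

    // Sketch only: fetch an owned-games list from the Steam Web API and print it.
    // Requires the reqwest (blocking + json features) and serde_json crates.
    use std::error::Error;

    fn main() -> Result<(), Box<dyn Error>> {
        let key = std::env::var("STEAM_API_KEY")?; // hypothetical env var
        let steam_id = std::env::var("STEAM_ID")?; // hypothetical env var
        let url = format!(
            "https://api.steampowered.com/IPlayerService/GetOwnedGames/v0001/?key={key}&steamid={steam_id}&include_appinfo=1&format=json"
        );
        let body: serde_json::Value = reqwest::blocking::get(url)?.json()?;
        if let Some(games) = body["response"]["games"].as_array() {
            for game in games {
                println!(
                    "{} ({} min played)",
                    game["name"].as_str().unwrap_or("?"),
                    game["playtime_forever"]
                );
            }
        }
        Ok(())
    }

The "enrichment" part would then be more calls like this per game, which is exactly the kind of glue code an agent can grind through unattended.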
More generally, these execs are talking their book, as they're in low-margin, capital-intensive businesses whose future is entirely dependent on raising a bunch more money, so hype and insane claims are necessary for funding.
Now, maybe they do sortof believe it, but if so, why do they keep hiring software engineers and other staff?
Correct, I think you've read too much into it. Grassroots marketing is not a pejorative term, either. Its strategy is to trigger positive reviews about your product, ideally by independent, credible community members, indeed.
That implies that those community members have motivations other than being paid. Ideologies and shared beliefs can be some of them. Being happy about the product is a prerequisite, whatever that means for the individual user.
> By early 2030, the robot economy has filled up the old SEZs, the new SEZs, and large parts of the ocean. The only place left to go is the human-controlled areas. [...]
> The new decade dawns with Consensus-1’s robot servitors spreading throughout the solar system. By 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun. The surface of the Earth has been reshaped into Agent-4’s version of utopia: datacenters, laboratories, particle colliders, and many other wondrous constructions doing enormously successful and impressive research.
This scenario prediction, co-authored by a former OpenAI researcher (now at the Future of Humanity Institute), received almost a thousand upvotes here on HN and the attention of the NYT and other large media outlets.
If you read that and still don't believe the AI hype is _extreme_ then I really don't know what else to tell you.
--
But for very large, stable codebases it is a mixed bag of results. Their selection of candidates is valid, but it probably illustrates a worst-case scenario for time-based measurement.
If an AI code editor cannot make changes faster than a dev, or cannot provide relevant suggestions quickly enough and without being distracting, then you lose time.
It’s much more useful getting something off the ground than maintaining a huge codebase.
Very often, I have entire tasks that I can't offload to the Agent. I won't say I'm 20x more productive; it's probably more in the range of 15% to 20% (but I can't measure that, obviously).
The steam-powered loom was not good for the Luddites either. Good for society at large in the long term, but all the negative points that a 40-year-old knitter in 1810 could make against the steam-powered loom would have been perfectly reasonable and accurate, judged from that individual's perspective.
I think we can be more open minded that an absolutely brand new technology (literally did not exist 3y ago) might require some amount of learning and adjusting, even for people who see themselves as an Einstein if only they wished to apply themselves.
Same can be said for version control and programming.
It's not showing its reasoning. "Reasoning" models are trained to output more tokens in the hope that more tokens mean fewer hallucinations.
It's just a marketing trick, and there is no evidence this sort of fake "reasoning" actually gives any benefit.
Same thought came when I was reading the article and glad I am not alone.
Anecdotally, the most common productivity boost comes from cutting down weird slow steps in processes: writing an automation script, a campaign previewer for marketing, etc.
Coding seems to become more efficient (again, anecdotally) but not outright faster: you can do better work on a new feature in the same or slightly less time.
Idle time at 4% was interesting. I think this number goes higher the more you use a specific tool and adjust your workflow to it.
It's not that I don't like tinkering. I really enjoy tinkering with config files, but I never could understand nvim personally, since I usually want an LSP / good-enough experience that nvim or any lunarvim etc. couldn't provide without me installing additional software.
I pointed this out in my post for a reason. I get it. But even given a different person is saying the same thing every time a new release comes out - the effect on my prior is the same.
You'd be hard-pressed to find a popular editor without vim bindings.
The main difference I see is that LLMs are flaky: getting better over time, but still flakier than traditional tooling like debuggers.
> Programming languages are notoriously full of unnecessary complexity. Personal pet peeve: Rust lifetime management. If this is what it takes, just use GC (and I am - golang).
Lifetime management is an inherently hard problem, especially if you need to be able to reason about it at compile time. I think there are some arguments to be made about tooling or syntax making reasoning about lifetimes easier, but not trivial. And in certain contexts (e.g., microcontrollers) garbage collectors are out of the question.
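To make the "inherently hard, and checked at compile time" point concrete, here is a small Rust sketch (a generic illustration, not tied to any particular codebase): the borrow checker has to prove that a returned reference is never used after whatever it might borrow from is gone.

    // The compiler must verify the returned reference is not used after
    // either input it may borrow from has been dropped.
    fn longer<'a>(x: &'a str, y: &'a str) -> &'a str {
        if x.len() >= y.len() { x } else { y }
    }

    fn main() {
        let a = String::from("a fairly long string");
        let result;
        {
            let b = String::from("short");
            result = longer(&a, &b);
            println!("{}", result); // fine: `b` is still alive here
        }
        // println!("{}", result); // would not compile: `b` is dropped,
        //                         // and `result` may borrow from it
    }

With a GC, the same "is this still reachable?" question is answered at runtime instead, which is the trade-off the quoted comment is choosing.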
Keep writing your code manually, nobody cares.
You clearly don't have a slightest idea of what you're talking about.
Emacs is actually still amazing in the LLM era. Language is all about plain text. Plain text remains crucial and will remain important because it's human-readable, machine-parsable, version-control friendly, lightweight and fast, platform-independent, and resistant to obsolescence. Even when analyzing huge amounts of complex data - images, videos, audio-recordings, etc., we often have to reduce it to text representation.
And there's simply no tool better than Emacs today that is well-suited for dealing with plain text. Nothing even comes close to what you can do with text in Emacs.
Like, check this out - I am right now transcribing my audio notes into .srt (subtitle) files. There's subed-mode where you can read through subtitles, and even play the audio, karaoke style, while following the text. I can do so many different things from here - extract the summaries, search through things, gather analytics - e.g., how often have I said 'fuck' on Wednesdays, etc.
I can similarly play YouTube videos in mpv, while controlling the playback, volume, speed, etc. from Emacs; I can extract subtitles for a given video and search through them, play the vid from the exact place in the subs.
I very often grab a selected region of screen during Zoom sessions to OCR and extract text within it and put it in my notes - yes, I do it in Emacs.
I can probably examine images, analyze their elements, create comprehensive summaries, and formulate expert artistic evaluation and critique and even ask Emacs to read it aloud back to me - the possibilities are virtually limitless.
It allows you to engage with a vast array of LLMs from anywhere. I can ask a question in the midst of typing a Slack reply or reading HN comments or when composing a git commit; I can fact-check my own assumptions. I can also use tools to analyze and refactor existing codebases and vibe-code new stuff.
Anything like that even five years ago seemed like a dream; today it is possible. We can now reduce any complex digital data to plain text. And that feels miraculous.
If anything, the LLM era has made Emacs an extremely compelling choice. To be honest, for me it's not even a choice; it's the only seriously viable option I have, despite all its drawbacks. Everything else doesn't even come close: other options either lack critical features or have merely promising ones. Emacs is absolutely, hands-down, one of the best tools we humans have ever produced for dealing with plain text. Anyone who thinks that's an opinion and not a fact simply hasn't grokked Emacs or has no clue what you can do with it.
Cognitive science was able to explain stuff like biases, pattern recognition, language, etc., which behavioral science thought it could explain but couldn't. In the 1950s behaviorism was really the only game in town (except for psychometrics, which failed in a much more complete, albeit less spectacular, way than behaviorism), so understandably scientists (and philosophers) went a little overboard with it (kind of like evolutionary biology did in the 1920s).
I think a more fair viewpoint is to claim that behaviorism’s heyday in the 1950s has passed, but it still provides an excellent theoretical framework for some of human behavior, and along with cognitive science, is able to explain most of what we know about human behavior.
So if the claim is that I can get everything I have out of vim, most importantly being unbeatably fast text buffers, and I don’t need a suitcase full of config files, that’s very compelling.
Is that the promise of zed?
We retroactively assume that everyone just obviously adopts new technology, yet I'm sure there were tons and tons of people who retired rather than learn how computers worked when the PC revolution was happening.
I’m so glad we’re past that now and can join forces against a common enemy.
Thank you brother.
No one would call one a noob for not using Vim or Emacs. But they might for a different reason.
If someone blindly rejects even the notion of these tools without attempting to understand the underlying ideas behind them, that certainly suggests the dilettante nature of the person making the argument.
The idea of vim-motions is a beautiful, elegant, pragmatic model. Thinking that it is somehow outdated is a misapprehension. It is timeless just like musical notation - similarly it provides compositional grammar and universal language, and leads to developing muscle memory; and just like it, it can be intimidating but rewarding.
Emacs is grounded on another amazing idea - one of the greatest ideas in computer science, the idea of Lisp. And Lisp is just as everlasting, like math notation or molecular formulas — it has rigid structural rules and uniform syntax, there's compositional clarity, meta-reasoning and universal readability.
These tools remain in use today despite the abundance of "brand new technology" because time and again these concepts have proven to be highly practical. Nothing prevents vim from being integrated into new tools, and the flexibility of Lisp allows for seamless integration of new tools within the old-school engine.
I asked it to implement two bicubic filters: a high-pass filter and a high-shelf filter. Some context: using the Gemini web app, it would spit out the exact code I need, with the interfaces I require, in one shot, because this is truly trivial C++ code to write.
15 million tokens and an hour and a half later I now had a project that could not build, the filters were not implemented and my trust in AI agentic workflows broken.
It cost me nothing; I just reset the repo, and I was watching YouTube videos for that hour and a half.
Your mileage may vary and I’m very sure if this was golang or typescript it might have done significantly better, but even compared to the exact same model in a chat interface my experience was horrible.
I’m sticking to the slightly “worse” experience of using the chat interface which does give me significant improvements in productivity vs letting the agent burn money and time and not produce working code.
Emacs veterans simply rejected the entire concept of modality, without even trying to understand what it is about. Emacs is inherently a modal editor. Key-chords are stateful, Transient menus (e.g. Magit) are modal, completion is modal; isearch, dired, calc, C-u (universal argument), recursive editing — these are all modal. What the idea of vim-motions offers is a universal, simplified, structured language for dealing with modality, that's all.
Vim users on the other hand keep saying "there's no such thing as vim-mode". And to a certain degree they are right — no vim plugin outside of vim/neovim implements all the features — IdeaVim, VSCode vim plugins, Sublime, etc. - all of them are full of holes and glaring deficiencies. With one notable exception — Evil-mode in Emacs. It is so wonderfully implemented, you wouldn't even notice that it is a plugin, an afterthought. It really does feel like a baked-in, native feature of the editor.
There are no "wars" in our industry — pretty much only misunderstanding, misinterpretation and misuse of certain ideas. It's not even technological — who knows, maybe it's not even sociotechnological. People simply like talking past each other, defending different values without acknowledging they're optimizing for different things.
It's not Vim's, Emacs' or VSCode's fault that we suffer from identity investment - we spend hundreds of hours using one so it becomes our identity. We suffer from simplification impulse — we just love binary choices, we constantly have the nagging "which is better?" question, even when it makes little sense. We're predisposed to tribal belonging — having a common enemy creates in-group cohesion.
But real, experienced craftspeople... they just use whatever works best for them in a given context. That's what we all should strive for — discover old and new ideas, study them, identify good ones, borrow them, shelve the bad ones (who knows, maybe in a different context they may still prove useful). Most importantly, use whatever makes you and your teammates happy. It's far more important than being more productive or being decisively right. If the stupid thing works, perhaps it ain't that stupid?
My previous employer didn't even allow me to use Vim until I learned it properly, so it wouldn't affect my productivity. Why would using Cursor automatically make you better at something if it's just new to you and you are already an elite programmer according to this study?
No, it's the non-coding managers who vibe-coded a half-working prototype, not other users. And here, the Dunning-Kruger effect is at play - those non-coding types do not understand that AI is not working for them either.
Full disclosure: I do rely on vibe-coded jq lines in one-off scripts that will definitely not process more data after the single intended use, and this is where AI saves my time.
Позабыты хлопоты,
Остановлен бег,
Вкалывают роботы,
Счастлив человек!
Worries forgotten,
The treadmill doesn't run,
Robots are working,
Humans have fun!

I'm sure nobody really rejects the notion of LLMs, but people sure as hell do like to moan if the new technology doesn't absolutely perfectly fit their own way of working. Does that make them any different from people wanting an editor which is intuitive to use? Nobody will ever know.
I don't know, people change their opinions all the time. I wasn't convinced about many ideas throughout my career, but I'm glad I found convincing arguments for some of them later.
> wanting an editor which is intuitive to use
Are you implying that Vim and Emacs are not?
Intuitive != Familiar. What feels unintuitive is often just unfamiliar. Vim's model actually feels pretty intuitive after the initial introduction. Emacs is pretty intuitive for someone who grokked Lisp basics - structural editing and REPL-driven development. The point is also subjective, for some people "intuitive editor" means "works like MS Word", but that's just one design philosophy, not an objective standard.
Tools that survive 30+ years and maintain passionate user bases must be doing something right, no?
> the new technology doesn't absolutely perfect fit their own way of working.
Emacs is extremely flexible, and thanks to that, I've rarely complained about new things not fitting my ways. I bend tools to fit my workflow if they don't align naturally — that's just the normal approach for a programmer.
I agree with you that AI dev tools are overhyped at the moment. But IDEs were, in fact, overhyped (to a lesser degree) in the past.
Could be the case for some, but I also think that there is not much to climb on the learning curve for AI agents.
In my opinion, it's more interesting that the study also states that AI capabilities may be comparatively lower on existing code:
> Our results also suggest that AI capabilities may be comparatively lower in settings with very high quality standards, or with many implicit requirements (e.g. relating to documentation, testing coverage, or linting/formatting) that take humans substantial time to learn.
This is consistent with my personal/peer experience. On existing code, you have to do trial and error with AI until you get a 'good' result, or heavily modify the AI-generated code yourself (which is often slower than writing it yourself from the beginning).
I think one would have to compare the difficulty level of tasks.
I speculate that on easy tasks, LLMs can do a great job based on their training data alone, so you'd experience a speedup regardless of your prompt engineering skill level. But on large codebases and for complex tasks, an LLM cannot stand on its own legs, and the differentiator becomes the quality of the prompt.
I think you'd need not only expert programmers, but expert programmers who have become expert prompt engineers (you would need some kind of extensive system prompt describing how the large codebase works), and those don't really exist yet, I think.
I have the opposite impression! I find it's hard to think of any other tech product where users expect to master it with no training at all. I think people get tricked into believing they need no training because the tool uses natural language as the UI.
You learn how to use a spreadsheet or a word processor, how to drive a car, sail a boat, play a guitar. In the 90s there were courses that spent hours teaching users how to work a mouse and keyboard!
Of course you need to learn how to use a coding assistant as well, it just makes sense.
There have already been a million words written about how to use LLMs, by people who don't really know how to use LLMs. Everyone is learning; there is a boom, and you can make a fortune selling knowledge about LLMs whether you have that knowledge or not.
It's also possible for the user to be not using it right and that not be a value judgement on the user. We all suck at using new tools, that's part of learning.