No, it doesn't. Typing speed is never the bottleneck for an expert.
As an offline database of Google-tier knowledge, LLMs are useful. Current LLM tech is still half-baked, though; we need:
a) Cheap commodity hardware for running your own models locally. (And by "locally" I mean separate dedicated devices, not something that fights over your desktop's or laptop's resources.)
b) Standard, bulletproof ways to fine-tune models on your own data. (Inference is mostly there already with things like llama.cpp; fine-tuning isn't.)
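On the inference half: llama.cpp ships a `llama-server` binary that speaks an OpenAI-compatible HTTP API, so a dedicated local box really is just a fetch away. A minimal sketch, assuming a server you started yourself (the model path, port, and prompt below are placeholders, not anything prescribed by llama.cpp):

```typescript
// Sketch: querying a local llama.cpp server from Node.js (18+) / TypeScript.
// Assumes you launched it yourself, e.g.:
//   llama-server -m ./my-model.gguf --port 8080

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

async function askLocalModel(messages: ChatMessage[]): Promise<string> {
  // llama-server exposes an OpenAI-compatible endpoint; no API key needed locally.
  const res = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // served model is whatever llama-server loaded
      messages,
      temperature: 0.2,
    }),
  });
  if (!res.ok) throw new Error(`llama-server returned ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

// Usage: everything stays on your own hardware.
askLocalModel([{ role: "user", content: "Summarize my notes." }]).then(console.log);
```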
How could that possibly be true!? It seems equivalent to suggesting that being constrained to analog writing utensils wouldn't bottleneck publishing a book or research paper. At the very least, such a statement implies that people with ADHD can't be experts.
(I'll assume you're not joking, because your post is ridiculous enough to look like sarcasm.)
The answer is that programmers read code 10 times more (and think about code 100 times more) than they write it.
In less than 5 minutes, Claude created code that:

- encapsulated the API call
- modeled the API response in TypeScript
- created a reusable, responsive UI component for the card (including a loading state)
- included it in the right part of the page
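Roughly that shape, as a hedged sketch (the endpoint, field names, and component name are invented for illustration; this is not Claude's actual output):

```tsx
import { useEffect, useState } from "react";

// Model the API response in TypeScript (fields are hypothetical).
interface CardData {
  id: string;
  title: string;
  description: string;
}

// Encapsulate the API call (endpoint is hypothetical).
async function fetchCard(id: string): Promise<CardData> {
  const res = await fetch(`/api/cards/${id}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

// Reusable card component with loading and error states.
export function Card({ id }: { id: string }) {
  const [data, setData] = useState<CardData | null>(null);
  const [error, setError] = useState<string | null>(null);

  useEffect(() => {
    fetchCard(id)
      .then(setData)
      .catch((e) => setError(String(e)));
  }, [id]);

  if (error) return <div className="card card--error">{error}</div>;
  if (!data) return <div className="card card--loading">Loading…</div>;

  return (
    <div className="card">
      <h3>{data.title}</h3>
      <p>{data.description}</p>
    </div>
  );
}
```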
Even if I typed at 200wpm I couldn't produce that much code from such a simple prompt.
I also had similar experiences/gains refactoring back-end code.
That said, there are cases where writing the code yourself is faster than writing a detailed enough prompt, but those cases are becoming the exception with each new LLM iteration. I noticed that after the jump from Claude 3.7 to Claude 4, my prompts can be way less technical.
How many times have you read a story card and, by the time you finished it, thought "This is an easy task, it should take me about an hour to write the code and tests"?
In my experience, in most of those cases the AI can do the same amount of code writing in under 10 minutes, leaving me the other 50 minutes to review the code, make/ask for any necessary adjustments, and move on to another task.
I remember hearing somewhere that humans have a limited daily capacity for decision-making (the "decision fatigue" idea), and it seems to fit here: if I'm writing the code myself, I have to make several decisions on every line, and that's mentally tiring, so I tend to stop and procrastinate frequently.
If an LLM is handling a lot of the details, then I'm just making higher-level decisions, allowing me to make more progress.
Of course, this is pure speculation, and theories like this tend to be wrong, but it is at least consistent with how I feel.
While it's often a luxury, I'd much rather work on code I wrote than code somebody else wrote.
My ability to read code and understand the intent behind it isn't the primary bottleneck when I'm asked to review something; the primary bottleneck is being notified at the right time that a review is waiting for me.
If compilers were never a bottleneck, why would we ever try to make them faster? If build tools were never a bottleneck, why would we ever optimize them? These are just some of the things that can stand between identifying a problem and producing a solution for it.
Afterwards, I make sure the LLM's output passes all the tests before I spend my own time reviewing the code.
I find this process keeps the iteration count low for the review -> prompt -> review loop.
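If you want to automate that gate, here's a trivial sketch; "npm test" stands in for whatever command runs your suite:

```typescript
// Gate script: only surface LLM-generated changes for human review
// once the test suite is green.
import { execSync } from "node:child_process";

try {
  execSync("npm test", { stdio: "inherit" }); // throws on a nonzero exit code
  console.log("Tests green - worth a human review now.");
} catch {
  console.log("Tests failing - send it back to the LLM before reviewing.");
  process.exit(1);
}
```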
I personally love writing code with an LLM. I'm a sloppy typist but love programming, and I find it's great burnout prevention.
For context: Node.js/React development (a very LLM-friendly stack).