In less than 5 minutes Claude created code that:
- encapsulated the API call
- modeled the API response using TypeScript
- created a reusable, responsive UI component for the card (including a loading state)
- included it in the right part of the page
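To give a flavor of the first two bullets, here is a minimal TypeScript sketch of what "encapsulating the API call and modeling the response" can look like. The endpoint, the `WeatherCard` shape, and the helper names are all hypothetical, not the code Claude actually produced, and the React card itself is omitted for brevity:

```typescript
// Hypothetical shape of the API response, modeled as a TypeScript interface
// so the rest of the app gets type checking instead of raw JSON.
interface WeatherCard {
  title: string;
  value: number;
  unit: string;
}

// Encapsulated API call: callers never touch fetch() or the raw payload.
async function fetchWeatherCard(endpoint: string): Promise<WeatherCard> {
  const res = await fetch(endpoint);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return (await res.json()) as WeatherCard;
}

// Small pure helper the UI component could call to render the card text.
function formatCard(card: WeatherCard): string {
  return `${card.title}: ${card.value} ${card.unit}`;
}
```

The point isn't that any single piece is hard; it's that the LLM produces the whole stack of small, well-typed pieces in one pass.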
Even if I typed at 200 wpm, I couldn't produce that much code from such a simple prompt.
I've had similar gains refactoring back-end code.
That said, there are cases where writing the code yourself is faster than writing a detailed enough prompt, BUT those cases are becoming the exception with each new LLM iteration. I noticed that after the jump from Claude 3.7 to Claude 4, my prompts can be far less technical.
Afterwards I make sure the LLM passes all the tests before I spend any time reviewing the code. I find this keeps the review -> prompt -> review iteration count low.
I personally love writing code with an LLM. I'm a sloppy typist but love programming, and I find it's great burnout prevention.
For context: Node.js/React development (a very LLM-friendly stack).