In fact, since I don't need to do low-thinking tasks like writing boilerplate or repetitive tests, I find my thinking ratio is actually higher than when I write code normally.
That said, architectural problems have also been less difficult, for the simple fact that research and prototyping have become faster and cheaper.
I find the best uses, at least for myself, are smaller parts of my workflow where I'm not going to learn anything from doing it:
- build one to throw away: give me a quick prototype to get stakeholder feedback
- straightforward helper functions: I have the design and parameters planned, just need an implementation that I can review
- tab-completion code-gen
- if I want leads for looking into something (libraries, tools) and Googling isn't cutting it
When I'm just programming, I spend a lot more time working through a single idea, or a single function. It's much less tiring.
The point they are making is that using AI tools makes it a lot harder for them to keep up the discipline to think hard.
This may or may not be true for everyone.
My observation: I've always had that "sound." I don't know or care much about what that implies. I will admit I'm now deliberately avoiding em dashes, whereas I was once an enthusiastic user of them.
> By “thinking hard,” I mean encountering a specific, difficult problem and spending multiple days just sitting with it to overcome it.
The "thinking hard" I do with an LLM is more like management thinking. Its chaotic and full of conversations and context switches. Its tiring, sure. But I'm not spending multiple days contemplating a single idea.
The "thinking hard" I do over multiple days with a single problem is more like that of a scientist / mathematician. I find myself still thinking about my problem while I'm lying in bed that night. I'm contemplating it in the shower. I have little breakthroughs and setbacks, until I eventually crack it or give up.
It's different.
And then there’s also all the non-systems stuff - what is actually feasible, what’s most valuable, etc. Less “fun”, but still lots of potential for thinking.
I guess my main point is there is still lots to think about even post-LLM, but the real challenge is making it as “fun” or as easily useful as it was pre-LLM.
I think local code architecture was an easy domain where “optimality” was actually tractable, along with the joy that comes with that, and LLMs are harmful to it. But I don’t think there’s nothing to replace it with.
Except without the reward of an intellectual high afterwards.
Thinking is tiring and life is complicated; the tool makes it easy to slip into bad habits, and bad habits are hard to break even when you recognise it's a bad habit.
Many people are too busy/lazy/self-unaware to evaluate their behaviour to recognise a bad habit.
I just changed employers recently, in part due to this: dealing with someone who now appears to spend his time coercing LLMs to give the answers he wants, while becoming deaf to any contradictions. LLMs are very effective at amplifying the Reality Distortion Field for those that live in them. LLMs are replacing blog posts for this purpose.
With AI we can set high bars and do complex original stuff. Obviously boilerplate and common patterns are slop, slapped together without much thinking. That's why you branch into new creative territory. The challenge then becomes visualising the mental map of modular pieces all working nicely together at the right time to achieve your original intent.
Okay, for you that is new - post-LLM.
For me, pre-LLM I thought about all those things as well as the code itself.[1]
IOW, I thought about even more things. Now you (if I understand your claim correctly) think only about those higher level things, unencumbered by stuff like implementation misalignments, etc. By definition alone, you are thinking less hard.
------------------------
[1] Many times the thinking about code itself acted as a feedback mechanism for all those things. If thinking about the code itself never acted as a feedback mechanism to your higher thought processes then ... well, maybe you weren't doing it the way I was.
Before LLMs, once I was done with the design choices you mention - risks, constraints, technical debt, alternatives, possibilities, ... - I cooked up a plan, and with that plan, I could write the code without having to think hard. Actually writing code was relaxing for me, and I feel like I need some relaxation between hard-thinking sessions.
Nowadays we leave the code writing to LLMs because they do it way faster than a human could, but then we have to think hard to check whether the code the LLM wrote satisfies the requirements.
Also, reviewing junior developers' PRs became harder with them using LLMs. Juniors powered by AI are more ambitious and more careless. AI often suggests complicated code the juniors themselves don't understand; they just see that it works and commit it. Sometimes it suggests new library dependencies juniors wouldn't think of themselves, and of course it's the senior's role to decide whether the dependency is warranted and worth including. Average PR length has also increased. And juniors are working way faster with AI, so we spend more time doing PR reviews.
I feel like my whole job has somehow collapsed into reviewing code from both sides: on one side the code that my AI writes, on the other the code that juniors' AI wrote, the volume of which has increased. And even though I like reviewing code, it's the hardest part of my profession, and I liked it more when it was balanced with tasks that required less thinking...