zlacker

[parent] [thread] 9 comments
1. samtp+(OP)[view] [source] 2025-05-14 20:28:58
I've pretty clearly seen the critical thinking ability of coworkers who depend on AI too much sharply decline over the past year. Instead of taking 30 seconds to break down the problem and work through assumptions, they immediately copy/paste into an LLM and spit back what it tells them.

This has led to their abilities stalling while their output seemingly goes up. But when you look at the quality of their output, and their ability to push projects through the last 10% or make adjustments to an already completed project without breaking things, it's pretty horrendous.

replies(3): >>Ethery+e1 >>andy99+72 >>jobs_t+Vd
2. Ethery+e1[view] [source] 2025-05-14 20:37:08
>>samtp+(OP)
My observations align with this pretty closely. I have a number of colleagues who I wager are largely using LLMs, judging both by changes in coding style and by how much they suddenly add comments, and I can't help but notice a drop in the quality of their output. Issues that have no business making it to code review are now regularly left for others to catch; it often feels like they don't even look at their own diffs. What to make of it, I'm not entirely sure. I do think there are ways LLMs can help us work better, but they can also lead to considerably worse outcomes.
replies(1): >>jimbok+12
3. jimbok+12[view] [source] [discussion] 2025-05-14 20:40:33
>>Ethery+e1
Just replace your colleagues with the LLMs they are using. You will reduce costs with no decrease in the quality of work.
4. andy99+72[view] [source] 2025-05-14 20:41:04
>>samtp+(OP)
I think lack of critical thinking is the root cause, not a symptom. I think pretty much everyone uses LLMs these days, but you can tell who sees the output and considers it "done" vs who uses LLM output as an input to their own process.
replies(1): >>mystif+96
5. mystif+96[view] [source] [discussion] 2025-05-14 21:05:47
>>andy99+72
I mean, I can tell that I'm having this problem and my critical thinking skills are otherwise typically quite sharp.

At work I've inherited a Kotlin project, and I've never touched Kotlin or Android before, though I'm an experienced programmer in other domains. ChatGPT has been guiding me through what needs to be done. The problem I'm having is that it's just too damn easy to follow its advice without checking. I might save a few minutes over reading the docs myself, but I don't get the context the docs would have given me.

I'm a 'Real Programmer' and I can tell that the code is logically sound and self-consistent. The code works and it's usually rewritten so much as to be distinctly my code and style. But still it's largely magical. If I'm doing things the less-correct way, I wouldn't really know because this whole process has led me to some pretty lazy thinking.

On the other hand, I very much do not care about this project. I'm very sure that it will be used just a few times and never see the light of day again. I don't expect to ever do android development again after this, either. I think lazy thinking and farming the involved thinking out to ChatGPT is acceptable here, but it's clear how easily this could become a very bad habit.

I am making a modest effort to understand what I'm doing, and I'm completely rewriting or ignoring the code the AI gives me; it serves more as an API reference and example. I can definitely see how a less-seasoned programmer might get suckered into blindly accepting AI code and iterating on prompts until it works. It's pretty scary to think about how the coming generations of programmers are going to experience and conceptualize programming.

6. jobs_t+Vd[view] [source] 2025-05-14 22:05:43
>>samtp+(OP)
As someone who vibe codes at times (and is a professional programmer), I'm curious how y'all go about resisting this. Just avoid LLMs entirely and do everything by hand? Very rigorously go over any LLM-generated code before committing?

It certainly is hard, when I'm writing unit tests, say, to avoid the temptation to throw it all into Cursor and prompt until it works.

replies(2): >>brecke+je >>samtp+3n
7. brecke+je[view] [source] [discussion] 2025-05-14 22:10:17
>>jobs_t+Vd
Set a budget. Get rate limited. Let the experience remind you how much time you're actually wasting letting the model write good-looking but buggy code, versus just writing code responsibly.
8. samtp+3n[view] [source] [discussion] 2025-05-14 23:29:02
>>jobs_t+Vd
I resist it by realizing that while LLMs are good at things like decoding obtuse error messages, having them write too much of your code leads to a project becoming almost impossible to maintain or add to. And there are many cases where you spend more time trying to correct errors from the LLM than you would have spent by slowing down and inspecting the code yourself.
replies(1): >>christ+mv
9. christ+mv[view] [source] [discussion] 2025-05-15 00:56:44
>>samtp+3n
If you don’t commit its output until it’s in a shape that is maintainable and acceptable to you— just like with any other pair programming exercise— you’ll be fine. I do think your skills will atrophy over time, though. I’m not sure what the right balance is, here.
replies(1): >>AndyNe+VG
10. AndyNe+VG[view] [source] [discussion] 2025-05-15 03:17:59
>>christ+mv
My honest opinion is that some of my skills are atrophying, and some of them are increasing.

I have managed a Python app for a long time because it's part of a much larger set of services I manage. I've never been particularly comfortable with it.

Now I'm learning easily and understanding the Python much, much better.

I think I'm atrophying on a lot of syntax and on typing the automatic stuff.

It doesn't really feel straightforward that it's one or the other.
