zlacker

[return to "Watching AI drive Microsoft employees insane"]
1. diggan+L1 2025-05-21 11:18:44
>>laiysb+(OP)
Interesting that every comment has the "Help improve Copilot by leaving feedback using the 👍 or 👎 buttons" suffix, yet none of the comments received any feedback, either positive or negative.

> This seems like it's fixing the symptom rather than the underlying issue?

This is also my experience when you haven't set up a proper system prompt to address this for everything an LLM does. The funniest PRs are the ones that "resolve" test failures by removing or commenting out the test cases, or by changing the assertions. Google's and Microsoft's models seem more likely to do this than OpenAI's and Anthropic's; I wonder if there's some difference in their internal processes that's leaking through here?

The same PR as the quote above continues with 3 more messages before the human seemingly gives up:

> please take a look

> Your new tests aren't being run because the new file wasn't added to the csproj

> Your added tests are failing.

I can't imagine how the people who have to deal with this are feeling. It's like having a junior developer, except they don't even read what you're telling them and have zero capacity to understand what they're actually doing.
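For readers wondering how added tests can silently not run at all: in .NET projects that disable the SDK's default file globbing, every source file has to be listed in the `.csproj` by hand. A hypothetical sketch (file and project names invented for illustration, not taken from the actual PR):

```xml
<!-- Hypothetical test project file; names are invented for illustration. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net9.0</TargetFramework>
    <!-- With default Compile items disabled, source files must be listed explicitly. -->
    <EnableDefaultCompileItems>false</EnableDefaultCompileItems>
  </PropertyGroup>
  <ItemGroup>
    <Compile Include="ExistingTests.cs" />
    <!-- Without a line like this, a newly added test file is never compiled,
         so the test runner reports green while the new tests never execute: -->
    <Compile Include="NewTests.cs" />
  </ItemGroup>
</Project>
```

That failure mode is easy for a human to miss too, but a human usually notices when told twice.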

Another PR: https://github.com/dotnet/runtime/pull/115732/files

How are people reviewing that? 90% of the page height is taken up by "Check failure" annotations; you can hardly see the code/diff at all. And as a cherry on top, the unit test has a comment that says "Test expressions mentioned in the issue". This whole thing would be fucking hilarious if I didn't feel so bad for the humans who are on the other side of this.

2. yubble+A9 2025-05-21 12:26:34
>>diggan+L1
This field (SE - when I started out back in the late 80s) was enjoyable. Now it has become toxic: the interview process, small-fry companies imitating "big tech" song and dance, and now this. Is there any joy left in being a professional software developer?
3. diggan+Ra 2025-05-21 12:36:53
>>yubble+A9
> This field (SE - when I started out back in late 80s) was enjoyable. Now it has become toxic

I feel the same way today, but I got started professionally around 2012. I wonder how much of this is just our fading optimism after seeing how shit really works behind the scenes, and how much the industry itself is responsible for it. I know we're not the only two people feeling this way either, but it seems each of us has a different timeline for when it turned from "enjoyable" to "get me out of here".

4. salawa+1f 2025-05-21 13:11:53
>>diggan+Ra
My issue stems from the attitudes of the people we're doing it for. I started out doing it for humanity. To bring the bicycle for the mind to everyone.

Then one day I woke up and realized the ones paying me were also the ones using it to run over or do circles around everyone else not equipped with a bicycle yet; and were colluding to make crippled bicycles that'd never liberate the masses as much as they themselves had been previously liberated; bicycles designed to monitor, or to undermine their owner, or more disgustingly, their "licensee".

So I'm not doing it anymore. I'm not going to continue making deliberately crippled, overly complex, legally encumbered bicycles for the mind, purely intended as subjects for ARR extraction.

5. ecocen+0l 2025-05-21 13:51:59
>>salawa+1f
It's hard to find anything wrong with your conclusions except that you're leaving out the part where they're trying to automate our contributions to devalue our skills. I'm surprised there isn't a movement to halt the use of AI for certain tasks in software development on the same level as the active resistance from doctors against socialized medicine in the US. These expensive toys will inevitably introduce catastrophic level bugs and security vulnerabilities into critical infrastructure software. Right now, most of Microsoft's product offerings, like GitHub and Office, are critical infrastructure software.
6. ryandr+7D 2025-05-21 15:31:33
>>ecocen+0l
> I'm surprised there isn't a movement to halt the use of AI for certain tasks in software development on the same level as the active resistance from doctors against socialized medicine in the US.

This is also shocking to me. Especially here on HN! Every tech CEO on earth is salivating over AI coding because they want it to devalue and/or replace their expensive human software developers. Whether or not that will actually happen, that's the purpose of building all of these "agentic" coding tools. And here we are, dumbass software engineers, cheerleading for and building the means of our own destruction! We downplay it with bullshit like "Oh, but AI is just a way to augment our work, it will never really replace us or lower our compensation!" Wild how excited we all are about this.

7. blibbl+mc1 2025-05-21 18:44:16
>>ryandr+7D
> This is also shocking to me. Especially here on HN!

this website is owned and operated by a VC, who built fortunes off exploiting these people

"workers and oppressed peoples of all countries, unite!" is the last thing I'd expect to see here
