> It is my opinion that anyone not at least thinking about benefiting from such tools will be left behind.
The read here is: Microsoft is so abuzz with excitement/panic about AI taking all software engineering jobs that Microsoft employees are jumping on board with Microsoft's AI push out of a fear of "being left behind". That's not the confidence-inspiring statement they intended it to be; it's the opposite. It underscores that this isn't the .NET team experimenting to understand the limits of the tools, but rather the .NET team trying to keep their jobs.
If they weren't experimenting with AI and coding and took a more conservative approach while other companies like Anthropic were running similar experiments, I'm sure HN would also be critiquing them, as a stodgy big corporation, for not keeping up.
As long as they're willing to take risks by trying and failing on their own repos, it's fine in my book. Even though I'd personally never let that stuff touch a professional GitHub repo.
It's like the 2025 version of not using an IDE.
It's a powerful tool. You still need to know when to and when not to use it.
I think we should not read too much into it. He is honestly exploring how much this tool can help him resolve trivial issues. Maybe he was asked to do so by some of his bosses, but he's unlikely to fear the tool replacing him in the near future.
That's right on the mark. It will save you a little bit of work on tasks that aren't the bottleneck on your productivity, and disrupt some random tasks that may or may not be important.
It makes so little difference that plenty of people in 2025 don't use an IDE, and looking at their performance from the outside one just can't tell.
Except that LLMs have less potential to improve your tasks and more potential to be disruptive.
https://www.theregister.com/2025/05/16/microsofts_axe_softwa...
Perhaps they were fired for failing to show enthusiasm for AI?
Like, I need to start smashing my face into a keyboard for 10000 hours or else I won't be able to use LLM tools effectively.
If the LLM is this tool that is more intuitive than normal programming and adds all this productivity, then surely I can just wait for a bunch of others to wear themselves out smashing their faces on a keyboard for 10000 hours and then skim the cream off the top, no worse for wear.
On the other hand, if using LLMs is a neverending nightmare of chaos and misery that's 10x harder than programming (but with the benefit that I don't actually have to learn something that might accidentally be useful), then yeah I guess I can see why I would need to get in my hours to use it. But maybe I could just not use it.
"Left behind" really only makes sense to me if my KPIs have been linked with LLM flavor aid style participation.
Ultimately, though, physics doesn't care about social conformity and last I checked the machine is running on physics.
Kinda like how word processing used to be an important career skill people put on their resumes. Assuming AI becomes that commonplace and accessible, will it happen fast enough that devs who want good jobs can afford to just wait it out?
Even for writing tests, you have to proofread every single line and triple-check they didn't write a broken test. It's absolutely exhausting.
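For instance, a test like this (a hypothetical sketch with my own names, pytest-style, not from the thread) looks plausible at a glance but can never fail, because the "expected" value is re-derived with the same formula as the implementation:

```python
# Hypothetical illustration of a subtly broken test -- the kind of
# thing that's easy to miss when skimming LLM-generated tests.

def apply_discount(price: float, pct: float) -> float:
    """Implementation under test (illustrative)."""
    return price * (1 - pct / 100)

def test_apply_discount():
    price, pct = 100.0, 10.0
    # The expected value is computed with the exact same formula as
    # the implementation, so a bug in the formula passes the test too.
    expected = price * (1 - pct / 100)
    assert apply_discount(price, pct) == expected
```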
If LLM usage is easy then I can't be left behind because it's easy. I'll pick it up in a weekend.
If LLM usage is hard AND I can otherwise do the hard things that LLMs are doing then I can't be left behind if I just do the hard things.
Still, the only way I can be left behind is if LLM usage is nonsense or the same as just doing it yourself, AND the important thing is telling managers that you've been using it for a long time.
Is the superpower bamboozling management with story time?
Half of Microsoft (especially server-side) still runs on .NET, and there are no real contributors outside of Microsoft. So it is a vital project.
Law, civil service, academia and those who learnt enough LaTeX and HTML to understand text documents are in the minority.
Unless we're talking about hard things that I have up til now not been able to do. But do LLMs help with that in general?
This scenario breaks out of the hypothetical and the assertive and into the realm of the testable.
Provide for me the person who can use LLMs in a way that is hard but they are good at in order to do things which are hard but which they are currently bad at.
I will provide a task which is hard.
We can report back the result.
At the moment, I'd argue that doing much more than what, say, Apple is doing would be what is potentially catastrophic. Not doing anything would be minimally risky, and doing just a little bit would be the no-risk play. I think Microsoft is making this mistake in a big way and will continue to lose market share over it and burn cash, albeit slowly, since they are already giants. The point is, it's a giant with momentum going in the opposite direction from what they want, and they are incapable of fixing the things causing it to go in that direction because their leadership has become delusional.
How can it be that people expect that pumping more energy into a closed system could do anything but raise entropy? Because that's what it is. You attach GPU farms to your code base and make them pump code into it? You're pumping energy into a closed system. The result cannot be anything other than greater entropy.
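For reference, the second-law statement being leaned on here, taken at face value (mapping codebases onto thermodynamic systems is this comment's metaphor, not established physics), is the Clausius inequality:

```latex
% Clausius inequality: for a system receiving heat \delta Q across a
% boundary at temperature T, the entropy change over a process obeys
\[
  \Delta S \;\ge\; \int \frac{\delta Q}{T}
\]
% For an isolated system (\delta Q = 0) this reduces to \Delta S \ge 0:
% entropy cannot decrease unless heat (and entropy) flows out somewhere.
```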
The reasons LLMs fail so often are not related to the fundamentals of "garbage in, garbage out".
A PM using an LLM to develop a software product without a dev?
Also, I'm picking the problem. I have a few in mind, but I would want to get the background of the person running the experiment first, to ensure that the problem is something we can expect to be hard for them.