> The stream of PRs is coming from requests from the maintainers of the repo. We're experimenting to understand the limits of what the tools can do today and preparing for what they'll be able to do tomorrow. Anything that gets merged is the responsibility of the maintainers, as is the case for any PR submitted by anyone to this open source and welcoming repo. Nothing gets merged without it meeting all the same quality bars and with us signing up for all the same maintenance requirements.
> It is my opinion that anyone not at least thinking about benefiting from such tools will be left behind.
The read here is: Microsoft is so abuzz with excitement/panic about AI taking all software engineering jobs that Microsoft employees are jumping on board with Microsoft's AI push out of a fear of "being left behind". That's not the confidence-inspiring statement they intended it to be; it's the opposite: it underscores that this isn't the .net team "experimenting to understand the limits of what the tools can do" but rather the .net team trying to keep their jobs.
Like, I need to start smashing my face into a keyboard for 10000 hours or else I won't be able to use LLM tools effectively.
If LLMs are this tool that is more intuitive than normal programming and adds all this productivity, then surely I can just wait for a bunch of others to wear themselves out smashing their faces on a keyboard for 10000 hours and then skim the cream off the top, no worse for wear.
On the other hand, if using LLMs is a never-ending nightmare of chaos and misery that's 10x harder than programming (but with the benefit that I don't actually have to learn something that might accidentally be useful), then yeah, I guess I can see why I would need to get in my hours to use it. But maybe I could just not use it.
"Left behind" really only makes sense to me if my KPIs have been linked to LLM Flavor Aid-style participation.
Ultimately, though, physics doesn't care about social conformity and last I checked the machine is running on physics.
Kinda like how word processing used to be an important career skill people put on their resumes. Assuming AI becomes that commonplace and accessible, will it happen fast enough that devs who want good jobs can afford to just wait it out?
If LLM usage is easy then I can't be left behind because it's easy. I'll pick it up in a weekend.
If LLM usage is hard AND I can otherwise do the hard things that LLMs are doing then I can't be left behind if I just do the hard things.
Still, the only way I can be left behind is if LLM usage is nonsense, or the same as just doing it yourself, AND the important thing is telling managers that you've been using it for a long time.
Is the superpower bamboozling management with story time?
Unless we're talking about hard things that I have until now not been able to do. But do LLMs help with that in general?
This scenario breaks out of the hypothetical and the assertive and into the realm of the testable.
Provide for me the person who can use LLMs in a way that is hard, but that they are good at, in order to do things which are hard but which they are currently bad at.
I will provide a task which is hard.
We can report back the result.
A PM using an LLM to develop a software product without a dev?
Also, I'm picking the problem. I have a few in mind, but I would want to get the background of the person running the experiment first, to ensure that the problem is something we can expect to be hard for that person.