You list what look like quite greenfield projects: very self-contained, and very data-science oriented. These are significantly uncharacteristic of software engineering in the large. They have nothing to do with interacting systems, each with hundreds of thousands of lines of code.
Software engineers working on large systems (e.g., many microservices, data integration layers, etc.) are working on very different problems. Debugging a microservice system isn't something an LLM can do -- it has no ability, for instance, to trace a request through various APIs from a front-end into a backend layer, into some db, to be transferred to some other db, etc.
This was all common enough stuff for software engineers 20 years ago, and was part of some of my first jobs.
A very large amount of this Pollyanna-LLM view comes not from junior software engineers, but from data scientists who are extremely unfamiliar with software engineering.
Every codebase I listed was over 10 years old and had millions of lines of code. Instagram is probably the world's largest and most used Python codebase, and the camera software I worked on was 13 years old and had millions of lines of C++ and Java. I haven't worked on many self-contained things in my career.
LLMs can help with these things if you know how to use them.
Jobs comprise different tasks, some more amenable to LLMs than others. My view is that where scepticism exists amongst professional senior engineers, it's probably well-founded and grounded in the kinds of tasks they are engaged with.
I'd imagine everyone in the debate is using LLMs to some degree; and that it's mostly about what productivity factor we imagine exists.
That's a function of your tooling more than of your LLM. If you provide your LLM with tool-use facilities to do that querying, I don't see why it can't go off and perform that investigation -- but I haven't tried it yet. Off the back of this comment, though, it's now high on my todo list. I'm curious.
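To make the idea concrete, here is a minimal sketch of what "tool-use facilities" could look like: log search exposed as a function the model can call per service, with a dispatcher routing the model's requested calls. The service names, the in-memory log store, and the dispatch loop are all hypothetical illustrations, not any real agent framework's API.

```python
# Hypothetical sketch: exposing per-service log search as a tool an LLM
# agent could call while tracing a request across services. All names
# and data here are illustrative.

SERVICE_LOGS = {
    "frontend": ["req 42 -> POST /checkout", "req 42 -> call billing-api"],
    "billing-api": ["req 42 -> INSERT orders db", "req 42 -> publish event"],
}

def search_logs(service: str, request_id: str) -> list[str]:
    """Tool: return log lines for one service mentioning a request id."""
    return [line for line in SERVICE_LOGS.get(service, []) if request_id in line]

# Registry the dispatcher uses to route model-requested tool calls.
TOOLS = {"search_logs": search_logs}

def run_tool_call(name: str, **kwargs):
    """Dispatch a tool call the model asked for to the matching function."""
    return TOOLS[name](**kwargs)

# An agent tracing request 42 might issue a sequence of calls like:
trace = run_tool_call("search_logs", service="frontend", request_id="req 42")
trace += run_tool_call("search_logs", service="billing-api", request_id="req 42")
```

The point isn't the trivial lookup; it's that once the querying surface (OpenSearch, Honeycomb, whatever) is wrapped as callable tools, the model can chain calls across service boundaries the same way a first responder would.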
TFA covers a similar case:
>> But I’ve been first responder on an incident and fed 4o — not o4-mini, 4o — log transcripts, and watched it in seconds spot LVM metadata corruption issues on a host we’ve been complaining about for months. Am I better than an LLM agent at interrogating OpenSearch logs and Honeycomb traces? No. No, I am not.
For the first 10 years of my career I was a contractor walking into national and multinational orgs with large existing codebases, working within pre-existing systems, not merely "codebases". Both hardware systems (e.g., new 4G networking devices just as they were released) and distributed software systems.
I can think of many daily tasks I had across these roles that would not be very significantly sped up by an LLM. I can also see that there are a few that would be. I also shudder to think what time would be wasted by me trying to learn 4G networking from LLM summarisation of new docs, and spending as much time working from improperly summarised code (etc.).
I don't think senior software engineers are so sceptical here that they're saying LLMs are not, locally, helpful to their jobs. The issue is how local this help seems to be.
They are busy doing their work and prefer their competitors (other developers) to not use these tools.
Someone else responds that video of the author actually using the tools would be more convincing.
Then you respond with essentially “no one wants to convince you and they’re too busy to try”.
Now if you misspoke and you’d like to change what you said originally to “many AI users do want to convince AI skeptics to use AI, but they only have enough time to write blog posts not publish any more convincing evidence”, then sure that could be the case.
But that ain’t what you said. And there’s no way to interpret what you said that way.