There are still significant limitations: no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models are finally able to replace searches and Stack Overflow for a lot of my day-to-day programming.
I find this sentiment increasingly worrisome. It's entirely clear that every last human will be beaten on code design in the coming years (I'm not going to argue whether it's 1 or 5 years away; who cares?).
I wish people would stop holding on to what amounts to nothing, and instead think and talk more about what can be done in this new world. We need good ideas, and I think this could be a place to advance them.
Isn’t software engineering a lot more than just writing code? And I mean like, A LOT more?
Informing product roadmaps, balancing tradeoffs, understanding relationships between teams, prioritizing between separate tasks, pushing back on tech debt, responding to incidents, deciding "it's a feature, not a bug", …
I’m not saying LLMs will never be able to do this (who knows?), but I’m pretty sure SWEs won’t be the only role affected (or even the most affected) if it comes to this point.
Where am I wrong?
Ask yourself how many of these things still matter if you can tell an AI to tweak something and it can rewrite your entire codebase in a few minutes. Why would you have to prioritize? Just tell the AI everything you need changed and it will do it all at once. Why would you have tech debt? That accumulates because humans can only make changes on a limited scope at a mostly fixed rate. LLMs can already respond to feedback about bugs, features, and incidents, and can even take advice on balancing tradeoffs.
Many of the things you describe are organizational principles designed to compensate for human limitations.