There are still significant limitations; no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models are finally able to replace web searches and Stack Overflow for a lot of my day-to-day programming.
I find this sentiment increasingly worrisome. It's entirely clear that every last human will be beaten on code design in the coming years (I'm not going to argue whether it's 1 or 5 years away; who cares?).
I wish people would stop holding on to what amounts to nothing, and instead think and talk more about what can be done in this new world. We need good ideas, and I think this could be a place to advance them.
Can you point to _any_ evidence that human software development abilities will be eclipsed by LLMs, other than trying to predict which part of the S-curve we're on?
It becomes a question of how much you believe it's all just training data, and how much you believe the LLM has pieces that are composable. I've given the question in the link as an interview question and had humans be unable to give as thorough an answer (which I choose to believe is due to specialization elsewhere in the stack). So we're already at a place where some human software development abilities have been eclipsed on some questions.

So even if the underlying algorithms don't improve and the models just ingest more training data, it doesn't seem like a total guess which part of the S-curve we're on: the number of software development questions that LLMs can successfully answer will continue to increase.