There are still significant limitations: no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models can finally replace searches and Stack Overflow for a lot of my day-to-day programming.
I find this sentiment increasingly worrisome. It seems entirely clear that every last human will be beaten on code design in the coming years (I'm not going to argue whether it's 1 or 5 years away; who cares?).
I wish people would stop holding on to what amounts to nothing, and instead think and talk more about what can be done in a new world. We need good ideas, and I think this could be a place to advance them.
Can you point to _any_ evidence that human software development abilities will be eclipsed by LLMs, other than trying to predict which part of the S-curve we're on?
An LLM sees pagination, it does pagination. After all, an LLM is an algorithm that calculates the probability of the next word in a sequence of words, nothing less and nothing more. An LLM does not think or feel, even though people believe it does, thanking it and using polite words like "please". An LLM generates text based on what it was presented with. That's why it will happily invent research that does not exist, write a review of a product that does not exist, or invent a method that does not exist in a given programming language. And so on.
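To make the "probability of the next word" idea concrete, here's a deliberately tiny sketch: a bigram counter over a toy corpus that samples a successor word by observed frequency. Real LLMs use deep neural networks over subword tokens, not word counts, so this only illustrates the sampling principle, nothing more.

```python
import random
from collections import defaultdict, Counter

# Toy corpus; real models train on vastly more text, as subword tokens.
corpus = "the model predicts the next word and the next word again".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev, rng=random):
    # Sample a successor proportionally to observed counts --
    # the "next word probability" idea in miniature.
    candidates = follows[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

print(next_word("the"))  # "next" twice as likely as "model" in this corpus
```

Note that a sampler like this will also happily emit fluent nonsense: it has no notion of whether a continuation is true, only of whether it is statistically plausible.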