There are still significant limitations: no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models are finally able to replace search and Stack Overflow for a lot of my day-to-day programming.
I find this sentiment increasingly worrisome. It's entirely clear that every last human will be beaten on code design in the coming years (I am not going to argue whether it's 1 or 5 years away, who cares?)
I wish people would just stop holding on to what amounts to nothing, and think and talk more about what can be done in a new world. We need good ideas, and I think this could be a place to advance them.
LOLLLLL. You see a good one-shot demo and imagine an upward line; I work with LLM assistance every day and see... an asymptote (one that only budges with exponential power expenditure). As they say in sailing, you'll never win the race by following the boat in front of you, which is exactly what every single LLM does: sophisticated modeling of prior behavior. Innovation is not their strong suit, LOL.
Perfect example: I cannot for the life of me get any LLM to stick with TDD, building one feature at a time, which I know produces superior code (both as a human and as an LLM!). Prompting will get them to do it for one or two cycles, and then they start regressing to the crap mean, because that's what they were trained on. And since it's the rare dev who sticks with TDD, for whatever reason, that's exactly what the LLM reproduces. Which is absolutely subpar.
I'm not even joking: every single coding LLM would improve immeasurably if the model were refined to just 1) write a SINGLE test expectation, 2) watch it fail (to prove the test is valid), 3) build the feature, 4) work on it until the test passes, 5) repeat until the app requirements are done. Anything already built that gets broken by the new work is flagged by the unit test suite immediately and can be fixed before the problem gets too complex.
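To make that loop concrete, here's a minimal sketch of the red-green cycle as a harness in Python. Everything here is an assumption rather than any real API: ask_llm is a placeholder for whatever model client you use, and the file names and prompts are made up for illustration.

```python
import subprocess
import sys

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call -- swap in your actual model client."""
    raise NotImplementedError("wire up your provider's API here")

def suite_is_green() -> bool:
    """Run the whole test suite; True iff every test passes."""
    result = subprocess.run([sys.executable, "-m", "pytest", "-q"])
    return result.returncode == 0

def tdd_cycle(requirement: str, max_attempts: int = 5) -> None:
    # 1) Ask for exactly ONE test expectation for the next feature.
    test_code = ask_llm(f"Write one pytest test (a single assertion) for: {requirement}")
    with open("test_feature.py", "a") as f:  # assumed file layout
        f.write("\n" + test_code)

    # 2) Watch it fail: a test that passes before any code exists proves nothing.
    if suite_is_green():
        raise RuntimeError("new test passed with no implementation; test is invalid")

    # 3) and 4) Build the feature and iterate until the whole suite is green.
    for attempt in range(max_attempts):
        impl = ask_llm(f"Make the failing test pass (attempt {attempt + 1}): {requirement}")
        with open("feature.py", "w") as f:  # assumed module under test
            f.write(impl)
        if suite_is_green():
            return  # green across the full suite, so regressions surface here too
    raise RuntimeError("could not reach green within the attempt budget")

# 5) Repeat: call tdd_cycle() once per requirement until the app is done.
```

The reason the harness runs the full suite rather than just the new test is exactly the point above: prior work that the new code breaks shows up as a red test immediately, while the diff is still small enough to reason about.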
LLMs also often "lose the plot", and that's not even a context limit problem; they aren't conscious and don't have wills, so their work eventually drifts off course or falls into these weird flip-flop states.
But sure, with an infinite amount of compute and an infinite amount of training data, anything is possible.