There are still significant limitations: no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models can finally replace search and Stack Overflow for a lot of my day-to-day programming.
The models are very impressive. But issues like these still make me feel they are doing more pattern matching (although there's also some magic, don't get me wrong) than fully reasoning over everything correctly, the way you'd expect of a typical human reasoner.
I assume it's trickier than it seems, given that it hasn't happened yet.
While we have millions of LOCs to train models on, we don't have that for ASTs. Also, except for Lisp and some macro-supporting languages, the AST is not usually stable at all (it's an internal implementation detail). It's also way too sparse, because you need a pile of tokens for even simple operations. The Scala AST for 1 + 2, for example, probably looks something like this:
Apply(Select(Literal(Constant(1)), TermName("$plus")), List(Literal(Constant(2))))
which is way more tokens than 1 + 2. You could possibly use one token per AST node, but then you can't train on human language anymore, you need a new LLM per programming language, and you can't solve problem X in language Y based on a solution from language Z.
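If you want to see the blow-up concretely, here's a minimal sketch (assuming Scala 2.13 with scala-reflect on the classpath; the object name and use of quasiquotes are just for illustration) that prints the raw tree for 1 + 2:

    import scala.reflect.runtime.universe._

    object ShowAst {
      def main(args: Array[String]): Unit = {
        // Build the tree for the expression and print its raw structural form.
        val tree = q"1 + 2"
        println(showRaw(tree))
        // Prints (roughly):
        // Apply(Select(Literal(Constant(1)), TermName("$plus")), List(Literal(Constant(2))))
      }
    }

Five characters of source turn into roughly a dozen structural tokens, which is the sparsity problem in a nutshell.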