zlacker

[return to "Gemini 2.5 Pro Preview"]
1. segpha+J4[view] [source] 2025-05-06 15:34:48
>>meetpa+(OP)
My frustration with using these models for programming in the past has largely been around their tendency to hallucinate APIs that simply don't exist. The Gemini 2.5 models, both pro and flash, seem significantly less susceptible to this than any other model I've tried.

There are still significant limitations: no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models are finally able to replace web searches and Stack Overflow for a lot of my day-to-day programming.

◧◩
2. yousif+0C[view] [source] 2025-05-06 18:48:23
>>segpha+J4
The opposite problem is also real. I was using it to edit some code of mine that calls the new OpenAI image API, which is slightly different from the DALL-E API. But Gemini was consistently "fixing" the OpenAI call even when I explained clearly not to do that, since I'm using the new API design. Claude wasn't having that issue.

The models are very impressive. But issues like these make me feel they are still doing more pattern matching (although there's also some magic, don't get me wrong) than fully reasoning over everything correctly, the way you'd expect of a typical human reasoner.

◧◩◪
3. toomuc+zD[view] [source] 2025-05-06 18:56:42
>>yousif+0C
It seems like the fix is straightforward (check the output against a machine-readable spec before providing it to the user), but perhaps I am a rube. This is no different from me clicking through a search result to the underlying page to verify the accuracy of the result that was surfaced.
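
Roughly what I have in mind, as a hand-wavy sketch in Python using the jsonschema package. The schema fragment and the proposed call are invented for illustration; a real check would load the provider's published OpenAPI document instead:

  # Validate a model-proposed request body against a machine-readable spec
  # before showing it to the user. The schema fragment here is hypothetical.
  from jsonschema import ValidationError, validate

  image_request_schema = {
      "type": "object",
      "required": ["model", "prompt"],
      "properties": {
          "model": {"type": "string"},
          "prompt": {"type": "string"},
          "n": {"type": "integer", "minimum": 1},
          "size": {"type": "string"},
      },
      "additionalProperties": False,
  }

  proposed_call = {                   # what the model suggested
      "model": "gpt-image-1",
      "prompt": "a cat reading a spec",
      "resolution": "1024x1024",      # not a key in the schema fragment above
  }

  try:
      validate(instance=proposed_call, schema=image_request_schema)
  except ValidationError as err:
      print("Reject or regenerate this completion:", err.message)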
◧◩◪◨
4. disgru+sE[view] [source] 2025-05-06 19:02:08
>>toomuc+zD
Why coding agents et al. don't make use of the AST through LSP is a question I've been asking myself since the first release of GitHub Copilot.

I assume that it's trickier than it seems as it hasn't happened yet.

◧◩◪◨⬒
5. xmcqdp+U82[view] [source] 2025-05-07 11:11:52
>>disgru+sE
My guess is that it doesn’t work for several reasons.

While we have millions of LOCs to train models on, we don't have that for ASTs. Also, except for Lisp and some macro-supporting languages, the AST is usually not stable at all (it's an internal implementation detail of the compiler). It's also far too sparse in information per token: you need a pile of tokens for even simple operations. The Scala AST for 1 + 2, for example, looks something like this:

Apply(Select(Literal(Constant(1)), TermName("$plus")), List(Literal(Constant(2))))

which is way more tokens than 1 + 2. You could possibly use a token per AST operation, but then you can't train on human language anymore, you'd need a new LLM per PL, and you couldn't solve problem X in language Y based on a solution from language Z.
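
The same blow-up is easy to see in Python, whose AST at least is exposed in the standard library. A throwaway sketch (the exact dump format varies a bit between Python versions):

  import ast

  # One tiny expression becomes a whole tree of named nodes.
  tree = ast.parse("1 + 2", mode="eval")
  print(ast.dump(tree))
  # Expression(body=BinOp(left=Constant(value=1), op=Add(),
  #                       right=Constant(value=2)))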

◧◩◪◨⬒⬓
6. disgru+Ph5[view] [source] 2025-05-08 14:23:14
>>xmcqdp+U82
> While we have millions of LOCs to train models on, we don’t have that for ASTs

Agreed, but that could be generated if it made a big difference.

I do completely take your points about the instability of the AST and its length; those are important facets of this question.

However, what I (and probably others) want is something much, much simpler. Merely (I love not having to implement this, so I can use this word ;) ) check the code with the completion applied (i.e. what the AI proposes) and down-weight completions that increase the number of issues reported by the type-checking/linting/LSP pass.

Honestly, just killing the ones that don't parse properly would be very helpful (I've noticed that both Copilot and the DBX completers are particularly bad at this one).
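
In Python terms, the whole idea is roughly the sketch below. The completions list and count_issues are placeholders; in practice the issue count would come from the project's type checker, linter, or LSP server:

  import ast

  def count_issues(source: str) -> int:
      # Placeholder: wire this up to pyflakes, mypy, or LSP diagnostics.
      return 0

  def rank_completions(prefix: str, suffix: str, completions: list[str]) -> list[str]:
      scored = []
      for completion in completions:
          candidate = prefix + completion + suffix
          try:
              ast.parse(candidate)           # kill anything that doesn't parse
          except SyntaxError:
              continue
          scored.append((count_issues(candidate), completion))
      scored.sort(key=lambda pair: pair[0])  # fewer new issues first
      return [c for _, c in scored]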

[go to top]