zlacker

1. segpha+J4 2025-05-06 15:34:48
>>meetpa+(OP)
My frustration with using these models for programming in the past has largely been around their tendency to hallucinate APIs that simply don't exist. The Gemini 2.5 models, both pro and flash, seem significantly less susceptible to this than any other model I've tried.

There are still significant limitations: no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models are finally able to replace web searches and Stack Overflow for a lot of my day-to-day programming.

2. redox9+na 2025-05-06 16:02:43
>>segpha+J4
Getting LLMs to know what they don't know is a hard problem. Many attempts at making them refuse to answer questions they don't know have also caused them to refuse to answer things they did in fact know.
3. Volund+hk 2025-05-06 16:59:08
>>redox9+na
> Many attempts at making them refuse to answer what they don't know caused them to refuse to answer things they did in fact know.

Are we sure they know these things, as opposed to being able to consistently guess correctly? With LLMs I'm not sure we even have a clear definition of what it means for a model to "know" something.

4. redox9+Jm 2025-05-06 17:13:44
>>Volund+hk
Yes. You could ask for factual information like "the tallest building in X" and at first it would answer that it did not know. After pressuring it, it would come back with the correct building and height.

The same happened with things where guessing was desirable. For example, given a riddle it would tell you it did not know, or that there wasn't enough information. After pressuring it to answer anyway, it would correctly solve the riddle.

The official Llama 2 finetune was pretty bad with this stuff.

5. Volund+E21 2025-05-06 21:52:05
>>redox9+Jm
> After pressuring it, it would answer with the correct building and height.

And if you bully it enough on something nonsensical it'll give you a wrong answer.

You press it, it takes a guess even though you told it not to, it gets it right, and then you go "see, it knew!"

There's no database hanging out in ChatGPT/Claude/Gemini's weights with a list of cities and their tallest buildings. There's a whole bunch of opaque statistics derived from the content it's been trained on, which means that most of the time it'll come up with the same guess. But there's no difference in process between that highly consistent response when you ask for the tallest building in New York and the one where it hallucinates a Python method that doesn't exist, or suggests glue to keep the cheese on your pizza. It's all the same process to the LLM.
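
To put that in code terms, here's a minimal sketch (GPT-2 via the Hugging Face transformers library as a stand-in for a chat model, with prompts I made up for illustration): the "fact" question and the hallucination bait go through the exact same generate() call, and nothing in it switches over to a lookup path for facts.

    # Minimal sketch: GPT-2 stands in for a chat model, and the prompts are
    # invented for illustration. Both prompts are handled by the same
    # next-token sampling loop -- there is no separate lookup path for facts.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # any causal LM behaves the same way here
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompts = [
        "The tallest building in New York is",  # the "see, it knew!" case
        "The Python list method that sorts and returns a reversed copy is",  # hallucination bait
    ]

    for prompt in prompts:
        inputs = tokenizer(prompt, return_tensors="pt")
        # Same call, same decoding procedure, for both prompts.
        output = model.generate(
            **inputs,
            max_new_tokens=20,
            do_sample=True,
            top_p=0.9,
            pad_token_id=tokenizer.eos_token_id,
        )
        print(tokenizer.decode(output[0], skip_special_tokens=True))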
