zlacker

[return to "Gemini 2.5 Pro Preview"]
1. segpha+J4 2025-05-06 15:34:48
>>meetpa+(OP)
In the past, my frustration with using these models for programming has largely been their tendency to hallucinate APIs that simply don't exist. The Gemini 2.5 models, both Pro and Flash, seem significantly less susceptible to this than any other model I've tried.

There are still significant limitations: no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models can finally replace web searches and Stack Overflow for a lot of my day-to-day programming.

2. jstumm+jH 2025-05-06 19:23:17
>>segpha+J4
> no amount of prompting will get current models to approach abstraction and architecture the way a person does

I find this sentiment increasingly worrisome. It's entirely clear that every last human will be beaten on code design in the coming years (I'm not going to argue whether it's 1 or 5 years away; who cares?).

I wish people would just stop holding on to what amounts to nothing and instead think and talk more about what can be done in a new world. We need good ideas, and I think this could be a place to advance them.

3. fullst+WQ1 2025-05-07 07:44:18
>>jstumm+jH
Code design? Perhaps. But how are you going to inform a model of every sprint meeting, standup, decision, commit, feature, and spec that is part of an existing product? It's no longer a problem of intelligence or correctness, it's a problem of context, and I DON'T mean context window. Imagine onboarding your company's best programmer to a new project - even they will have dozens of questions and need at least a week before they can contribute productively. Even then, they're working with a markedly smaller scope than the whole project. How does this process translate to an LLM? I'm not sure.
4. valent+GT1 2025-05-07 08:16:15
>>fullst+WQ1
Yeah, this is the problem.

The LLM needs vast amounts of training data. And that data needs to carry context that goes beyond a simple task, and well beyond a mere description of the end goal.

To give just one example: in a big company, teams will build software differently depending on the relationships between teams and people. So you would basically need to train the LLM on the company itself: the "air" or social atmosphere, the code, and everything related to it. It's doable, but "in a few years" is a stretch. Even a few decades seems ambitious.
