zlacker

[return to "Gemini 2.5 Pro Preview"]
1. segpha+J4 2025-05-06 15:34:48
>>meetpa+(OP)
My frustration with using these models for programming in the past has largely been their tendency to hallucinate APIs that simply don't exist. The Gemini 2.5 models, both Pro and Flash, seem significantly less susceptible to this than any other model I've tried.

There are still significant limitations: no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models are finally able to replace searches and Stack Overflow for a lot of my day-to-day programming.

2. jstumm+jH 2025-05-06 19:23:17
>>segpha+J4
> no amount of prompting will get current models to approach abstraction and architecture the way a person does

I find this sentiment increasingly worrisome. It's entirely clear that every last human will be beaten on code design in the coming years (I'm not going to argue whether it's 1 or 5 years away; who cares?).

I wish people would just stop holding on to what amounts to nothing, and instead think and talk more about what can be done in this new world. We need good ideas, and I think this could be a place to advance them.

3. DanHul+2P 2025-05-06 20:14:54
>>jstumm+jH
> It's entirely clear that every last human will be beaten on code design in the upcoming years

Citation needed. In fact, I think this pretty clearly hits the "extraordinary claims require extraordinary evidence" bar.

4. aposm+zs3 2025-05-07 18:52:13
>>DanHul+2P
I had a coworker making very similar claims recently - one of the more AI-positive engineers on my team (a big part of my department's job is assessing new/novel tech for real-world value vs just hype). I was stunned when I actually saw the output of this process, which was a multi-page report describing the architecture of an internal system that arguably needed an overhaul. I try to keep an open mind, but this report was full of factual mistakes, misunderstandings, and when it did manage to accurately describe aspects of this system's design/architecture, it made only the most surface-level comments about boilerplate code and common idioms, without displaying any understanding of the actual architecture or implications of the decisions being made. Not only this coworker but several other more junior engineers on my team proclaimed this to be an example of the amazing advancement of AI ... which made me realize that the people claiming that LLMs have some superhuman ability to understand and design computer systems are those who have never really understood it themselves. In many cases these are people who have built their careers on copying and pasting code snippets from stack overflow, etc., and now find LLMs impressive because they're a quicker and easier way to do the same.