zlacker

[return to "Gemini 2.5 Pro Preview"]
1. segpha+J4[view] [source] 2025-05-06 15:34:48
>>meetpa+(OP)
My frustration with using these models for programming in the past has largely been around their tendency to hallucinate APIs that simply don't exist. The Gemini 2.5 models, both pro and flash, seem significantly less susceptible to this than any other model I've tried.

There are still significant limitations; no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models are finally able to replace searches and Stack Overflow for a lot of my day-to-day programming.

2. jstumm+jH[view] [source] 2025-05-06 19:23:17
>>segpha+J4
> no amount of prompting will get current models to approach abstraction and architecture the way a person does

I find this sentiment increasingly worrisome. It's entirely clear that every last human will be beaten on code design in the upcoming years (I am not going to argue if it's 1 or 5 years away, who cares?)

I wish people would just stop holding on to what amounts to nothing, and think and talk more about what can be done in a new world. We need good ideas, and I think this could be a place to advance them.

3. DanHul+2P[view] [source] 2025-05-06 20:14:54
>>jstumm+jH
> It's entirely clear that every last human will be beaten on code design in the upcoming years

Citation needed. In fact, I think this pretty clearly hits the "extraordinary claims require extraordinary evidence" bar.

4. mark_l+di1[view] [source] 2025-05-07 00:17:42
>>DanHul+2P
I recently asked o4-mini-high for a system design of something moderately complicated, providing only about 4 paragraphs of prompt for what I wanted. I thought the design was very good, as was the code it wrote when I asked it to implement the design. One caveat: it did a much better job implementing the design in Python than in Common Lisp, where I had to correct the generated code.

My friend, we are living in a world of exponential increase of AI capability, at least for the last few years - who knows what the future will bring!

5. gtirlo+Io1[view] [source] 2025-05-07 01:34:23
>>mark_l+di1
That's your extraordinary evidence?
6. mark_l+xp1[view] [source] 2025-05-07 01:43:36
>>gtirlo+Io1
Nope, just my opinion, derived from watching monthly and weekly exponential improvement over a few-year period. I worked through at least two AI winters since 1982, so current progress is good to see.
7. namari+7b2[view] [source] 2025-05-07 11:37:45
>>mark_l+xp1
Exponential over which metric, exactly? Training dataset size and compute required, sure, those have grown exponentially. But has any measure of capability?

Because exponentially growing costs with linear, or unmeasurable, improvements is not a great trajectory.
