zlacker

[return to "Gemini 2.5 Pro Preview"]
1. segpha+J4 2025-05-06 15:34:48
>>meetpa+(OP)
My frustration with using these models for programming in the past has largely been around their tendency to hallucinate APIs that simply don't exist. The Gemini 2.5 models, both pro and flash, seem significantly less susceptible to this than any other model I've tried.

There are still significant limitations: no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models are finally able to replace searches and Stack Overflow for a lot of my day-to-day programming.

2. jstumm+jH 2025-05-06 19:23:17
>>segpha+J4
> no amount of prompting will get current models to approach abstraction and architecture the way a person does

I find this sentiment increasingly worrisome. It's entirely clear that every last human will be beaten on code design in the coming years (I'm not going to argue whether it's 1 or 5 years away, who cares?)

I wish people would just stop holding on to what amounts to nothing, and instead think and talk more about what can be done in a new world. We need good ideas, and I think this could be a place to advance them.

3. ssalaz+gg1 2025-05-06 23:55:42
>>jstumm+jH
I code with multiple LLMs every day and build products that use LLM tech under the hood. I don't think we're anywhere near LLMs being good at code design. Existing models make _tons_ of basic mistakes and require supervision even for relatively simple coding tasks in popular languages, and it's worse for languages and frameworks that are less represented in public sources of training data. I am _frequently_ having to tell Claude/ChatGPT to clean up basic architectural and design defects. There's no way I would trust this unsupervised.

Can you point to _any_ evidence to support that human software development abilities will be eclipsed by LLMs other than trying to predict which part of the S-curve we're on?

4. Arthur+uG1 2025-05-07 05:21:50
>>ssalaz+gg1
I run a software development company with dozens of staff across multiple countries. Gemini has brought us to the point where we can actually stop hiring for certain roles, and staff have been informed they must make use of these tools or they are surplus to requirements. At the current rate of improvement, I believe we will be operating with far fewer staff in 2 years' time.
5. realus+dT1 2025-05-07 08:10:57
>>Arthur+uG1
In your case I'd be worried rather than happy: it means your lunch is getting eaten as a company.

Personally, I'm at a software company where this new LLM wave hasn't made much of a difference.

6. Arthur+HW1 2025-05-07 08:51:21
>>realus+dT1
Not at all. We don't care whether the software is written by a machine or by a human. If the machine does it cheaper, to a better, more consistent standard, then it's a win for us.
7. realus+2Y1 2025-05-07 09:09:29
>>Arthur+HW1
You may not care, but that's what the market is paying you for. You aren't just replacing developers, you are replacing yourself.

Cheaper organisations that couldn't compete with you before will now be able to, and they will drive your revenue down.

8. Arthur+tY1 2025-05-07 09:14:20
>>realus+2Y1
That might be the case if we were an organisation that resisted change and were not actively pursuing a reduction in our staff count via AI, but we aren't. In the AI era our company will thrive, because we are no longer constrained by needing to find a specific type of human talent to build the complicated systems we develop.