zlacker

[return to "Gemini 2.5 Pro Preview"]
1. segpha+J4[view] [source] 2025-05-06 15:34:48
>>meetpa+(OP)
My frustration with using these models for programming in the past has largely been around their tendency to hallucinate APIs that simply don't exist. The Gemini 2.5 models, both pro and flash, seem significantly less susceptible to this than any other model I've tried.

There are still significant limitations: no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models are finally able to replace searches and Stack Overflow for a lot of my day-to-day programming.

◧◩
2. jstumm+jH[view] [source] 2025-05-06 19:23:17
>>segpha+J4
> no amount of prompting will get current models to approach abstraction and architecture the way a person does

I find this sentiment increasingly worrisome. It's entirely clear that every last human will be beaten on code design in the coming years. (I'm not going to argue whether it's 1 or 5 years away; who cares?)

I wish people would just stop holding on to what amounts to nothing, and instead think and talk more about what can be done in a new world. We need good ideas, and I think this could be a place to advance them.

◧◩◪
3. ssalaz+gg1[view] [source] 2025-05-06 23:55:42
>>jstumm+jH
I code with multiple LLMs every day and build products that use LLM tech under the hood. I don't think we're anywhere near LLMs being good at code design. Existing models make _tons_ of basic mistakes and require supervision even for relatively simple coding tasks in popular languages, and it's worse for languages and frameworks that are less represented in public sources of training data. I am _frequently_ having to tell Claude/ChatGPT to clean up basic architectural and design defects. There's no way I would trust this unsupervised.

Can you point to _any_ evidence that human software development abilities will be eclipsed by LLMs, other than trying to predict which part of the S-curve we're on?

◧◩◪◨
4. Arthur+uG1[view] [source] 2025-05-07 05:21:50
>>ssalaz+gg1
I run a software development company with dozens of staff across multiple countries. Gemini has us to the point where we can actually stop hiring for certain roles, and staff have been informed they must make use of these tools or they are surplus to requirements. At the current rate of improvement, I believe we will be operating with far fewer staff in two years' time.
◧◩◪◨⬒
5. nnnnna+1K1[view] [source] 2025-05-07 06:13:46
>>Arthur+uG1
[flagged]
◧◩◪◨⬒⬓
6. Arthur+WK1[view] [source] 2025-05-07 06:22:26
>>nnnnna+1K1
[flagged]
◧◩◪◨⬒⬓⬔
7. numpad+iL2[view] [source] 2025-05-07 15:03:07
>>Arthur+WK1
IMO this is completely "based". Delivering customer value and making money off it is one thing; software companies collectively being a social club and a place for R&D is another, technically a complete tangent to it. It doesn't always matter how the sausages came to be on the served plate. It might be the Costco special that the CEO bought last week and dumped into the pot. It's none of your business to make sure that doesn't happen. The customer knows. It's consensual. Well, maybe not. But none of your business. Literally.

The field of software engineering might be doomed if everyone worked like this user and replaced programmers with machines, or it might not, but those questions are above his pay grade. AI destroying the symbiotic relationship between IT companies and their internal social clubs is a societal issue, a macro-scale problem beyond what the internal regulation mechanisms of free-market economies are expected to solve.

I guess my point is, I don't know whether this guy or his company is real, but it passes my BS detector, and I know for a fact that real medium-sized company CEOs are like this. This is technically what everyone should aspire to be. If you think that's morally wrong, completely and utterly wrong, congratulations on your first job.

◧◩◪◨⬒⬓⬔⧯
8. nnnnna+k93[view] [source] 2025-05-07 17:01:22
>>numpad+iL2
Turning this into a moral discussion is beside the point, a point that both of you missed in your efforts to be based. The moral discussion is also interesting, but I'll leave that be for now. It appears as if I stepped on ArthurStack's toes, but I'll give you the benefit of the doubt and reply.

My point actually has everything to do with making money. Making money is not a viable differentiator in and of itself. You need to put in work on your desired outcomes (or get lucky, or both), and the money might follow. My problem is that a directive such as "software developers need to use tool x" is an _input_ with, at best, a questionable causal relationship to outcome y.

It's not about "social clubs for software developers", but about clueless execs. Now, it's quite possible that he has put in that work and that the outcomes are attributable to that specific input, but judging by his replies here, I wouldn't wager on it. Also, as others have said, if that's the case, replicating their business model just got a whole lot easier.

> This is technically what everyone should aspire to be

No, there are other values besides maximizing utility.

◧◩◪◨⬒⬓⬔⧯▣
9. Arthur+xC3[view] [source] 2025-05-07 19:51:59
>>nnnnna+k93
> My problem is that directives such as "software developers need to use tool x" is an _input_ with, at best, a questionable causal relationship to outcome y.

Total drivel. It is beyond question that the use of these tools increases the capabilities and output of every single developer in the company, in whatever task they are working on, once they understand how to use them. That is why the directive exists.

[go to top]