zlacker

[return to "Gemini 2.5 Pro Preview"]
1. segpha+J4 2025-05-06 15:34:48
>>meetpa+(OP)
My frustration with using these models for programming in the past has largely been around their tendency to hallucinate APIs that simply don't exist. The Gemini 2.5 models, both pro and flash, seem significantly less susceptible to this than any other model I've tried.

There are still significant limitations: no amount of prompting will get current models to approach abstraction and architecture the way a person does. But I'm finding that these Gemini models are finally able to replace searches and Stack Overflow for a lot of my day-to-day programming.

2. ableto+gL 2025-05-06 19:51:20
>>segpha+J4
I feel like there are two realities right now, where half the people say LLMs don't do anything well and the other half are using LLMs to the max. Can everybody preface what stack they are using or what exactly they are doing, so we can better determine why it's not working for you? Maybe even include what your expectations are? Maybe even tell us what models you're using? How are you prompting the models exactly?

I find that for 90% of the things I'm doing, LLMs remove 90% of the starting friction and let me get to the part that I'm actually interested in. Of course, I also develop professionally in a Python stack, and LLMs are one-shotting a ton of stuff. My work is standard data pipelines and web apps.

I'm a tech lead at a FAANG-adjacent company with 11 YOE, and the systems I work with are directly responsible for about half a billion dollars a year in transactions, and growing. You could argue maybe my standards are lower than yours, but I think if I was making deadly mistakes the company would have been on my ass by now, or my peers would have caught them.

Everybody that I work with is getting valuable output from LLMs. We are using all the latest OpenAI models and have a business relationship with OpenAI. I don't think I'm even that good at prompting and mostly rely on "vibes". Half of the time I'm pointing the model to an example and telling it "in the style of X, do X for me".

I feel like comments like these almost seem gaslight-y, or maybe there's just a major expectation mismatch between people. Are you expecting LLMs to just do exactly what you say, and your entire job is to sit back and prompt the LLM? Maybe I'm just used to shit code, but I've looked at many code bases, and there is a huge variance in quality, with the average being pretty poor. The average code that AI pumps out is much better.

3. oparin+DU 2025-05-06 20:53:40
>>ableto+gL
I've had the opposite experience. Despite trying various prompts and models, I'm still searching for that mythical 10x productivity boost others claim.

I use it mostly for Golang and Rust; I work building cloud infrastructure automation tools.

I'll try to give some examples. They may seem overly specific, but they're the first things that popped into my head when thinking about the subject.

Personally, I found that LLMs consistently struggle with dependency injection patterns. They'll generate tightly coupled services that directly instantiate dependencies rather than accepting interfaces, making testing nearly impossible.

If I ask them to generate code and also their respective unit tests, they'll often just create a bunch of mocks or start importing mock libraries to compensate for their faulty implementation, rather than fixing the underlying architectural issues.

They consistently fail to understand architecture patterns, generating code where infrastructure concerns bleed into domain logic. When corrected, they'll make surface-level changes while missing the fundamental design principle of accepting interfaces rather than concrete implementations, even when explicitly instructed to move things like side effects to the application edges.
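For anyone unfamiliar with the pattern being described, here's a minimal Go sketch (the names are made up for illustration, not from any real codebase): the consumer accepts a small interface instead of instantiating its dependency, so a test can pass a hand-written fake with no mocking library at all:

```go
package main

import "fmt"

// Clock is the only capability Greeter needs; the real time source
// stays at the application edge and is injected from main.
type Clock interface {
	Hour() int
}

// Greeter accepts the interface rather than constructing a concrete
// clock itself (the tightly coupled version LLMs tend to emit).
type Greeter struct {
	clock Clock
}

func NewGreeter(c Clock) *Greeter { return &Greeter{clock: c} }

func (g *Greeter) Greeting() string {
	if g.clock.Hour() < 12 {
		return "good morning"
	}
	return "good afternoon"
}

// fixedClock is a hand-written fake for tests -- no mock library needed.
type fixedClock struct{ hour int }

func (f fixedClock) Hour() int { return f.hour }

func main() {
	fmt.Println(NewGreeter(fixedClock{hour: 9}).Greeting()) // good morning
}
```

The tightly coupled version would construct the real clock inside NewGreeter itself, which is exactly what makes it untestable without reaching for mocks.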

Despite tailoring prompts for different models based on guides and personal experience, I often spend 10+ minutes correcting the LLM's output when I could have written the functionality myself in half the time.

No, I'm not expecting LLMs to replace my job. I'm expecting them to produce code that follows fundamental design principles without requiring extensive rewriting. There's a vast middle ground between "LLMs do nothing well" and the productivity revolution being claimed.

That being said, I'm glad it's working out so well for you; I really wish I had the same experience.

4. ableto+uV 2025-05-06 21:00:10
>>oparin+DU
> I use it mostly for Golang and Rust

I'm starting to suspect this is the issue. Neither of these languages is in the top 5, so there is probably less to train on. It'd be interesting to see whether this improves over time, or whether the gap between languages becomes even more intense as it becomes favorable to use a language simply because LLMs are so much better at it.

There are a lot of interesting discussions to be had here:

- if the efficiency gains are real and LLMs don't improve in lesser-used languages, one should expect that companies that chose obscure languages and tech stacks die out as they become a lot less competitive against stacks that are more compatible with LLMs

- if the efficiency gains are real, this might disincentivize new language adoption and creation, unless the folks training models somehow address this

- languages like Python with higher output acceptance rates are probably going to become even more compatible with LLMs at a faster rate, if we extrapolate that positive reinforcement is probably more valuable than negative reinforcement for LLMs

5. Implic+Y81 2025-05-06 22:42:39
>>ableto+uV
I'm also suspecting this has a lot to do with the dichotomy between the "omg llms are amazing at code tasks" and "wtf are these people using these llms for it's trash" takes.

As someone who works primarily within the Laravel stack, in PHP, I find the LLMs wildly effective. That's not to say there aren't warts - but my productivity has skyrocketed.

But it's become clear that when you venture into the weeds of things that aren't very mainstream you're going to get wildly more hallucinations and solutions that are puzzling.

Another observation: I believe that when you get outside of your expertise, you're likely to rack up a corresponding amount of 'waste' time, where the LLM spits out solutions that an expert in the domain would immediately recognize as problematic, but that the non-expert will read and conclude seem reasonable, or, worse, not even look at before trying to use.

100% of the time that I've tried to get Claude/Gemini/ChatGPT to "one shot" a whole feature or refactor, it's been a waste of time and tokens. But when I've spent even a minimal amount of energy to focus it in on the task and curate the context? Tremendously effective most times. This also requires me to do enough mental work that I probably have an idea of how things should work out, which primes my ability to parse the proposed solutions/code and pick up the pieces.

Another good flow is to prompt the LLM (in this case Claude Code, or something with MCP/filesystem access) with the feature/refactor/request, asking it to draw up an initial plan of implementation to feed to itself. Iterate on that plan as needed, then start a new session/context for each task/item on the plan, keeping a running {TASK_NAME}_WORKBOOK.md (which you task the LLM to keep up to date with the relevant details) and using the workbook to get each new session up to speed.
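To make that workbook flow concrete, here's a hypothetical sketch of what such a file might look like; the task name, headings, and contents are all invented for illustration, not a prescribed format:

```markdown
# AUTH_REFACTOR_WORKBOOK.md (hypothetical example)

## Goal
Move session handling out of the controllers into a dedicated service.

## Plan
1. [x] Extract a SessionService interface
2. [ ] Port the login controller to the service
3. [ ] Remove the old helper calls

## Decisions / context for future sessions
- Kept the legacy cookie name for backwards compatibility.
- New tests live alongside the service, not the controllers.

## Next up
Item 2 of the Plan.
```

Each fresh session gets pointed at this file first, so the model re-acquires the plan and prior decisions without you re-explaining them.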

Also, this is just a hunch, but I'm generally a nocturnal creature and tend to be working from the evening into the early morning. Once 8am PST rolls around, I really feel like Claude (in particular) just turns into mush. Responses get slower, and it seems to lose context where it otherwise wouldn't, getting off topic or having to re-read files it should already have in context. (Note: I'm pretty diligent about refreshing/working with the context, and something happens in the 'work' hours to make it terrible.)

I'd imagine we're going to end up with language-specific LLMs (though I have no idea, it just seems logical to me) that a 'main' model pushes tasks/tool usage to. We don't need our "coding" LLMs to also be proficient in oceanic tidal patterns and 1800s boxing history. Those are all parameters that could have been better spent on the code.
