zlacker

[return to "Claude Code is suddenly everywhere inside Microsoft"]
1. paxys+Xv 2026-02-02 15:18:53
>>Anon84+(OP)
For one reason or another everyone seems to be sleeping on Gemini. I have been exclusively using Gemini 3 Flash to code these days and it stands up right alongside Opus and others while having a much smaller, faster and cheaper footprint. Combine it with Antigravity and you're basically using a cheat code.
2. raluse+Hy 2026-02-02 15:34:45
>>paxys+Xv
I think Gemini is an excellent model; it's just not a particularly great agent. One reason is that its code output is often structured as if it's answering a question rather than generating production code. It leaves comments everywhere, often numbered, which is not only annoying but also only makes sense if the numbering starts within the frame of reference of the "question" it's "answering".

It's also just not as good at being self-directed and doing the rest of the agent-like behaviors we expect, e.g. breaking work down into todo lists, determining the appropriate scope of work to accomplish, proper tool calling, etc.
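
[For context on what those agent-like behaviors involve: a coding agent is typically a loop the harness runs around the model, in which the model can call tools (read files, run commands, maintain a todo list) and sees the results before deciding its next step. A minimal sketch, assuming a generic chat-with-tools API; call_model and the tool set below are hypothetical, not any particular CLI's code.]

    # Minimal agent-loop sketch; call_model stands in for a provider API.
    def call_model(messages, tools):
        """Placeholder: returns either a tool call
        ({"type": "tool_call", "name": ..., "arguments": {...}})
        or a final answer ({"type": "text", "content": ...})."""
        raise NotImplementedError

    # Illustrative tool set, not a real CLI's.
    TOOLS = {
        "update_todo_list": lambda items: {"todos": items},     # track plan and scope
        "read_file":        lambda path: open(path).read(),     # inspect the codebase
        "run_shell":        lambda cmd: f"(would run: {cmd})",  # build / test
    }

    def run_agent(user_request):
        messages = [{"role": "user", "content": user_request}]
        while True:
            reply = call_model(messages, tools=list(TOOLS))
            messages.append({"role": "assistant", "content": reply})
            if reply["type"] != "tool_call":
                return reply["content"]  # model decided the work is done
            result = TOOLS[reply["name"]](**reply["arguments"])
            # Feed the tool result back so the model can plan its next step.
            messages.append({"role": "tool", "name": reply["name"],
                             "content": str(result)})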

3. freedo+Cz 2026-02-02 15:40:02
>>raluse+Hy
Yeah, you may have nailed it. Gemini is a good model, but in the Gemini CLI, with a prompt like "I'd like to add <feature x> support. What are my options? Don't write any code yet," it will skip right past telling me my options and go ahead and implement whatever it feels like. Afterward it will print out a list of possible approaches and then tell you why it chose the one it did.

Codex is the best at following instructions IME. Claude is pretty good too but is a little more "creative" than Codex at trying to reinterpret my prompt to get at what I "probably" meant rather than what I actually said.

4. michae+hO 2026-02-02 16:47:11
>>freedo+Cz
Can you (or anyone) explain how this might be? The "agent" is just a passthrough for the model, no? How is one CLI/TUI tool better than any other, given the same model that it's passing your user input to?

I am familiar with Copilot CLI (using models from different providers), OpenCode doing the same, and Claude with just the Anthropic models, but if I ask all 3 the same thing using the same Anthropic model, I SHOULD be getting roughly the same output, modulo LLM nondeterminism, right?

5. taylor+4v1 2026-02-02 20:11:47
>>michae+hO
maybe different preparatory "system" prompts?
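
[That is the likely explanation: the CLI isn't a bare passthrough. Each harness wraps the same user message in its own system prompt, tool definitions, and injected context before the model sees it, so the "same" input becomes a materially different request. A hypothetical sketch; the prompts and tool names below are invented for illustration, not taken from any real harness.]

    # Same user input, wrapped by two different (made-up) harnesses.
    USER_INPUT = ("I'd like to add <feature x> support. "
                  "What are my options? Don't write any code yet.")

    def build_request(system_prompt, tools, user_input):
        return {
            "system": system_prompt,
            "tools": tools,
            "messages": [{"role": "user", "content": user_input}],
        }

    harness_a = build_request(
        system_prompt="You are a coding agent. Plan first, confirm scope, "
                      "and never edit files unless the user asks you to.",
        tools=["read_file", "edit_file", "run_shell", "update_todo_list"],
        user_input=USER_INPUT,
    )

    harness_b = build_request(
        system_prompt="Resolve the user's task end-to-end with minimal "
                      "back-and-forth. Prefer acting over asking.",
        tools=["read_file", "edit_file", "run_shell"],
        user_input=USER_INPUT,
    )

    # Same model, same user text, but different surrounding instructions and
    # tool schemas, so the sampled behavior (ask first vs. act) can differ too.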