zlacker

[parent] [thread] 6 comments
1. vessen+(OP)[view] [source] 2026-02-03 16:15:38
3B active parameters, and only slightly worse than GLM 4.7 on benchmarks. That's pretty amazing! With better orchestration tools being deployed, I've been wondering if faster, dumber coding agents paired with wise orchestrators might be overall faster than using, say, Opus 4.5 at the bottom for coding. At the very least we might want to delegate simple tasks to these guys.
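(To be concrete about what I mean by a wise orchestrator: something like the toy router below. The model names and the "is this hard?" heuristic are completely made up, it's just the shape of the idea.)

    // Toy sketch of "wise orchestrator, cheap workers": route a coding task
    // to a small fast model unless a naive heuristic flags it as hard.
    // Model names and the threshold are placeholders, not benchmarks.
    type Task = { description: string; filesTouched: number };

    const CHEAP_MODEL = "small-coder-3b"; // hypothetical fast/dumb agent
    const SMART_MODEL = "opus-4.5";       // hypothetical expensive model

    function pickModel(task: Task): string {
      // Multi-file or design-heavy work goes to the big model; the rest doesn't.
      const looksHard =
        task.filesTouched > 3 ||
        /refactor|architecture|design/i.test(task.description);
      return looksHard ? SMART_MODEL : CHEAP_MODEL;
    }

    // A one-file bugfix gets delegated to the cheap agent.
    console.log(pickModel({ description: "fix off-by-one in pagination", filesTouched: 1 }));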
replies(3): >>doctor+e4 >>markab+95 >>Steven+zQ3
2. doctor+e4[view] [source] 2026-02-03 16:33:06
>>vessen+(OP)
Time will tell. All this stuff will get more adoption when Anthropic, Google and OpenAI raise prices.
replies(1): >>Alifat+mj
3. markab+95[view] [source] 2026-02-03 16:36:07
>>vessen+(OP)
It's getting a lot easier to do this using sub-agents with tools in Claude. I have a fleet of Mastra agents (TypeScript). I use those agents inside my project as CLI tools to do repetitive tasks that gobble tokens, such as scanning code, web search, library search, and even SourceGraph traversal.

Overall, it's allowed me to maintain more consistent workflows as I'm less dependent on Opus. Now that Mastra has introduced the concept of Workspaces, which allow for more agentic development, this approach has become even more powerful.
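Roughly, each of those CLI tools is just one agent plus a tiny entry point, something like the sketch below (simplified; the Mastra import paths, signatures, and model choice here are from memory and may not match the current API):

    // One token-gobbling sub-agent exposed as a CLI tool. The orchestrator
    // (e.g. Claude Code via its shell tool) invokes it and only gets the
    // compact final answer back, so the main context stays small.
    // NOTE: Mastra API details shown here are an assumption; check the docs.
    import { Agent } from "@mastra/core/agent";
    import { openai } from "@ai-sdk/openai";

    const codeScanner = new Agent({
      name: "code-scanner",
      instructions:
        "Scan the files or question you are given and return a terse structural summary.",
      model: openai("gpt-4o-mini"), // placeholder: any cheap, fast model works
    });

    async function main() {
      const prompt = process.argv.slice(2).join(" ");
      if (!prompt) {
        console.error("usage: scan <files or question>");
        process.exit(1);
      }
      const result = await codeScanner.generate(prompt);
      // Print only the final text, not the token-heavy intermediate trace.
      console.log(result.text);
    }

    main().catch((err) => {
      console.error(err);
      process.exit(1);
    });

Then the orchestrator just runs it (e.g. `npx tsx scan.ts src/`) instead of reading everything into its own context.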

replies(1): >>solumu+if
4. solumu+if[view] [source] [discussion] 2026-02-03 17:18:06
>>markab+95
Are you just exposing Mastra CLI commands to Claude Code in md context? I'd love it if you could elaborate on this when you have time.
replies(1): >>adrian+Hp
5. Alifat+mj[view] [source] [discussion] 2026-02-03 17:34:19
>>doctor+e4
They can only raise prices as long as people keep buying their subscriptions / paying for their API. The Chinese labs are closing in on the SOTA models (I would say they're already there) and offer insanely cheap subscriptions. Vote with your wallet.
6. adrian+Hp[view] [source] [discussion] 2026-02-03 17:57:46
>>solumu+if
Seconded!
7. Steven+zQ3[view] [source] 2026-02-04 16:19:02
>>vessen+(OP)
I tried Coder yesterday with OpenCode... didn't have a great experience. It got caught in a loop reading a single file over and over until the context filled up. GLM 4.7 has been crushing it so far. One's a thinking model and the other isn't, so I'm sure that's part of it.