zlacker

[parent] [thread] 4 comments
1. ttoino+(OP)[view] [source] 2026-02-03 16:19:31
Cerebras already has GLM 4.7 in the code plans
replies(1): >>vessen+G
2. vessen+G[view] [source] 2026-02-03 16:22:21
>>ttoino+(OP)
Yep. But this is like 10x faster; 3B active parameters.
replies(1): >>ttoino+r3
3. ttoino+r3[view] [source] [discussion] 2026-02-03 16:33:20
>>vessen+G
Cerebras is already 200-800 tps, do you need even faster?
replies(1): >>overfe+Hg
4. overfe+Hg[view] [source] [discussion] 2026-02-03 17:28:01
>>ttoino+r3
Yes! I don't try to read agent tokens as they are generated, so if code generation drops from 1 minute to 6 seconds, I'll be delighted. I'll even take 10s -> 1s speedups. Considering how often I've seen agents spin their wheels on different approaches, faster is always better, at least until models can 1-shot solutions without the repeated "No, wait..." / "Actually..." thinking loops
replies(1): >>pqtyw+291
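The speedups being debated here are just tokens-per-second arithmetic; a minimal sketch, where the 10,000-token turn length and the 8,000 tok/s figure are hypothetical illustrations (only the 200-800 tps range comes from the thread):

```python
def generation_time(tokens: int, tps: float) -> float:
    """Seconds to emit `tokens` at a steady rate of `tps` tokens/second."""
    return tokens / tps

turn = 10_000  # hypothetical agent-turn length in tokens
for tps in (200, 800, 8_000):  # 8,000 stands in for a "10x faster" backend
    print(f"{tps:>5} tok/s -> {generation_time(turn, tps):.1f} s")
```

At 200 tok/s such a turn takes 50 s, which is roughly the "1 minute" case above; a 10x rate cut brings it into single-digit seconds.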
5. pqtyw+291[view] [source] [discussion] 2026-02-03 21:13:41
>>overfe+Hg
> until models can 1-shot solutions without the repeated "No, wait..." / "Actually..." thinking loops

That would imply they'd have to be actually smarter than humans, not just faster and able to scale infinitely. IMHO that's still very far away.
