zlacker

[parent] [thread] 3 comments
1. vessen+(OP)[view] [source] 2026-02-03 16:22:21
Yep. But this is like 10x faster; 3B active parameters.
replies(1): >>ttoino+L2
2. ttoino+L2[view] [source] 2026-02-03 16:33:20
>>vessen+(OP)
Cerebras is already at 200-800 tps; do you need it even faster?
replies(1): >>overfe+1g
3. overfe+1g[view] [source] [discussion] 2026-02-03 17:28:01
>>ttoino+L2
Yes! I don't try to read agent tokens as they're generated, so if code generation drops from 1 minute to 6 seconds, I'll be delighted. I'll even take 10s -> 1s speedups. Considering how often I've seen agents spin their wheels trying different approaches, faster is always better, until models can 1-shot solutions without the repeated "No, wait..." / "Actually..." thinking loops.
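
The speedups here are easy to put in concrete terms. A quick back-of-the-envelope sketch (the 2000-token response size is an illustrative assumption, not a figure from the thread):

```python
# Wall-clock decode time for an agent response at different token rates.
# 2000 tokens is an assumed response size, chosen for illustration only.
def generation_time(tokens: int, tps: float) -> float:
    """Seconds to decode `tokens` tokens at `tps` tokens per second."""
    return tokens / tps

for tps in (200, 800, 2000):
    secs = generation_time(2000, tps)
    print(f"{tps:>5} tps -> {secs:.1f} s for a 2000-token response")
# 200 tps -> 10.0 s, 800 tps -> 2.5 s, 2000 tps -> 1.0 s
```

So a ~10x jump in decode speed is roughly the difference between waiting 10 seconds and waiting 1 second per agent turn, which compounds quickly when the agent takes many turns.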
replies(1): >>pqtyw+m81
4. pqtyw+m81[view] [source] [discussion] 2026-02-03 21:13:41
>>overfe+1g
> until models can 1-shot solutions without the repeated "No, wait..." / "Actually..." thinking loops

That would imply they'd have to be actually smarter than humans, not just faster and infinitely scalable. IMHO that's still very far away.
