zlacker

1. ttoino+(OP) 2026-02-03 16:33:20
Cerebras is already at 200-800 tps; do you need it even faster?
replies(1): >>overfe+gd
2. overfe+gd 2026-02-03 17:28:01
>>ttoino+(OP)
Yes! I don't try to read agent tokens as they are generated, so if code generation drops from 1 minute to 6 seconds, I'll be delighted. I'll even take 10s -> 1s speedups. Considering how often I've seen agents spin their wheels trying different approaches, faster is always better, until models can 1-shot solutions without the repeated "No, wait..." / "Actually..." thinking loops.
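
A quick back-of-the-envelope in Python (the 12,000-token count and the rates beyond 800 tps are assumptions, picked so that 200 tps works out to about a minute, matching the 1 minute -> 6 seconds figure):

    # Wall-clock time for one fixed generation at different token rates.
    # tokens = 12_000 is an assumed count, chosen so 200 tps ~ 1 minute;
    # the rates beyond 800 tps are hypothetical.
    tokens = 12_000
    for tps in (200, 800, 2_000, 8_000):
        print(f"{tps:>5} tps -> {tokens / tps:6.1f} s")
    #   200 tps ->   60.0 s
    #   800 tps ->   15.0 s
    #  2000 tps ->    6.0 s
    #  8000 tps ->    1.5 s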
replies(1): >>pqtyw+B51
3. pqtyw+B51 2026-02-03 21:13:41
>>overfe+gd
> until models can 1-shot solutions without the repeated "No, wait..." / "Actually..." thinking loops

That would imply they'd have to be actually smarter than humans, not just faster and able to scale infinitely. IMHO that's still very far away.
