zlacker

1. ethmar+(OP)[view] [source] 2026-02-04 21:55:28
If encoding more learned languages, grammars, and dictionaries makes the model bigger, it will also increase latency. Try running a 1B-parameter model locally, then try a 500B-parameter model on the same hardware: you'll notice latency has rather a lot to do with model size.
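A rough sketch of why: autoregressive decoding is usually memory-bandwidth-bound, so per-token latency scales roughly with the bytes of weights streamed per token. This is a back-of-envelope estimate, not a benchmark; the bandwidth figure and bytes-per-parameter below are hypothetical assumptions.

```python
def decode_latency_ms(params_billions, bytes_per_param=2, bandwidth_gbs=100):
    """Rough per-token decode latency, assuming all weights are read
    from memory once per token (memory-bandwidth-bound regime).
    bytes_per_param=2 assumes fp16/bf16; bandwidth_gbs is hypothetical."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return model_bytes / (bandwidth_gbs * 1e9) * 1000  # milliseconds per token

print(decode_latency_ms(1))    # 1B model: ~20 ms/token
print(decode_latency_ms(500))  # 500B model: ~10,000 ms/token (500x slower)
```

Same hardware, 500x the weights to stream per token, so roughly 500x the per-token latency (ignoring batching, quantization, and offloading effects).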