zlacker

1. yanosh (OP) 2026-02-04 15:21:02
So does that mean the price of LLM inference could drop significantly, and/or that context lengths could increase dramatically?