zlacker

[return to "Zebra-Llama: Towards Efficient Hybrid Models"]
1. aditya+79 2025-12-06 21:45:45
>>mirrir+(OP)
Given perverse incentives and the long history of models over-claiming accuracy, it's very hard to believe anything until it's open source and can be tested.

that being said, I do very much believe that the computational efficiency of models is going to go up [correction] drastically over the coming months, which poses interesting questions about nvidia's throne

*previously miswrote and said computational efficiency would go down

2. daniel+u9 2025-12-06 21:49:54
>>aditya+79
I think you mean computational efficiency will go _up_ in the future. To your last point: Jevons paradox might apply.
3. aditya+za 2025-12-06 22:00:06
>>daniel+u9
yup, that's what I meant! Jevons paradox applies to resource usage in general, not to a specific company's dominance

if computational efficiency goes up (thanks for the correction) and CPU inference becomes viable for most practical applications, GPUs (or other accelerators) may become unnecessary for most workloads

4. atq211+6e 2025-12-06 22:29:37
>>aditya+za
Discrete GPUs still have an advantage in memory bandwidth. Though this might push platforms like laptops towards higher bandwidths, which would be nice.
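The bandwidth point can be made concrete with a back-of-envelope roofline: autoregressive decode is usually memory-bandwidth bound, since every weight must be streamed from memory once per generated token. A minimal sketch, where the bandwidth and model-size numbers are illustrative assumptions (not measurements of any specific hardware):

```python
def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Rough upper bound on decode speed when each generated token
    requires streaming all model weights from memory once."""
    return bandwidth_gb_s / model_size_gb

# A ~7B-parameter model at 8-bit precision, roughly 7 GB of weights
# (an assumption for illustration).
model_gb = 7.0

# ~dual-channel laptop DDR5-class bandwidth vs. high-end GDDR-class
# bandwidth; both figures are ballpark assumptions.
laptop_cpu = decode_tokens_per_sec(100.0, model_gb)
discrete_gpu = decode_tokens_per_sec(1000.0, model_gb)

print(f"laptop-class:  ~{laptop_cpu:.0f} tok/s")
print(f"discrete GPU:  ~{discrete_gpu:.0f} tok/s")
```

Under these assumptions the GPU's roughly 10x bandwidth edge translates directly into a ~10x decode-speed ceiling, which is why higher laptop memory bandwidth would help CPU inference so much.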