zlacker

[return to "Zebra-Llama: Towards Efficient Hybrid Models"]
1. aditya+79 2025-12-06 21:45:45
>>mirrir+(OP)
Due to perverse incentives and the long history of models over-claiming accuracy, it's very hard to believe anything until it's open source and can be tested

that being said, I do very much believe that the computational efficiency of models is going to go up [correction] drastically over the coming months, which poses interesting questions about nvidia's throne

*previously miswrote and said computational efficiency will go down

2. credit+Q9 2025-12-06 21:54:12
>>aditya+79
Like this?

https://huggingface.co/amd/Zebra-Llama-8B-8MLA-24Mamba-SFT

3. deepda+1j 2025-12-06 23:12:39
>>credit+Q9
> which does pose interesting questions over nvidia's throne...

> Zebra-Llama is a family of hybrid large language models (LLMs) proposed by AMD that...

Hmmm
