zlacker

[return to "Voxtral Transcribe 2"]
1. janals+Gy[view] [source] 2026-02-04 17:41:39
>>meetpa+(OP)
I noticed that this model is multilingual and understands 14 languages. For many use cases we only need a single language, and the other 13 simply add latency. I believe there will be a trend in the coming years of trimming the fat off these jack-of-all-trades models.

https://aclanthology.org/2025.findings-acl.87/
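
A minimal sketch of the latency point, using openai-whisper as a stand-in multilingual model since I don't have Voxtral's API in front of me; the model size and file name are placeholders. Pinning the language up front lets the model skip the language-detection pass it would otherwise run on the opening audio:

    import time
    import whisper

    model = whisper.load_model("base")  # placeholder model size

    def timed_transcribe(path, language=None):
        # language=None triggers auto-detection; a fixed code skips it
        start = time.perf_counter()
        result = model.transcribe(path, language=language)
        return result["text"], time.perf_counter() - start

    _, t_auto = timed_transcribe("meeting.wav")  # placeholder file; auto-detects
    _, t_en = timed_transcribe("meeting.wav", language="en")  # detection skipped
    print(f"auto: {t_auto:.2f}s  pinned to English: {t_en:.2f}s")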

2. rainco+j61[view] [source] 2026-02-04 20:11:23
>>janals+Gy
Imagine if ChatGPT had started like this and OpenAI had decided to trim coding abilities from the language model because most people don't code.
3. ethmar+ds1[view] [source] 2026-02-04 21:51:56
>>rainco+j61
They've already done the inverse and trimmed non-coding abilities from their language model: https://openai.com/index/introducing-gpt-5-2-codex/. There's already precedent for creating domain-specific models.

I think it's nice to have specialized models for specific tasks that don't try to be generalists. Voxtral Transcribe 2 is already extremely impressive, so imagine how much better it could be if it specialized in one language rather than cramming 14 into a single model.

That said, generalist models definitely have their uses. I do want multilingual transcription models to exist; I just also think that a monolingual model could potentially achieve even better results for its specific language.
