https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-26...
~9GB model.
Amazon's transcription service is $0.024 per minute, so that's a pretty big difference: https://aws.amazon.com/transcribe/pricing/
Why should it be Whisper v3? They even released an open model: https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-26...
Don't be confused if it says "no microphone", the moment you click the record button it will request browser permission and then start working.
I spoke fast and dropped in some jargon and it got it all right - it transcribed what I said exactly, WebAssembly spelling included:
> Can you tell me about RSS and Atom and the role of CSP headers in browser security, especially if you're using WebAssembly?
[^1]: https://www.wired.com/story/mistral-voxtral-real-time-ai-tra...
On the information density of languages: it is true that some languages have a more information-dense textual representation. But all spoken languages convey about the same amount of information in the same amount of time. That's not all that surprising; it just means that human brains have an optimal range at which they process information.
Further reading: Coupé, Christophe, et al. "Different Languages, Similar Encoding Efficiency: Comparable Information Rates across the Human Communicative Niche." Science Advances. https://doi.org/10.1126/sciadv.aaw2594
https://huggingface.co/nvidia/nemotron-speech-streaming-en-0...
https://github.com/m1el/nemotron-asr.cpp https://huggingface.co/m1el/nemotron-speech-streaming-0.6B-g...
[0] https://www.microsoft.com/en-us/research/wp-content/uploads/...
I used to use Dragon Dictation to draft my first novel; I had to learn a 'language' to tell the rudimentary engine how to recognize my speech.
And then I discovered [1] and have been using it for some basic speech recognition, amazed at what a local model can do.
But it can't transcribe any text until I finish recording a file; only then does it start working, so the feedback loop runs in very slow batches.
And now you've posted this cool solution which streams audio to a model in an endless series of small chunks, amazing, just amazing.
Now if only I could figure out how to contribute to Handy or something similar to do speech-to-text in streaming mode, local STT would be a solved problem for me.
Handy – Free open source speech-to-text app https://github.com/cjpais/Handy
You could use their API (they have this snippet):
```
curl -X POST "https://api.mistral.ai/v1/audio/transcriptions" \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -F model="voxtral-mini-latest" \
  -F file=@"your-file.m4a" \
  -F diarize=true \
  -F timestamp_granularities="segment"
```
Via the API it took 18s to process a 20-minute audio file I had lying around of someone reviewing a product.
I'm sure there will be ways of running this locally soon (if they aren't on Hugging Face right now), but the API is $0.003/min. For something like 120 meetings (10 years of monthly ones) at 1 hour each, that's roughly $20 (back-of-the-envelope below). Depending on whether they're 1 or 10 hours (or weekly rather than monthly, or 10 parallel sessions or something), this might be a price you're willing to pay if you get the results back in an afternoon.
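As a sanity check on that figure (my own arithmetic, assuming 120 one-hour meetings at the $0.003/min rate above):

```
# 120 meetings x 60 min each x $0.003/min
echo "120 * 60 * 0.003" | bc   # 21.600, i.e. roughly $20
```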
edit - their realtime model can be run with vLLM; the batch model is not open
> We've worked hand-in-hand with the vLLM team to have production-grade support for Voxtral Mini 4B Realtime 2602 with vLLM. Special thanks goes out to Joshua Deng, Yu Luo, Chen Zhang, Nick Hill, Nicolò Lucchesi, Roger Wang, and Cyrus Leung for the amazing work and help on building a production-ready audio streaming and realtime system in vLLM.
https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-26...
https://docs.vllm.ai/en/latest/serving/openai_compatible_ser...
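A minimal serving sketch (my own, not from the card): `vllm serve` is vLLM's standard entry point, the model id is assumed from the name quoted above, and the mistral-format flags are what earlier Voxtral releases required, so check the model card for the exact realtime invocation:

```
# Hypothetical sketch: serve the realtime model behind vLLM's
# OpenAI-compatible server; flags mirror earlier Voxtral releases.
vllm serve mistralai/Voxtral-Mini-4B-Realtime-2602 \
  --tokenizer_mode mistral --config_format mistral --load_format mistral
```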
https://github.com/pipecat-ai/nemotron-january-2026/
Discovered through this Twitter post:
I think it's nice to have specialized models for specific tasks that don't try to be generalists. Voxtral Transcribe 2 is already extremely impressive, so imagine how much better it could be if it specialized in specific languages rather than cramming 14 languages into one model.
That said, generalist models definitely have their uses. I do want multilingual transcription models to exist; I just also think that monolingual models could potentially achieve even better results for a specific language.
how does it compare to sparrow-1?
Impressive indeed. It works way better than the speech recognition I first got demoed in... 1998? I remember you had to "click" on the mic every time you wanted to speak and, well, not only was the transcription bad, it was so bad that it'd try to interpret the sound of the click as a word.
It was so bad I told several people not to invest in what was back then a national tech darling:
https://en.wikipedia.org/wiki/Lernout_%26_Hauspie
That turned out to be a massive fraud.
But ...
> I tried speaking in 2 languages at once, and it picked it up correctly.
I'm a native French speaker and I tried a very simple sentence mixing French and English:
"Pour un pistolet je préfère un red dot mais pour une carabine je préfère un ACOG" (aka "For a pistol I prefer a red dot but for a carbine I prefer an ACOG")
And instead I got this:
"Je prépare un redote, mais pour une carabine, je préfère un ACOG."
"Je prépare un redote ..." doesn't mean anything and it's not at all what I said.
I like it, it's impressive, but it got the first half of the very first sentence I tried entirely wrong.
In general there is a concept called the "curse of multilinguality": past a certain point, adding more languages to a fixed-capacity model degrades its performance on each individual language.
It had a horrible speech recognition rate and was very glitchy. Customers hated it, and there were lots of returns and complaints.
A few years later, L&H went bankrupt. And so did Articulate Systems.
https://applerescueofdenver.com/products-page/macintosh-to-p...
Wow, Voxtral is amazing. It will be great when someone stitches this up so an LLM starts thinking and researching for you before you actually finish talking.
Like, create a conversation partner with sub-0.5 second latency. For example, you ask it a multi-part question and, as soon as you finish talking, it gives you the answer to the first part while it looks up the rest, then stitches it together so that there's no break.
The 2-3 second latency of existing voice chatbots is a non-starter for most humans.
Well, I'm happy to report I integrated the new Mistral 3 and have been truly astounded by the results. I'm still not a big fan of the model wrt factual information - it seems to be especially confident and especially wrong if left to its own devices - but with http://phrasing.app I do most of the data aggregation myself and just use an LLM to format it. Mistral 3 was a drop-in replacement with 3x the quality (it was already very, very good), a 0% error rate for my use case (an issue where it occasionally went off the rails was entirely solved), and it sticks to my formatting guidelines perfectly (which even gpt-5-pro failed at). Plus it was somehow even cheaper.
I'm using Scribe v2 at the moment for STT, but I'm very excited now to try integrating Voxtral Transcribe. The language support is a little lacking for my use cases, but I can always fall back to Scribe and amortize the cost across languages. I was actually due to work on transcription for Phrasing very soon, so I guess look forward to my (hopefully) glowing review on their next HN launch! XD