Yes, but it takes a while to build up that weighted list, and it can be quite hefty to parse. So they may be building this behind the scenes currently. As another commenter pointed out, being able to correct a chunk and send it back to help the algorithm would be a nice feature here (rough sketch below).
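For illustration only, here's a minimal sketch of what collecting those corrections could look like; the Flask route and the corrections.jsonl file are made-up names, not anything the site actually exposes:

    # Hypothetical correction-feedback endpoint (Python/Flask).
    import json
    from flask import Flask, request

    app = Flask(__name__)

    @app.post("/corrections")
    def save_correction():
        # Persist (original chunk, user correction) pairs; they can
        # later feed a phrase list or LM training data.
        record = {
            "original": request.json["original"],
            "corrected": request.json["corrected"],
        }
        with open("corrections.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")
        return {"status": "saved"}

The pairs accumulate in a plain JSONL file, so they're trivial to dump into whatever retraining or phrase-boosting pipeline comes later.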
Side note: I'm dealing with this issue at the moment. If anyone has a good resource on reducing the workload, I'd love a link!
Edit: spelling
https://news.ycombinator.com/item?id=23322321
At the 33-second mark: https://twitter.com/jamescham/status/1265512829806927873
"foaming at the mouth" was never even close to being uttered on the radio. I'm guessing the (flawed) model inserted that part because of the proximity to the word "needle" and "assistance".
Maybe? No idea... this website is totally fucked.
The quality is currently limited by Google's API. I am working on getting some pre-trained models implemented, but voice processing is not my speciality as a software engineer.
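One thing that can help within Google's API itself is speech adaptation: the Speech-to-Text RecognitionConfig accepts SpeechContext phrase hints with an optional boost, which biases recognition toward domain vocabulary. A minimal sketch with the Python client (the phrases, file name, and audio settings here are just assumptions for illustration):

    # Bias Google Speech-to-Text toward domain phrases via speech adaptation.
    from google.cloud import speech

    client = speech.SpeechClient()

    # Hypothetical domain phrases; boost (0-20) biases recognition toward them.
    context = speech.SpeechContext(
        phrases=["requesting assistance", "dispatch", "code three"],
        boost=15.0,
    )

    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        speech_contexts=[context],
    )

    with open("radio_clip.wav", "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())

    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        print(result.alternatives[0].transcript)

It won't fix everything, but it's a cheap way to cut down on exactly the kind of vocabulary substitution described above.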
I do NOT want to spread misinformation, nor do I want to unjustly slander anyone. Tonight I will add a disclaimer about the limitations of our service and make sure it is front and center on the website.
Hopefully we can create a model that delivers better results.
Typically you would use or train a language model (LM) for your domain, or one tuned specifically to your dataset, and use it to rescore the recognizer's output.
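For concreteness, a minimal sketch of that rescoring idea using KenLM; the domain.arpa model, the hypotheses, and the scores are all hypothetical:

    # Rescore recognizer n-best hypotheses with a domain KenLM model.
    import kenlm

    lm = kenlm.Model("domain.arpa")  # trained on in-domain text

    def rescore(hypotheses, acoustic_scores, lm_weight=0.5):
        # Combine each acoustic score with the LM log10 probability
        # and return the best-scoring transcript.
        def combined(hyp, ac):
            return ac + lm_weight * lm.score(hyp, bos=True, eos=True)
        return max(zip(hypotheses, acoustic_scores),
                   key=lambda pair: combined(*pair))[0]

    print(rescore(
        ["unit four requesting assistance",
         "unit for requesting a sister"],
        acoustic_scores=[-12.3, -11.9],
    ))

The point is that even if the acoustic model slightly prefers the garbled hypothesis, the domain LM pulls the plausible transcript back to the top.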