zlacker

[return to "Qwen3-Coder-Next"]
1. skhame+l9 2026-02-03 16:38:51
>>daniel+(OP)
It’s hard to overstate how wild this model would be if it performs as claimed: close to Sonnet 4.5 on agent-assisted coding (SWE-bench) while using only 3B active parameters. That is obscenely small for the claimed performance.
2. cirrus+ab 2026-02-03 16:45:25
>>skhame+l9
If it sounds too good to be true…
3. Der_Ei+I11 2026-02-03 20:16:51
>>cirrus+ab
It literally always is. HN thought DeepSeek and every version of Kimi would finally dethrone the bigger models from Anthropic, OpenAI, and Google. They're always wrong, and the average knowledge of LLMs here is shockingly low.
4. cmrdpo+Zk1 2026-02-03 21:52:52
>>Der_Ei+I11
Nobody has been saying they'd be dethroned. We're saying they're often "good enough" for many use cases, and that they're doing a good job of stopping the big players from building a giant, expensive moat around their businesses.

Chinese labs are acting as a disruption to Altman and co.'s attempt to create big-tech monopolies, and that's why some of us cheer for them.
