Introduction
Today, we are excited to announce the release of Qwen3, the latest addition to the Qwen family of large language models. Our flagship model, Qwen3-235B-A22B, achieves competitive results in benchmark evaluations of coding, math, general capabilities, etc., when compared to other top-tier models such as DeepSeek-R1, o1, o3-mini, Grok-3, and Gemini-2.5-Pro. Additionally, the small MoE model, Qwen3-30B-A3B, outcompetes QwQ-32B, which has 10 times as many activated parameters, and even a tiny model like Qwen3-4B can rival the performance of Qwen2.5-72B-Instruct.
I’m actually more medium on this!
Only 32K context without YaRN, and with YaRN Qwen 2.5 was kinda hit or miss.
No 32B base model. Is that a middle finger to the DeepSeek distills?
It really feels like “more of Qwen 2.5/1.5” architecture-wise. I was hoping for better attention mechanisms, QAT, a BitNet test, logit distillation… something new other than some training-data optimizations and more scale.
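For the YaRN point in that comment: the Qwen2.5 model cards describe extending the native 32K window by adding a rope_scaling block (type yarn, factor 4.0) to the model config. Below is a minimal sketch of applying the same override at load time with transformers; the checkpoint name, the 4x factor, and the exact rope_scaling key names are assumptions to check against the model card and your transformers version.

```python
# Sketch: enable YaRN rope scaling when loading a Qwen2.5-style model.
# Assumptions: hypothetical checkpoint name, 4x scaling of the 32K window,
# and the "type"/"factor"/"original_max_position_embeddings" key names used
# in the Qwen2.5 docs (newer transformers versions may expect "rope_type").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # illustrative choice only

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    # Config overrides: stretch the 32K RoPE window to ~128K via YaRN.
    rope_scaling={
        "type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
    max_position_embeddings=131072,
)
```

Note that most serving stacks apply this scaling statically, i.e. to short prompts as well, which the Qwen docs flag as a possible quality hit on shorter texts and may be part of why results with YaRN felt hit or miss.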
There actually is a 32B dense model.
Yeah, but only an Instruct version. They didn’t release a 32B base model like they did for the 30B MoE.
That could be intentional, to stop anyone from building on their 32B dense model.
Huh, I didn’t realize that, thanks. Lame that they would hold back the biggest size most consumers would ever actually run.
It could be an oversight; no one has answered yet. Not many people are asking, either, heh.