Releasing a 24B-A2B model on the same day as Qwen3.5-35B-A3B is NOT great timing 🥲

Quoted post from Qwen (@Alibaba_Qwen):

Introducing the Qwen 3.5 Medium Model Series
Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B

✨ More intelligence, less compute.
• Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B: a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts.
• Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models, especially in more complex agent scenarios.
• Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring:
  – 1M context length by default
  – Official built-in tools

Hugging Face: huggingface.co/collections/Q…
ModelScope: modelscope.cn/collections/Qw…
Qwen3.5-Flash API: modelstudio.console.alibabac…

Try in Qwen Chat:
Flash: chat.qwen.ai/?models=qwen3.5…
27B: chat.qwen.ai/?models=qwen3.5…
35B-A3B: chat.qwen.ai/?models=qwen3.5…
122B-A10B: chat.qwen.ai/?models=qwen3.5…

Would love to hear what you build with it.

https://nitter.net/Alibaba_Qwen/status/2026339351530188939#m
View original post on X · @maximelabonne, 2026-02-24 17:01 UTC
