AI Dynamics

Global AI News Aggregator

Unfortunate timing: 24B-A2B release coincides with Qwen3.5 launch

Releasing a 24B-A2B model on the same day as Qwen3.5-35B-A3B is NOT great timing 🥲

Quoted post from Qwen (@Alibaba_Qwen):

🚀 Introducing the Qwen 3.5 Medium Model Series
Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B

✨ More intelligence, less compute.

• Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B, a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts.
• Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models, especially in more complex agent scenarios.
• Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring:
  – 1M context length by default
  – Official built-in tools

🔗 Hugging Face: huggingface.co/collections/Q…
🔗 ModelScope: modelscope.cn/collections/Qw…
🔗 Qwen3.5-Flash API: modelstudio.console.alibabac…

Try in Qwen Chat 👇
Flash: chat.qwen.ai/?models=qwen3.5…
27B: chat.qwen.ai/?models=qwen3.5…
35B-A3B: chat.qwen.ai/?models=qwen3.5…
122B-A10B: chat.qwen.ai/?models=qwen3.5…

Would love to hear what you build with it.

https://nitter.net/Alibaba_Qwen/status/2026339351530188939#m

→ View original post on X — @maximelabonne, 2026-02-24 17:01 UTC
