Mistral dropped two reasoning models: 24B param, open-source Magistral Small and enterprise-specific Magistral Medium
— Rowan Cheung (@rowancheung) June 11, 2025
Both models' chain-of-thought works across global languages and alphabets
However, STEM and coding benchmarks lag behind top rivals