Mistral AI released Small 3.1, a SOTA multilingual and multimodal LLM:
— 24B parameters (can run on a laptop)
— 128k token context window
— Outperforms Gemma 3 and GPT-4o Mini on most benchmarks
— Inference speed of 150 tokens/sec
— Open-source under the Apache 2.0 license