AI Dynamics

Global AI News Aggregator

Mistral Small 3: New Efficient 24B Open-Source Language Model

Introducing Small 3, our most efficient and versatile model yet! Pre-trained and instructed versions, Apache 2.0 license, 24B parameters, 81% MMLU, 150 tok/s. No synthetic data, so it is a great base for any reasoning task. Happy building!

→ View original post on X — @mistralai

Comments
