
LFM2-8B-A1B: Efficient MoE Language Model Released

LFM2-8B-A1B just dropped on @huggingface!

> 8.3B params with only 1.5B active/token
> Quality ≈ 3–4B dense, yet faster than Qwen3-1.7B
> MoE designed to run on phones/laptops (llama.cpp / vLLM)
> Pre-trained on 12T tokens → strong math/code/IF
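Because only about 1.5B of the 8.3B parameters are active per token, the model stores the full mixture-of-experts weights but computes roughly like a small dense model at inference time. Below is a minimal sketch of trying the model with Hugging Face transformers; the repo ID "LiquidAI/LFM2-8B-A1B", the use of a chat template, and the generation settings are assumptions for illustration, not details confirmed by the post.

```python
# Minimal sketch: running LFM2-8B-A1B with Hugging Face transformers.
# Assumes the checkpoint is published as "LiquidAI/LFM2-8B-A1B" and is
# supported by a recent transformers release; adjust the ID if needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-8B-A1B"  # assumed Hugging Face repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # full 8.3B weights load; only ~1.5B are active per token
    device_map="auto",
)

# Build a chat-formatted prompt (assumes the tokenizer ships a chat template).
prompt = "Explain mixture-of-experts routing in one paragraph."
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For local deployment on laptops, the post points to llama.cpp and vLLM as supported runtimes; the transformers path above is just the most portable way to sanity-check the model.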

→ View original post on X: @maximelabonne
