Meta Llama 4 Maverick and Scout: Massive Multimodal Models Launch

We are excited to partner with @AIatMeta to welcome Llama 4 Maverick (402B) and Scout (109B), natively multimodal language models, to the Hugging Face Hub with Xet storage. Both are mixture-of-experts (MoE) models, trained on up to 40 trillion tokens and pre-trained on 200 languages.