AI Dynamics

Global AI News Aggregator

Jamba Reasoning 3B: Lightest Model Running on Just 2.25 GiB RAM

How much RAM do you need to run tiny models? Jamba Reasoning 3B runs on just 2.25 GiB, the lightest among small models like Qwen (@Alibaba_Cloud), Llama (@Meta), Granite (@IBM), and Gemma (@GoogleDeepMind). Try Jamba Reasoning 3B yourself: https://huggingface.co/collections/ai21labs/jamba-reasoning-3b
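The 2.25 GiB figure is easy to sanity-check with back-of-envelope arithmetic. The sketch below assumes 4-bit quantized weights; the quantization level and overhead breakdown are assumptions for illustration, not figures stated in the post.

```python
# Back-of-envelope RAM estimate for a ~3B-parameter model.
# Assumption (not from the post): weights are 4-bit quantized,
# i.e. 0.5 bytes per parameter.

PARAMS = 3e9            # ~3 billion parameters
BYTES_PER_PARAM = 0.5   # 4-bit quantization

weights_gib = PARAMS * BYTES_PER_PARAM / 2**30
print(f"Quantized weights: ~{weights_gib:.2f} GiB")

# Whatever remains under the reported 2.25 GiB total must cover the
# KV cache and runtime buffers. Jamba's hybrid Mamba/Transformer
# design keeps the KV cache small, which helps the total stay low.
headroom_gib = 2.25 - weights_gib
print(f"Headroom for cache/runtime: ~{headroom_gib:.2f} GiB")
```

At 4 bits, the weights alone come to roughly 1.4 GiB, leaving under 1 GiB of headroom, which is consistent with the 2.25 GiB total the post reports.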

→ View original post on X (@ai21labs)
