How much RAM do you need to run tiny models? Jamba Reasoning 3B runs on just 2.25 GiB, the lightest among small models like Qwen (@Alibaba_Cloud), Llama (@Meta), Granite (@IBM), and Gemma (@GoogleDeepMind). Try Jamba Reasoning 3B yourself: https://huggingface.co/collections/ai21labs/jamba-reasoning-3b…
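As a rough sanity check on the 2.25 GiB figure, here is some illustrative back-of-the-envelope arithmetic (an assumption for intuition, not AI21's published memory breakdown): a 3B-parameter model quantized to 4 bits needs only about 1.4 GiB for the weights themselves, leaving headroom for activations and cache within a ~2.25 GiB budget.

```python
GIB = 2 ** 30  # bytes per GiB

def weight_footprint_gib(n_params: float, bits_per_param: float) -> float:
    """Approximate RAM needed for the model weights alone,
    ignoring activations, KV cache, and runtime overhead."""
    return n_params * bits_per_param / 8 / GIB

# A 3B-parameter model at common precisions (weights only):
for bits in (16, 8, 4):
    print(f"{bits}-bit: {weight_footprint_gib(3e9, bits):.2f} GiB")
# → 16-bit: 5.59 GiB
# → 8-bit: 2.79 GiB
# → 4-bit: 1.40 GiB
```

The takeaway: at 16-bit precision a 3B model would not fit in 2.25 GiB at all, which is why aggressive quantization is what makes laptop- and phone-class deployment of these models practical.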
Jamba Reasoning 3B: Lightest Model Running on Just 2.25 GiB RAM