Exciting news! @GroqInc ran #LLaMA, @Meta's latest #LLM, using our kernel-less #compiler method. We downloaded the model on 2/27, and our small team had it running on Groq hardware within days after "De-NVIDIA-fying" the code. Read more in the thread + demo details coming soon. #AI #ML
Groq Successfully Runs Meta’s LLaMA Model on Custom Hardware