"and other models?" Here's a quick demo of us running @MistralAI 7B, a smaller #LLM, on a Groq LPU™ system. No lag, just "fluid & fluent" experiences, thanks to the advantages of the Language Processing Unit (#LPU). http://Groq.com #groqspeed #betterongroq https://youtu.be/9c078xKGwdU
Groq LPU Demonstrates Mistral AI 7B Model Performance