Perplexity's Sonar—built on Llama 3.3 70b—outperforms GPT-4o-mini and Claude 3.5 Haiku while matching or surpassing top models like GPT-4o and Claude 3.5 Sonnet in user satisfaction. At 1200 tokens/second, Sonar is optimized for answer quality and speed.
— Perplexity (@perplexity_ai) February 11, 2025