There's a new king of open-source. The new LLM from @databricks just beat Mixtral!
– 132B total params (16 experts), 36B active (4 experts)
– Trained on *12T* tokens
– 32K context length

From my initial tests, I'm really impressed.
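The "36B active of 132B total" figure comes from top-k expert routing: each token is sent to only 4 of the 16 experts, so most expert weights sit idle on any given forward pass. A minimal sketch of that routing step, as an illustration only (not the model's actual implementation):

```python
import numpy as np

def top_k_routing(gate_logits: np.ndarray, k: int = 4) -> np.ndarray:
    """Pick the top-k experts per token and softmax-normalize their weights.

    gate_logits: (num_tokens, num_experts) router scores.
    Returns (num_tokens, num_experts) weights, zero outside the top-k.
    """
    weights = np.zeros_like(gate_logits, dtype=float)
    for t in range(gate_logits.shape[0]):
        top = np.argsort(gate_logits[t])[-k:]       # indices of the k largest scores
        scores = gate_logits[t, top]
        scores = np.exp(scores - scores.max())      # numerically stable softmax over top-k
        weights[t, top] = scores / scores.sum()
    return weights

# With 16 experts and k=4, only 4/16 of the expert parameters fire per
# token. Attention and embedding weights are shared across all tokens,
# which is why the active fraction (36B/132B) exceeds 4/16 exactly.
logits = np.random.randn(2, 16)
w = top_k_routing(logits, k=4)
print((w > 0).sum(axis=1))  # 4 experts active per token
```

Note this loops per token for clarity; a real router vectorizes the selection across the batch.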
Databricks' New LLM Beats Mixtral in the Open-Source Race