GLM-5 Architecture: Weights Released, Key Features Unveiled

The weights are out! Here's the GLM-5 architecture comparison. GLM-5:
– is bigger than its predecessor (mainly more experts) but has a relatively similar active parameter count
– uses multi-head latent attention (sketched below)
– uses DeepSeek Sparse Attention (sketched below)
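Since the post only names the techniques, here is a minimal PyTorch sketch of multi-head latent attention in the DeepSeek-V2 style: keys and values are reconstructed from a shared low-rank latent, so the KV cache stores only that small latent per token. The class name and all dimensions below are illustrative, not GLM-5's actual configuration, and the RoPE decoupling used in practice is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadLatentAttention(nn.Module):
    """Simplified MLA: the KV cache holds one small latent per token."""
    def __init__(self, d_model=1024, n_heads=8, d_latent=128):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.w_q = nn.Linear(d_model, d_model, bias=False)         # query proj
        self.w_kv_down = nn.Linear(d_model, d_latent, bias=False)  # -> cached latent
        self.w_k_up = nn.Linear(d_latent, d_model, bias=False)     # latent -> keys
        self.w_v_up = nn.Linear(d_latent, d_model, bias=False)     # latent -> values
        self.w_o = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x):  # x: (batch, seq, d_model)
        b, s, _ = x.shape
        c_kv = self.w_kv_down(x)  # (b, s, d_latent): all the KV cache stores
        split = lambda t: t.view(b, s, self.n_heads, self.d_head).transpose(1, 2)
        q = split(self.w_q(x))
        k = split(self.w_k_up(c_kv))  # keys reconstructed from the latent
        v = split(self.w_v_up(c_kv))  # values reconstructed from the latent
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.w_o(out.transpose(1, 2).reshape(b, s, -1))
```

The payoff is memory: instead of caching full per-head keys and values (d_model each), only the d_latent vector is kept, which is what makes long contexts cheaper.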
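And a minimal sketch of the top-k selection idea behind DeepSeek Sparse Attention, assuming the DeepSeek-V3.2-style design in which a lightweight indexer scores past tokens and full attention runs only over each query's top-k hits. The function name, dimensions, and simplifications (no per-head indexer weights or ReLU) are mine, not GLM-5's implementation.

```python
import torch
import torch.nn.functional as F

def dsa_topk_attention(q, k, v, idx_q, idx_k, top_k=64):
    """q, k, v: (batch, heads, seq, d_head); idx_q, idx_k: (batch, seq, d_idx)."""
    b, h, s, d = q.shape
    # Indexer: cheap per-token scores shared across all attention heads.
    scores = torch.einsum("bqd,bkd->bqk", idx_q, idx_k)           # (b, s, s)
    future = torch.triu(torch.ones(s, s, dtype=torch.bool, device=q.device), 1)
    scores = scores.masked_fill(future, float("-inf"))            # causal mask
    # Keep each query's top-k highest-scoring past tokens (ties may keep more).
    kth = scores.topk(min(top_k, s), dim=-1).values[..., -1:]     # k-th best score
    keep = (scores >= kth) & ~future                              # (b, s, s) bool
    # True = attend; broadcast the keep-mask over heads.
    return F.scaled_dot_product_attention(q, k, v, attn_mask=keep.unsqueeze(1))
```

With top_k fixed, each query attends to a bounded number of tokens regardless of context length, which is the point of the design.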