Advanced Llama Architecture: Rotary Embeddings and ReLU² MLP

> Llama-like architecture:
>
> - dense transformer
> - rotary only (no positional embeddings)
> - QK norm
> - untied embedding/unembedding
> - norm after token embedding
> - ReLU² MLP
> - no biases in linears
> - no learnable RMSNorm params
> - MQA
> - logit softcap
> - optimizer =
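
The quoted thread is a terse checklist, so here is a minimal PyTorch sketch of how those pieces might fit together in one small decoder. Everything in it is an assumption layered on top of the list: the names (`TinyLM`, `MQABlock`), the dimensions, the RoPE base, and the softcap value of 30 are illustrative rather than details from the post, and the truncated optimizer choice is not filled in.

```python
# Illustrative sketch only: hyperparameters, class names, and the final norm
# before the head are assumptions, not details from the quoted post.
import torch
import torch.nn as nn
import torch.nn.functional as F


def rms_norm(x, eps=1e-6):
    # RMSNorm with no learnable scale ("no learnable rmsnorm params").
    return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps)


def rope(x, base=10000.0):
    # Rotary position embedding, rotate-half variant; x: (batch, heads, seq, head_dim).
    half = x.shape[-1] // 2
    freqs = base ** (-torch.arange(half, device=x.device).float() / half)
    angles = torch.arange(x.shape[-2], device=x.device).float()[:, None] * freqs
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


class MQABlock(nn.Module):
    """One decoder block: multi-query attention plus a ReLU^2 MLP, all linears bias-free."""

    def __init__(self, dim=512, n_heads=8, mlp_mult=4):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, dim // n_heads
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.kv_proj = nn.Linear(dim, 2 * self.head_dim, bias=False)  # one shared KV head
        self.o_proj = nn.Linear(dim, dim, bias=False)
        self.up = nn.Linear(dim, mlp_mult * dim, bias=False)
        self.down = nn.Linear(mlp_mult * dim, dim, bias=False)

    def forward(self, x):
        b, t, d = x.shape
        h = rms_norm(x)  # pre-attention norm, parameter-free
        q = self.q_proj(h).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k, v = self.kv_proj(h).split(self.head_dim, dim=-1)
        k = k.view(b, t, 1, self.head_dim).transpose(1, 2)
        v = v.view(b, t, 1, self.head_dim).transpose(1, 2)
        q, k = rms_norm(q), rms_norm(k)  # QK norm before positions/attention
        q, k = rope(q), rope(k)          # RoPE is the only positional signal
        # MQA: broadcast the single KV head across every query head.
        k = k.expand(b, self.n_heads, t, self.head_dim)
        v = v.expand(b, self.n_heads, t, self.head_dim)
        attn = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        x = x + self.o_proj(attn.transpose(1, 2).reshape(b, t, d))
        x = x + self.down(F.relu(self.up(rms_norm(x))).square())  # ReLU^2 MLP
        return x


class TinyLM(nn.Module):
    def __init__(self, vocab=32000, dim=512, n_layers=4, softcap=30.0):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)              # untied from the output head
        self.blocks = nn.ModuleList(MQABlock(dim) for _ in range(n_layers))
        self.lm_head = nn.Linear(dim, vocab, bias=False)   # separate unembedding
        self.softcap = softcap

    def forward(self, tokens):
        x = rms_norm(self.embed(tokens))                   # norm right after token embedding
        for blk in self.blocks:
            x = blk(x)
        logits = self.lm_head(rms_norm(x))
        # Logit softcap: squash logits smoothly into (-softcap, softcap).
        return self.softcap * torch.tanh(logits / self.softcap)


if __name__ == "__main__":
    model = TinyLM()
    out = model(torch.randint(0, 32000, (2, 16)))
    print(out.shape)  # torch.Size([2, 16, 32000])
```

Roughly, these choices trade a little expressiveness for simplicity and stability: bias-free linears and parameter-free RMSNorm drop parameters that rarely help at scale, QK norm and logit softcapping keep attention scores and output logits bounded, and MQA shares a single key/value head across all query heads to shrink the KV cache.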

→ View the original post on X by @theahmadosman.
