Some context on why our smallest Llama 3 model went from 7B → 8B. More details on the changes to the tokenizer in the full conversation with @astongzhangAZ ➡️ https://youtu.be/Tmdk_H2WDj4

— AI at Meta (@AIatMeta), July 16, 2024