What I'm excited about: Use Llama-3.1-405B to generate a high quality reasoning dataset, then fine-tune Llama-3.1-8B models. Imagine having 4o-mini performance models, but open sourced, and tuned for your use cases.
— Jiquan Ngiam (@JiquanNgiam) July 23, 2024
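The idea in the tweet — distilling a large teacher model into a small student via synthetic data — can be sketched roughly as below. Here `teacher_generate` is a hypothetical stand-in for an actual inference call to Llama-3.1-405B (via whatever serving stack you use), and the JSONL chat format is one common convention for fine-tuning datasets; none of this is an official pipeline, just an illustration of the shape of the workflow.

```python
import json

def teacher_generate(prompt: str) -> str:
    """Placeholder for a real call to the 405B teacher model.

    In practice this would hit an inference endpoint and return the
    model's step-by-step reasoning for the prompt.
    """
    return f"Step-by-step reasoning for: {prompt}"

def build_distillation_dataset(prompts: list[str]) -> list[dict]:
    """Pair each prompt with the teacher's completion as a chat record.

    The resulting records are the training examples used to fine-tune
    the smaller student model (e.g. Llama-3.1-8B).
    """
    records = []
    for prompt in prompts:
        records.append({
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": teacher_generate(prompt)},
            ]
        })
    return records

if __name__ == "__main__":
    prompts = ["If x + 3 = 7, what is x?", "Is 91 prime?"]
    dataset = build_distillation_dataset(prompts)
    # Write JSONL, a format many fine-tuning tools accept.
    with open("reasoning_dataset.jsonl", "w") as f:
        for rec in dataset:
            f.write(json.dumps(rec) + "\n")
```

The fine-tuning step itself (loading the 8B base model and training on this file) would then be done with a standard supervised fine-tuning toolkit.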