Just catching up with the ReLoRA paper (https://arxiv.org/abs/2307.05695), which explores whether LoRA can be used for pretraining LLMs (vs. finetuning). Looks promising! Caveat: they pretrained models only up to 350M parameters (the smallest Llama model is 7B parameters, for comparison).
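For a rough picture of the core idea, here is a minimal PyTorch sketch of how I understand it: train LoRA-style low-rank factors on top of the full weights and periodically merge them back into the base matrix, so that successive low-rank updates can add up to a higher-rank change overall. The class and method names (ReLoRALinear, merge_and_reset) and the hyperparameters are my own illustration, not the paper's code, and this omits details like the optimizer-state reset and learning-rate restarts the paper uses.

```python
import torch
import torch.nn as nn

class ReLoRALinear(nn.Module):
    """Linear layer with a low-rank (LoRA-style) update that is
    periodically merged back into the full weight matrix."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        # Low-rank factors: delta_W = scale * B @ A
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Full-rank base output plus the low-rank correction.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

    @torch.no_grad()
    def merge_and_reset(self) -> None:
        # Fold the accumulated low-rank update into the base weights,
        # then reinitialize the factors so the next cycle can learn a
        # fresh low-rank direction.
        self.base.weight += self.scale * (self.lora_b @ self.lora_a)
        nn.init.normal_(self.lora_a, std=0.01)
        nn.init.zeros_(self.lora_b)
```

In a training loop, you would call merge_and_reset() every N steps (and, per the paper, partially reset the optimizer state for the low-rank parameters) so each restart explores a new low-rank subspace.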