AI Dynamics

Global AI News Aggregator

ReLoRA Paper: Exploring LoRA for LLM Pretraining Beyond Finetuning

Just catching up with the ReLoRA paper (https://arxiv.org/abs/2307.05695), which explores whether LoRA can be used for pretraining LLMs (as opposed to finetuning). Looks promising! Caveat: they pretrained models only up to 350M parameters (for comparison, the smallest Llama model is 7B parameters).
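For context, a minimal NumPy sketch of the LoRA parameterization and the periodic merge-and-restart step that ReLoRA adds on top of it. All names, shapes, and constants here are illustrative, not taken from the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 2  # hidden size and LoRA rank (toy values)

W = rng.normal(size=(d, d))          # base weight matrix
A = rng.normal(size=(r, d)) * 0.01   # low-rank factor A
B = np.zeros((d, r))                 # B starts at zero, so B @ A = 0 initially

def effective_weight(W, B, A, alpha=1.0, r=r):
    # LoRA replaces a full-rank update of W with a rank-r product:
    # W_eff = W + (alpha / r) * B @ A, training only A and B.
    return W + (alpha / r) * B @ A

def merge_and_reset(W, B, A, alpha=1.0, r=r):
    # ReLoRA's core idea: periodically fold the low-rank update into W
    # and reinitialize A and B, so a sequence of rank-r updates can
    # accumulate into a higher-rank change of W over pretraining.
    W = effective_weight(W, B, A, alpha, r)
    A = rng.normal(size=A.shape) * 0.01
    B = np.zeros_like(B)
    return W, A, B
```

Because B is initialized to zero, the effective weight equals W at the start of each cycle, and each merge leaves the forward pass unchanged at the moment of the restart.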

→ View original post on X — @rasbt
