Just added QLoRA support for all LLMs in Lit-GPT: Llama 2, Falcon, Pythia, StableLM, and all others! You can use it via `python finetune/lora.py --quantize "bnb.nf4"` to save significant GPU memory. I've run a few more benchmarks on the GitHub PR here: https://github.com/Lightning-AI/lit-gpt/pull/275
QLoRA Support Added to Lit-GPT for All LLMs
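The `--quantize "bnb.nf4"` option stores the frozen base-model weights in 4-bit NormalFloat (NF4) instead of 16-bit precision, which is where the bulk of the memory savings comes from. A rough back-of-envelope sketch of the effect for a ~7B-parameter model; the 64-weight block size and fp32 per-block quantization constants are assumptions taken from the QLoRA paper's defaults, not necessarily Lit-GPT's exact configuration:

```python
# Back-of-envelope comparison of weight-storage memory:
# fp16 (16 bits/weight) vs. NF4 (4 bits/weight plus a small
# overhead for per-block quantization constants).

def weight_memory_gib(n_params: int, bits_per_weight: float) -> float:
    """GiB needed to store n_params weights at the given precision."""
    return n_params * bits_per_weight / 8 / 1024**3

N = 7_000_000_000  # ~7B parameters, e.g. Llama 2 7B

fp16_gib = weight_memory_gib(N, 16)

# Assumed NF4 layout: 4 bits per weight, plus one fp32 absmax
# constant per 64-weight block -> 32 / 64 = 0.5 extra bits/weight.
nf4_gib = weight_memory_gib(N, 4 + 32 / 64)

print(f"fp16 weights: {fp16_gib:.1f} GiB")
print(f"nf4 weights:  {nf4_gib:.1f} GiB")
print(f"reduction:    {fp16_gib / nf4_gib:.1f}x")
```

This only counts the frozen base weights; activations, the LoRA adapter weights, and optimizer states add further memory on top, so actual savings in a fine-tuning run will differ from this ratio.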