AI Dynamics

Global AI News Aggregator

LoRA Finetuning: Shifting the Balance in LLM Adaptation

There's a chance that LoRA finetunes work so well that they dramatically alter the finetuning vs. retrieval + few-shot prompting power dynamic in favor of the former for many applications. PEFT (Parameter-Efficient Finetuning, LoRA included) methods are emerging techniques that make it …
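The efficiency idea behind LoRA can be sketched in a few lines. This is an illustrative toy, not any library's actual API: a frozen weight matrix W is left untouched, and only two small low-rank matrices A and B are trained, so the effective weight becomes W + (alpha / r) · B·A. All names and shapes here are assumptions for the example.

```python
def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0):
    """Forward pass of a linear layer with a LoRA adapter.

    W: frozen pretrained weight, d_out x d_in
    A: trainable adapter, r x d_in
    B: trainable adapter, d_out x r
    """
    r = len(A)                         # rank of the low-rank update
    base = matvec(W, x)                # frozen pretrained path
    update = matvec(B, matvec(A, x))   # low-rank adapter path
    scale = alpha / r
    return [b + scale * u for b, u in zip(base, update)]

# Toy example: d_in = d_out = 2, rank r = 1.
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen identity weight (4 params, untrained)
A = [[1.0, 1.0]]               # trainable, 1 x 2
B = [[0.5], [0.5]]             # trainable, 2 x 1
x = [2.0, 3.0]
print(lora_forward(W, A, B, x, alpha=1.0))  # → [4.5, 5.5]
```

The parameter-efficiency point: training A and B costs r · (d_in + d_out) parameters instead of d_in · d_out for a full finetune, which for large layers and small r is a tiny fraction of the model.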

→ View original post on X — @karpathy
