
LoRA Weight Matrix Factorization During Fine-tuning Explained

You mean as in LoRA (low-rank adaptation)? You are factorizing the weight matrices, but only during fine-tuning. The self-attention computation itself is still the same (if you ignore that the weight matrices are different).

→ View original post on X — @rasbt
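To make the point concrete, here is a minimal PyTorch sketch of the idea (not from the original post; the class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative). The pretrained weight W stays frozen, and only two small low-rank factors A and B are trained during fine-tuning, so the layer's forward computation is structurally the same as a plain linear layer with one extra low-rank term:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Linear layer with a frozen base weight W and a trainable
    low-rank update, following the LoRA formulation
    W' = W + (alpha / r) * B @ A."""

    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False  # pretrained W is frozen during fine-tuning

        # Low-rank factors: A is (r x in), B is (out x r); B starts at zero
        # so the layer initially behaves exactly like the pretrained one.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Same computation as an ordinary linear layer, plus a
        # low-rank correction computed from the trainable factors.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Usage sketch: drop-in replacement for a projection inside self-attention.
layer = LoRALinear(768, 768, r=8, alpha=16)
x = torch.randn(2, 10, 768)   # (batch, seq_len, hidden)
out = layer(x)                # same shape as a plain nn.Linear(768, 768)
```

In a transformer, layers like this would typically replace the query and value projections inside self-attention; the attention computation itself (softmax over QKᵀ followed by weighting V) is untouched, only the weight matrices producing Q, K, V differ.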
