I really like the LLaMA-Adapter method! In a nutshell, it's a way to finetune LLMs more efficiently: it adds a tunable prefix to the key and value tensors in the self-attention layers. Fun fact: it's not specific to LLaMA. Use it with any LLM!
LLaMA-Adapter: Efficient LLM Fine-tuning with Tunable Prefixes
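To make the prefix idea concrete, here is a minimal NumPy sketch of a single attention call where a learnable prefix is prepended to the key and value tensors. This is only an illustration of the prefix mechanism, not the full LLaMA-Adapter method (which also uses a zero-initialized attention gate, among other details); all names and shapes here are made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_prefix(q, k, v, prefix_k, prefix_v):
    """Scaled dot-product attention with a tunable prefix.

    prefix_k / prefix_v are the trainable parameters; q, k, v
    come from the frozen pretrained model.
    """
    # Prepend the learnable prefix to the frozen keys/values,
    # so queries can also attend to the prefix positions.
    k = np.concatenate([prefix_k, k], axis=0)
    v = np.concatenate([prefix_v, v], axis=0)
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)   # (T_q, P + T_k)
    return softmax(scores) @ v      # (T_q, d)

# Toy shapes: sequence length 4, prefix length 2, head dim 8.
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k = rng.standard_normal((4, 8))
v = rng.standard_normal((4, 8))
prefix_k = rng.standard_normal((2, 8))  # trainable
prefix_v = rng.standard_normal((2, 8))  # trainable
out = attention_with_prefix(q, k, v, prefix_k, prefix_v)
assert out.shape == (4, 8)  # output shape is unchanged by the prefix
```

During finetuning, only `prefix_k` and `prefix_v` (per layer) would receive gradients, which is what makes the approach parameter-efficient.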