AI Dynamics

Global AI News Aggregator

Llama 3.1 70B Recipe Adapted with LoRA Optimization

I adapted this recipe to Llama 3.1 70B using @failspy's Meta-Llama-3-70B-Instruct-abliterated-v3.5 and optimized the LoRA rank. GGUF quants are ready, and there shouldn't be any issue with the tokenizer. Special thanks to him and to grimjim for the model and this technique.
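For readers unfamiliar with what "optimizing the LoRA rank" means, here is a minimal numerical sketch of the LoRA idea: a frozen weight matrix W is augmented by a trainable low-rank update B @ A, and the rank r of that update is the hyperparameter being tuned. All shapes, values, and the alpha = 2r scaling choice below are illustrative assumptions, not settings from the post.

```python
import numpy as np

# Sketch of a LoRA update: W' = W + (alpha / r) * B @ A.
# The rank r controls the capacity (and size) of the adapter.
d, k, r = 8, 8, 2          # hypothetical layer dims and a small rank
alpha = 2 * r              # common scaling convention: alpha = 2r

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))   # frozen base weight (not trained)
A = rng.standard_normal((r, k))   # trainable down-projection
B = np.zeros((d, r))              # trainable up-projection, zero-initialized

delta = (alpha / r) * B @ A       # low-rank update; zero at initialization
W_adapted = W + delta

print(np.allclose(W_adapted, W))  # → True: zero-init B leaves W unchanged
```

Because B starts at zero, the adapter initially leaves the base model's behavior untouched; training then moves only A and B, which is far cheaper than updating the full 70B weights.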

→ View original post on X — @maximelabonne
