I adapted this recipe to Llama 3.1 70B using @failspy's Meta-Llama-3-70B-Instruct-abliterated-v3.5 and optimized the LoRA rank. GGUF quants are ready, and there shouldn't be any issues with the tokenizer. Special thanks to him and to grimjim for the model and this technique.
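For readers weighing a similar rank choice: the LoRA rank trades adapter capacity against parameter count. A minimal sketch of the arithmetic, assuming square Llama-style projection weights (the 8192 hidden dimension below is illustrative, not taken from this recipe):

```python
def lora_extra_params(d_in: int, d_out: int, rank: int) -> int:
    # LoRA replaces a full-rank weight update with two low-rank factors:
    # A (rank x d_in) and B (d_out x rank), so the added parameter count
    # per adapted matrix is rank * (d_in + d_out).
    return rank * (d_in + d_out)

# Illustrative numbers for one 8192x8192 attention projection:
print(lora_extra_params(8192, 8192, 16))  # rank 16 -> 262144 params
print(lora_extra_params(8192, 8192, 64))  # rank 64 -> 1048576 params
```

Doubling the rank doubles the adapter size per matrix, which is why rank tuning matters on a 70B model where many projections are adapted at once.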
Llama 3.1 70B Recipe Adapted with LoRA Optimization