AI Dynamics

Global AI News Aggregator

5x Faster Whisper Fine-tuning with LoRA and PEFT

After 70x faster Whisper, we present to you: 5x faster Whisper fine-tuning! Powered by LoRA and PEFT: squeeze in 5x larger batch sizes and fit the Whisper-large checkpoint in under 8 GB of VRAM. Best part? Almost no degradation in WER! Check it out: https://github.com/Vaibhavs10/fast-whisper-finetuning
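The memory savings come from the LoRA idea itself: the base weights stay frozen, and only a low-rank update (scaled by alpha/r) is trained, so the trainable parameter count collapses from `out_dim * in_dim` to `r * (in_dim + out_dim)`. The repo above uses Hugging Face PEFT for this; as a rough illustration of the underlying mechanism (not the repo's actual code), a minimal NumPy sketch with hypothetical dimensions might look like:

```python
import numpy as np

class LoRALinear:
    """A frozen dense layer W plus a trainable low-rank update (alpha/r) * B @ A."""

    def __init__(self, in_dim, out_dim, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight: never updated during fine-tuning.
        self.W = rng.standard_normal((out_dim, in_dim)) * 0.02
        # Trainable low-rank factors; B starts at zero so the adapted layer
        # initially behaves exactly like the frozen base layer.
        self.A = rng.standard_normal((r, in_dim)) * 0.01
        self.B = np.zeros((out_dim, r))
        self.scale = alpha / r

    def forward(self, x):
        # Base projection plus the scaled low-rank correction.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

    def trainable_params(self):
        # Only A and B are trained: r*(in_dim + out_dim) parameters.
        return self.A.size + self.B.size

layer = LoRALinear(in_dim=64, out_dim=32, r=4)
print(layer.trainable_params())          # 4*64 + 32*4 = 384 vs. 2048 for full W
```

With realistic transformer dimensions (e.g. 1280-wide attention projections in Whisper-large) and a small rank, the trainable fraction drops to a few percent, which is what makes 5x larger batches and sub-8GB fine-tuning plausible.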

→ View original post on X by @reach_vb
