AI Dynamics

Global AI News Aggregator

PEFT enables Whisper large fine-tuning on consumer GPUs efficiently

A full fine-tuning run of the Whisper large model throws an OOM error on a @GoogleColab T4 GPU. With PEFT, we can not only fine-tune the Whisper large checkpoint but also squeeze in a batch size of 24 in under 8GB of VRAM on a consumer GPU.
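A rough back-of-the-envelope sketch of why a PEFT method like LoRA shrinks the memory footprint so much: only small low-rank adapter matrices are trained, so the frozen base weights need no gradients or Adam optimizer states. The dimensions below are Whisper large's actual hidden size and layer counts; the rank and target modules are typical LoRA choices assumed for illustration, not the exact settings from the original post.

```python
# Whisper large architecture constants.
D_MODEL = 1280          # hidden size of Whisper large
N_LAYERS = 32 + 32      # encoder blocks + decoder blocks

# Assumed LoRA hyperparameter (a common default, not from the post).
RANK = 8

# Full fine-tuning of one attention projection trains a d x d weight matrix.
full_per_proj = D_MODEL * D_MODEL          # 1,638,400 params per projection

# LoRA instead trains two low-rank factors, A (d x r) and B (r x d).
lora_per_proj = RANK * D_MODEL * 2         # 20,480 params per projection

# Targeting q_proj and v_proj in every layer (a common LoRA recipe):
full_trainable = full_per_proj * 2 * N_LAYERS
lora_trainable = lora_per_proj * 2 * N_LAYERS

print(f"full fine-tuning: {full_trainable:,} trainable params")
print(f"LoRA adapters:    {lora_trainable:,} trainable params")
print(f"ratio:            {lora_trainable / full_trainable:.3%}")
```

With roughly 1% of the parameters trainable, the gradient and optimizer-state memory (which for Adam is several times the trainable parameter count) collapses accordingly, which is what leaves room for a batch size of 24 on a consumer GPU.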

→ View original post on X — @reach_vb
