Whisper JAX is incredibly fast – give it a shot! It works quite well! Happy to help tune it to fit your needs!
@reach_vb
-
Hugging Face Offers to Open Source SingSong Model
By
–
Hey hey @chrisdonahuey – We at Hugging Face are huge fans of SingSong! Would love to collaborate and perhaps open source the weights on the Hub or create a demo for everyone to play along? Happy to help!
-
Hugging Face Transformers Reaches 100,000 GitHub Stars
By
–
100,000 stars on GitHub!!
— Vaibhav (VB) Srivastav (@reach_vb) May 17, 2023
What an amazing feat! Kudos to all the contributors, open-source organisations, and researchers who continue to help push the boundaries collectively!
Psyched to be a part of the journey that takes us to 1M!
https://github.com/huggingface/transformers
-
Is this transformers library the same as Hugging Face?
By
–
Is this transformers the same as: https://github.com/huggingface/transformers
-
5x Faster Performance Through Larger Batch Size Optimization
By
–
Good question: It's 5x faster because we're able to fit 5x larger batch sizes!
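A quick back-of-envelope sketch of why fitting a larger batch translates almost directly into higher throughput — the timing numbers here are made up for illustration and are not from the actual Whisper JAX benchmark:

```python
# Hedged illustration: if each forward pass has a roughly fixed launch/dispatch
# overhead and the GPU is under-utilised at batch size 1, then packing more
# examples into one pass amortises that overhead across the whole batch.

def throughput(batch_size, overhead_s=0.05, per_example_s=0.001):
    """Examples processed per second for one batched pass.
    overhead_s and per_example_s are assumed, illustrative numbers."""
    step_time = overhead_s + batch_size * per_example_s
    return batch_size / step_time

small = throughput(1)
large = throughput(5)
print(f"speedup from a 5x larger batch: {large / small:.2f}x")
```

With overhead dominating, a 5x batch gives close to a 5x speedup; in practice the gain tapers off once the GPU saturates.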
-
Fine-tuning Whisper now possible on consumer GPUs
By
–
Hahaha! I'll rephrase a bit: "Okay, so now I can have a Whisper fine-tuned on a consumer GPU"
-
Fine-tuning Whisper with Common Voice 13 Guide
By
–
Last but not least! This guide also walks you through fine-tuning a Whisper checkpoint on Common Voice 13!
-
PEFT Enables Sub-60MB Model Checkpoints for Portability
By
–
It gets better: thanks to the magic of PEFT, the resulting checkpoints are less than 60MB in size, making a strong case for model-weight portability! Learn more about it here:
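A rough parameter count shows why a PEFT (LoRA-style) checkpoint can be that small: only the low-rank adapter factors are saved, not the full model. The rank, target projections, and fp16 storage below are assumptions for illustration:

```python
# Hedged back-of-envelope for a LoRA adapter on Whisper large
# (hyperparameters are assumptions, not confirmed values).

d_model = 1280            # Whisper large hidden size
enc_layers = dec_layers = 32
rank = 32                 # assumed LoRA rank

# Assume LoRA is applied to q_proj and v_proj (a common choice):
#   encoder self-attention:  2 adapted matrices per layer
#   decoder self- + cross-attention: 4 adapted matrices per layer
adapted = enc_layers * 2 + dec_layers * 4          # 192 matrices
lora_params = adapted * 2 * rank * d_model         # two low-rank factors each

size_mb = lora_params * 2 / 1e6                    # fp16 = 2 bytes/param
print(f"{lora_params:,} adapter params ≈ {size_mb:.0f} MB in fp16")
```

That is roughly 1% of Whisper large's ~1.5B parameters, which is why the saved adapter stays well under the multi-gigabyte size of a full checkpoint.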
-
PEFT enables Whisper large fine-tuning on consumer GPUs efficiently
By
–
For a full fine-tuning run on a @GoogleColab T4 GPU, the Whisper large model throws an OOM. With PEFT, we can not only fine-tune the Whisper large checkpoint but also squeeze in a batch size of 24 in under 8GB of VRAM on a consumer GPU.
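A hedged memory estimate makes the OOM-vs-fits contrast concrete — all numbers below are rough assumptions (fp32 Adam for full fine-tuning, an 8-bit frozen base model plus a ~15.7M-parameter adapter for PEFT), not measured values:

```python
# Why full fine-tuning OOMs on a 16 GB T4 while PEFT fits in < 8 GB.
GB = 1e9
full_params = 1.55e9          # approx. Whisper large parameter count

# Full fine-tuning with Adam in fp32:
# weights + gradients + two optimizer moments, 4 bytes each.
full_ft_gb = full_params * 4 * (1 + 1 + 2) / GB   # ≈ 24.8 GB -> OOM on 16 GB

# PEFT: 8-bit frozen base (1 byte/param) plus fp32 adapter weights,
# gradients, and Adam moments for only the ~15.7M trainable params.
adapter_params = 15_728_640
peft_gb = (full_params * 1 + adapter_params * 4 * 4) / GB   # ≈ 1.8 GB

print(f"full fine-tune ≈ {full_ft_gb:.1f} GB, PEFT ≈ {peft_gb:.1f} GB")
```

Even after adding activation memory for a batch of 24, the PEFT setup stays comfortably inside an 8GB budget.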