Whisper Small Model Achieves 8.56 WER for Catalan Speech

Catalan: @jordimash's fine-tuned Whisper small checkpoint produced an astounding WER of 8.56, compared to 23.8 by the large checkpoint! Model: https://huggingface.co/softcatala/whisper-small-ca Space: https://huggingface.co/spaces/softcatala/whisper-demo-catalan
@reach_vb
-
Lithuanian Whisper Model Achieves Major WER Improvement

Lithuanian: @DeividasMat's fine-tuned Whisper medium checkpoint managed to get the WER down from 35.2 (large) to 20.44. Model:
-
Whisper Fine-tuning Event Results with LambdaAPI

10 days into the Whisper fine-tuning event powered by @LambdaAPI and we are well on our descent into WER land! Here are a select few examples of fine-tuned models and how they compare with Whisper's zero-shot performance!
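The scores quoted throughout this thread are word error rates (WER): the word-level edit distance between a reference transcript and the model's hypothesis, divided by the number of reference words. The event presumably used standard metric tooling such as the `evaluate` library; the sketch below is just a minimal illustration of the metric itself:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / # reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

print(round(100 * wer("the cat sat on the mat", "the cat sat on mat"), 2))  # → 16.67
```

A score of 8.56 therefore means roughly 8–9 word-level errors per 100 reference words.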
-
Fine-tuned Models Testing on Public Datasets
Interesting. What kind of data are you inferring on? Is there some publicly available data that you’re testing on? Would love to test out some of our fine-tuned models against it.
-
Whisper Model 8-bit Loading: Memory-Efficient Inference

Like all models in the transformers library, all Whisper checkpoints can be loaded in a memory-efficient way! With load_in_8bit=True you can load the model in 8-bit precision. P.S. You can load a Whisper-large model in under 6.6 GB of VRAM.
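A minimal sketch of the 8-bit load. The checkpoint name is just an example, and `load_in_8bit` relies on the optional `bitsandbytes` package plus a CUDA GPU, so the actual load is guarded here:

```python
import importlib.util

# Arguments that ask transformers to quantize linear-layer weights to int8
# (via bitsandbytes) and dispatch layers across the available devices.
EIGHT_BIT_KWARGS = {"load_in_8bit": True, "device_map": "auto"}

# Only attempt the real load when the optional dependencies and a GPU exist.
if importlib.util.find_spec("bitsandbytes") and importlib.util.find_spec("torch"):
    import torch

    if torch.cuda.is_available():
        from transformers import WhisperForConditionalGeneration

        model = WhisperForConditionalGeneration.from_pretrained(
            "openai/whisper-large-v2", **EIGHT_BIT_KWARGS
        )
```

The memory saving comes from storing weights as int8 instead of float32/float16; compute still happens in higher precision inside the quantized linear layers.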
-
Whisper: Transcribe Audio and YouTube Videos with AI

Transcribe audio and YouTube videos with any pre-trained or fine-tuned Whisper checkpoint on the Hub! All you need to do is duplicate this Space and update your checkpoint name and language! https://huggingface.co/spaces/whisper-event/whisper-demo
-
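Under the hood, a demo like this typically wraps the transformers ASR pipeline. A minimal sketch, where the checkpoint name and audio path are placeholders and the call is guarded so it only runs when transformers and a local audio file are actually present:

```python
import importlib.util
import os

TASK = "automatic-speech-recognition"  # the pipeline task Whisper checkpoints serve
AUDIO = "sample.mp3"                   # placeholder path to a local audio file

if importlib.util.find_spec("transformers") and os.path.exists(AUDIO):
    from transformers import pipeline

    # Swap in any pre-trained or fine-tuned Whisper checkpoint from the Hub.
    asr = pipeline(TASK, model="openai/whisper-small")
    print(asr(AUDIO)["text"])
```

Pointing the pipeline at a fine-tuned checkpoint (e.g. one of the models from this event) is the only change needed to transcribe in that model's language.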
Whisper Models Hub: Demos and Scripts Showcase
With great models, come excellent capabilities! With the influx of many brilliant models on the hub, we created some demos and scripts to showcase what the Whisper models are capable of! Here's a thread of all the cool stuff you can do with Whisper models on the Hub
-
LambdaAPI Giveaway: $330 Cloud Credits for Whisper Fine-tuning

Last but not least! @LambdaAPI will give away $330 in cloud credits to 3 participants of the event. Anyone who shares their journey fine-tuning the Whisper model on Twitter and tags @huggingface & @LambdaAPI is eligible. $330 is enough for 300 hours of 1x A100 compute.
-
Whisper Large Czech Fine-tuned Model Achieves 10.278 WER
Whisper large checkpoint fine-tuned for Czech with a WER of 10.278 by @miker