That’s great, have you heard about http://hugging.chat
@reach_vb
-
Hugging Chat: Exploring Open Source AI Conversational Platform
-
Two Open Text-to-Speech Models Challenge ElevenLabs Monthly
That's two open Text to Speech models in the same month that are in the same arena as ElevenLabs!
— Vaibhav (VB) Srivastav (@reach_vb) October 30, 2024
Slowly, then suddenly! https://t.co/vvYkTFnGHq pic.twitter.com/0IMRP8KDRq
-
MaskGCT Open Source Text-to-Speech Model Achieves New SoTA
Fuck yeah! MaskGCT – New open SoTA Text to Speech model! 🔥
— Vaibhav (VB) Srivastav (@reach_vb) October 30, 2024
> Zero-shot voice cloning
> Emotional TTS
> Trained on 100K hours of data
> Long form synthesis
> Variable speed synthesis
> Bilingual – Chinese & English
> Available on Hugging Face
Fully non-autoregressive… pic.twitter.com/CAUX6cTiAG
-
Hugging Face Hub acquisition: legendary team and tool integration
Has to go down in the history books! What a legendary team and acquisition! Coincidentally, outside of transformers, my real entry point to the HF Hub was through Spaces and Gradio. Still remember the feeling of putting together a nicely packaged demo in less than 7 lines of code.
-
Aymeric Roucher Could Revolutionize the LLM Landscape
You don't know it yet, but @AymericRoucher might single-handedly change the LLM landscape forever!
-
Hugging Face Race: Which AI Org Hits 10K Followers First?
Which Hugging Face Org would be the first to reach 10K followers?
> @AIatMeta at 6815
> @StabilityAI at 5378
> @Microsoft at 4302
> @GoogleAI at 3867
Let the battle begin!
-
Documentation importance and developer diversity in software development
I disagree and that’s a bad take IMO – some of the great devs that I know and look up to are from India. And besides, updating docs is quite important for code/library adoption. FWIW – I update docs for libs that I care about, and for our internal libs too.
-
Meta Layer Skip Enables 200% Faster Transformer Inference
Meta presents Layer Skip – up to 200% faster inference
Training:
> Applies layer dropout: low rates for early layers, high rates for later layers
> Uses early exit loss with a shared exit for all transformer layers
Inference:
> Increases early exit accuracy without auxiliary layers
>
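The depth-dependent schedule in the tweet — low skip rates for early layers, high rates for later ones — can be sketched as a small helper. This is a toy illustration: the function name and the exponential ramp are assumptions for clarity, not Meta's exact curve from the Layer Skip paper:

```python
import math

def layer_dropout_rates(num_layers: int, max_rate: float = 0.5) -> list[float]:
    """Depth-dependent layer-dropout schedule.

    The skip probability grows with layer index, so early layers are
    almost never dropped and the final layer is dropped at max_rate.
    The exponential ramp is an illustrative choice.
    """
    if num_layers == 1:
        return [max_rate]
    return [
        max_rate * (math.exp(i / (num_layers - 1)) - 1) / (math.e - 1)
        for i in range(num_layers)
    ]
```

During training, each layer i would be skipped with probability `rates[i]`, which is what teaches the model to produce usable predictions from early exits at inference time.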