Sora release looks imminent; also, a minute of silence for MKBHD for breaking the embargo
@reach_vb
-
Llama 3.3 70B Trending on Hugging Face Hub Community
The community seems to be loving it! Llama 3.3 70B by @AIatMeta trending #2 on the Hub
-
Meta Authentication Issues and Geographic Filtering Solutions
Hey hey! Damn, that looks rough – Meta has its own process for running authentication; HF doesn't have any control over that. I'm not sure why they would reject it, though. I suspect it's because of some form of geographical filter. Could you open a discussion on the repo and link it here?
-
Microsoft TRELLIS 3D Generation Model for Free Asset Creation
This is RIDICULOUSLY good, TRELLIS 3D Generation model by Microsoft! 🔥
Generate high-quality 3D assets from text or image prompts. Supports various formats like Radiance Fields, 3D Gaussians, and meshes.
Available for FREE on Hugging Face! pic.twitter.com/V9VHdiAe8l
— Vaibhav (VB) Srivastav (@reach_vb) December 8, 2024
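For reference, running TRELLIS locally looks roughly like the sketch below. It is a minimal adaptation of the image-to-3D example in the microsoft/TRELLIS GitHub repository (the `trellis` package is installed from that repo, not from PyPI); the checkpoint id, module paths, and output keys are recalled from that README and may have changed since.

```python
# Minimal TRELLIS image-to-3D sketch, adapted from the microsoft/TRELLIS
# README. Assumes the `trellis` package is installed from the GitHub repo
# and a CUDA GPU is available; names below may differ in newer releases.
from PIL import Image
from trellis.pipelines import TrellisImageTo3DPipeline
from trellis.utils import postprocessing_utils

# Load the pretrained pipeline from the Hugging Face Hub.
pipeline = TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
pipeline.cuda()

# Generate 3D assets from a single image prompt.
image = Image.open("example.png")
outputs = pipeline.run(image, seed=1)

# The pipeline returns the formats mentioned in the tweet:
# outputs["gaussian"]       -> 3D Gaussians
# outputs["radiance_field"] -> Radiance Fields
# outputs["mesh"]           -> meshes

# Bake Gaussians + mesh into a textured GLB for standard 3D tools.
glb = postprocessing_utils.to_glb(
    outputs["gaussian"][0],
    outputs["mesh"][0],
    simplify=0.95,      # drop 95% of triangles
    texture_size=1024,  # baked texture resolution
)
glb.export("sample.glb")
```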
-
Open Release Strategy for Flash 8B Model Discussed
Would y'all consider an open release for Flash 8B once the new generation kicks in?
-
Llama Models Permissiveness and User Base Restrictions
FWIW – I do think Llama models are quite permissive; the restrictions only apply above a 1M user base. By the time you hit that ceiling, you'd probably have a much, much stronger model already 🙂
-
OuteTTS and LLM Direction in Text-to-Speech Technology
Good call – I quite liked OuteTTS recently; I think the LLM direction is still a bit slept on.
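To make the "LLM direction" concrete: models like OuteTTS treat speech synthesis as plain next-token prediction – the text goes into a causal LM, which autoregressively emits audio-codec tokens that a neural codec decoder turns back into a waveform. The sketch below shows only that core idea via transformers; the checkpoint id is the published OuteTTS one as I recall it, and the real pipeline wraps the text in speaker/audio special tokens and decodes the generated ids with a codec (WavTokenizer), both omitted here.

```python
# Conceptual sketch of LLM-based TTS: a causal language model generates
# audio-codec token ids conditioned on text. The checkpoint id and bare
# prompt are assumptions; the official OuteTTS interface handles speaker
# profiles, special tokens, and codec decoding for you.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OuteAI/OuteTTS-0.2-500M"  # assumed Hub checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Real usage wraps the text with speaker and audio boundary tokens;
# a plain string is used here only to show the next-token-prediction core.
inputs = tokenizer("hello world", return_tensors="pt")
generated = model.generate(
    **inputs,
    max_new_tokens=512,  # audio tokens covering a few seconds of speech
    do_sample=True,
    temperature=0.4,
)

# In the full pipeline, the generated ids map to neural-codec codes that
# a decoder (WavTokenizer for OuteTTS) converts into audio samples.
print(generated.shape)
```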
-
Llama Stack Building Open Ecosystem and Business Opportunities
Quite bullish on the whole Llama Stack – even if 50% of this wishlist is completed, we'd see a huge open ecosystem and businesses built around it.
-
Pre-training Data Diversity Drives Major AI Model Improvements
I think a lot depends on how much pre-training data went in and how diverse it was – we made a massive jump from the 2023 models to the 2024 ones. I'm quite sure the jump from 2024 to 2025 will be quite significant too!