nothing to announce there yet, but the latest models available in our API are pretty good! check them out and let us know what you think.
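For anyone who wants to take that up, here is a minimal sketch of querying a model through the API using the openai Python SDK as it existed in late 2022 (the pre-1.0 Completion interface); the model name and prompt are illustrative assumptions, not ones named in the tweet.

```python
# Minimal sketch: querying a model through the OpenAI API with the
# pre-1.0 Python SDK current in late 2022. Model name and prompt are
# illustrative placeholders, not recommendations from the tweet.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-002",  # one of the models available at the time
    prompt="Summarize why U-shaped scaling matters in one sentence.",
    max_tokens=64,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```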
INNOVATION
-
AGI focus: unlocking massive value for startup ecosystem
we will remain very focused on AGI, but we think there will be a _gigantic_ amount of value unlocked for the world along the way. we want to enable startups to go after it.
-
OpenAI Announces New AI Company Building Program and Q&A
have questions about building an AI-powered company or our new program (https://openai.fund/news/introducing-converge…)? ask me and @bradlightcap anything!
-
Inma Looks Forward to AgroTech Event in El Ejido, Awaited Since March
The event I have been waiting for since March, when I created the AI in Agro project at @gpai, wanting to meet everyone who makes AgroTech a reality in Spain. I hope to see you in El Ejido.
-
Groq invites attendees to Supercomputing conference in Dallas
We're excited to see y'all in Dallas, TX (and let's be honest, for the BBQ) at @Supercomputing! Stop by booth 3047 and learn more about us: http://groq.com/sc22 #HPC #AI #SuperComputing #HPCaccelerates #SCinet #SCInclusivity #SC22
-
Inverse Scaling Prize Round 2 Evaluation Announcement
Make sure to check out the inverse scaling prize, which is a great community effort! Looking forward to evaluating on the Round 2 winners 🙂
-
U-shaped Scaling Behavior Emerges at Higher Computational Budgets
Our results first confirm the inverse scaling behavior seen in prior models trained with up to 500 zettaFLOPs of compute. But at 2K zettaFLOPs, the trend becomes U-shaped. U-shaped scaling has also been observed in prior work, such as BIG-Bench.
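To make the distinction concrete, here is a small illustrative sketch (with made-up numbers, not data from the paper) of how one might label a scaling curve as increasing, inverse, or U-shaped from a handful of (compute, accuracy) points.

```python
# Illustrative sketch with hypothetical numbers (not results from the paper):
# classify a scaling curve by locating the minimum of accuracy as compute grows.

def curve_shape(points):
    """points: list of (compute, accuracy) pairs, assumed sorted by compute."""
    accs = [acc for _, acc in points]
    lowest = accs.index(min(accs))
    if lowest == 0:
        return "increasing"   # standard scaling: accuracy only goes up
    if lowest == len(accs) - 1:
        return "inverse"      # performance keeps dropping with scale
    return "U-shaped"         # drops first, then recovers at larger scale

# Hypothetical curve: accuracy falls up to ~500 zettaFLOPs, recovers at 2K.
points = [(10, 0.62), (100, 0.55), (500, 0.48), (2000, 0.66)]
print(curve_shape(points))  # -> "U-shaped"
```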
-
Inverse Scaling Becomes U-Shaped with Larger Language Models
New preprint! By evaluating 5x larger language models, inverse scaling can become “U-shaped scaling”, which means that performance increases sharply after decreasing. https://arxiv.org/abs/2211.02011 These two tasks here are Third Prize winners from the Inverse Scaling Prize.
— Jason Wei (@_jasonwei) November 4, 2022
-
Meta AI Creates Largest Protein Language Model with 15B Parameters
Meta AI researchers trained a language model to fill in protein sequence gaps across millions of diverse proteins & scaled up to 15B parameters, creating the largest language model of proteins to date. More on our latest breakthrough in protein folding: https://bit.ly/3WoWcK2
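The gap-filling described above is masked-token prediction over amino-acid sequences. As a hedged sketch, assuming the ESM-2 checkpoints Meta released on Hugging Face (the 15B model is published there as facebook/esm2_t48_15B_UR50D), a smaller sibling checkpoint can fill one masked residue like this:

```python
# Hedged sketch: masked-residue prediction with an ESM-2 checkpoint via the
# Hugging Face fill-mask pipeline. The checkpoint name and toy sequence are
# assumptions for illustration; the 15B model is far heavier to load.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="facebook/esm2_t33_650M_UR50D")

# A toy protein fragment with one unknown residue marked by the mask token.
sequence = "MKTAYIAKQRQISFVK<mask>HFSRQLEERLGLIEVQ"

for candidate in unmasker(sequence, top_k=5):
    print(candidate["token_str"], round(candidate["score"], 3))
```
-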
Replicate Increases Default Rate Limits for API Predictions
We've also increased our default rate limits. You can create 10 predictions a second, bursting up to 600 predictions a second. https://replicate.com/docs/reference/http#rate-limits… We can support higher rates too – just email us: team@replicate.com
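Those defaults map naturally onto a client-side token bucket. A minimal sketch, my own construction rather than anything from Replicate's docs, reading "bursting up to 600" as a bucket of capacity 600 refilled at 10 tokens per second (an assumption):

```python
# Hedged sketch of client-side throttling against the stated defaults:
# a steady 10 requests/s with bursts up to 600. The capacity-600 /
# refill-10 token bucket is one reading of "burst", not Replicate's spec.
import time

class TokenBucket:
    def __init__(self, rate: float = 10.0, capacity: int = 600):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def acquire(self) -> None:
        """Block until one request token is available, then consume it."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return
            time.sleep((1.0 - self.tokens) / self.rate)

bucket = TokenBucket()
for i in range(5):
    bucket.acquire()
    print(f"request {i} allowed")  # replace with the actual prediction call
```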