We couldn’t host Replit Developer Day without giving developers an opportunity to build. After our keynote event, we hosted builders in our SF office, and as always, the demos were incredible! Here’s a sampling of what they built.
-
Andrew Gao Creates AI Mockumentary App with Cloned Voice
@itsandrewgao is known for his creative AI projects on Replit, like http://biblegpt.org, but his hack night project was some of his best work yet. He digitally cloned David Attenborough’s voice and created an app that makes a mockumentary with a single prompt.
— Replit ⠕ (@Replit) April 28, 2023
-
Senior Developers Fixing Junior Developer Production Bug
Senior developers fixing a bug that a junior developer pushed to production.
— Replit ⠕ (@Replit) April 28, 2023
-
Fast-Trained AI Model Achieves Strong Benchmarks, 7B Model Incoming
The important piece of info to remember here is that we trained this model in under 10 days. The training run was even nicknamed “YOLO RUN” internally. We’re excited about these benchmarks, but we can do even better. We’ve already started work to train a 7B model.
-
Code Model Demonstrates Surprising Non-Coding Reasoning Capabilities
We also noticed a surprising capability in non-coding reasoning, despite the model being trained entirely on code. We benchmarked against models trained for reasoning tasks and replit-code-v1-3b performed incredibly well.
-
Replit Fine-Tuned Model Outperforms Codex with Better Efficiency
Both models also benchmark impressively well against commercial models. replit-finetuned-v1-3b is by far the smallest model on the table and it outperformed Codex and LLaMA. PaLM-Coder is 200x larger and we’re closing in on their performance with much better latency.
-
Replit Open-Sources Complete Code Model 2.7B
At #ReplitDevDay, we announced we’ve trained and are open-sourcing our first Complete Code model. Introducing replit-code-v1-3b:
– 2.7B params
– 20 languages
– 525B tokens
– 40% better than comparable models
– Trained in 10 days
Take a look at the benchmarks yourself.
-
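As a quick sanity check on the announced numbers, the token-to-parameter ratio works out as below. The comparison to the “Chinchilla-optimal” heuristic of roughly 20 tokens per parameter is context added here, not a claim from the announcement.

```python
# Arithmetic on the announced figures: 525B training tokens for a
# 2.7B-parameter model. The ~20 tokens/param reference point is the
# commonly cited Chinchilla heuristic (an added comparison, not from
# the post itself).
params_b = 2.7     # parameters, in billions
tokens_b = 525.0   # training tokens, in billions

ratio = tokens_b / params_b
print(f"{ratio:.0f} tokens per parameter")            # ~194
print(f"{ratio / 20:.1f}x the Chinchilla heuristic")  # ~9.7x
```

In other words, the model is trained far beyond the compute-optimal token budget for its size, which is a common choice when optimizing for inference cost rather than training cost.
-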
Replit Code Models Outperform Larger Open Source Competitors
replit-code-v1-3b and replit-finetuned-v1-3b were trained entirely on code and were intended for single-line code completion. We didn’t expect either to perform so well on HumanEval, but they did. replit-finetuned-v1-3b outperformed all open-source code models, even those 5x its size.
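For context on how such scores are typically computed: HumanEval results are conventionally reported as pass@k, estimated per problem from n generated samples of which c pass the unit tests. A minimal sketch of the standard unbiased estimator (this is the metric's usual definition from the HumanEval benchmark, not something specified in this post):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of
    k samples drawn without replacement from n generations is correct,
    given that c of the n generations pass the unit tests."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# 1 correct generation out of 10 gives pass@1 = 0.1
print(round(pass_at_k(10, 1, 1), 6))  # 0.1
```

The final score is the mean of this quantity over all problems in the benchmark; pass@1 with greedy decoding is the number usually quoted in model comparisons like the one above.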
-
Replit Launches Major Platform Upgrades at ReplitDevDay
Yesterday, we launched a flurry of the most significant upgrades ever to come to Replit. If you couldn't tune in to #ReplitDevDay, you're in luck: we just released the keynote. Watch until the end to see our most exciting announcement.
-
Replit AI Hack Night Demos Underway: Live Updates
We're going on hour two of demos at the Replit AI hack night.