AI Dynamics

Global AI News Aggregator

DeepSeek R1 and v3: Impact on LLM Training Data

What do DeepSeek R1 & v3 mean for LLM data? Contrary to some lazy takes I’ve seen, DeepSeek R1 was trained on a shit ton of human-generated data—in fact, the DeepSeek models are setting records for the disclosed amount of post-training data for open-source models: – 600,000 […]

→ View original post on X — @alexandr_wang
