llm.c GPT-3 (124M) Training on FineWeb Surpasses Expected HellaSwag Performance

The example here is the llm.c GPT-3 (124M) training run on FineWeb (figure cropped at 250B tokens). We seem to surpass the GPT-3 HellaSwag score (green line) at ~150B tokens, whereas the paper would have predicted crossing it at around 300B tokens. I will re-run with FineWeb-Edu. I do want to be a bit careful about drawing conclusions.

→ View the original post on X (@karpathy)
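
For readers unfamiliar with how such HellaSwag numbers are produced: the benchmark is typically scored by having the model rate four candidate endings for a context and picking the one with the lowest length-normalized loss, the rule used in Karpathy's nanoGPT/llm.c evaluation scripts. Below is a minimal sketch of that scoring rule; the Hugging Face GPT-2 checkpoint and the sample item are stand-ins for illustration, not the actual 124M checkpoint or eval harness from the run described above.

```python
# Sketch of HellaSwag-style multiple-choice scoring: for each candidate
# ending, compute the average per-token cross-entropy conditioned on the
# context, then pick the lowest-loss ending. The "gpt2" checkpoint and the
# example item below are illustrative stand-ins, not the run's artifacts.
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2TokenizerFast.from_pretrained("gpt2")

@torch.no_grad()
def ending_loss(context: str, ending: str) -> float:
    """Average cross-entropy (nats/token) over the ending tokens only."""
    ctx_ids = tok.encode(context)
    end_ids = tok.encode(ending)
    ids = torch.tensor([ctx_ids + end_ids])
    logits = model(ids).logits               # shape (1, T, vocab)
    # Logits at position t predict token t+1, so the slice below covers
    # exactly the positions that predict the ending tokens.
    pred = logits[0, len(ctx_ids) - 1 : -1]
    return F.cross_entropy(pred, torch.tensor(end_ids)).item()

# Illustrative HellaSwag-style item: four endings, lowest loss wins.
context = "A man is sitting on a roof. He"
endings = [
    " is using wrap to wrap a pair of skis.",
    " is ripping level tiles off.",
    " is holding a rubik's cube.",
    " starts pulling up roofing on a roof.",
]
losses = [ending_loss(context, e) for e in endings]
print("model picks ending:", losses.index(min(losses)))
```

Averaging the loss over the ending tokens, rather than summing it, keeps shorter endings from being favored simply for containing fewer tokens.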
