AI Dynamics

Global AI News Aggregator

LLaMa2-70B Training Costs: Computing Efficiency Analysis

Calculations:
LLaMa2-70B was trained on 2T tokens using 3.3M hours of A100 GPU time. At a hardware FLOPs utilization (HFU) of ~60%, training took ~4.4e24 FLOPs (3.3M GPU-hours × 3600 s/h × 624 TFLOPS bfloat16 peak × 60% HFU).
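The arithmetic above can be sketched in a few lines. The figures come from the post itself; note that 624 TFLOPS is the A100's peak bf16 throughput with sparsity (dense peak is 312 TFLOPS), so this is an estimate under that assumption.

```python
# FLOPs estimate for LLaMa2-70B training, following the post's calculation.
gpu_hours = 3.3e6            # reported A100 GPU-hours
peak_bf16_flops = 624e12     # FLOP/s per A100 (bf16 peak with sparsity, per the post)
hfu = 0.60                   # assumed hardware FLOPs utilization

seconds = gpu_hours * 3600
total_flops = seconds * peak_bf16_flops * hfu
print(f"{total_flops:.2e} FLOPs")  # ~4.45e+24, matching the ~4.4e24 figure
```

Swapping in the dense bf16 peak (312 TFLOPS) would halve the estimate, which is why the choice of peak throughput matters when comparing HFU numbers across reports.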

→ View original post on X: @soumithchintala
