AI Dynamics

Global AI News Aggregator

Accelerate Model Training in Colab with the TPU Runtime and steps_per_execution

If you're using Colab and training your model on a GPU feels slow, switch to the TPU runtime and tune the `steps_per_execution` parameter in `model.compile()`. A higher value means more work is done on the device before control returns to the host, which can often yield a 4-5x speedup.
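As a minimal sketch of the tip, here is how `steps_per_execution` is passed to `Model.compile()` in Keras (available since TensorFlow 2.4). The toy model and data below are hypothetical stand-ins; on an actual Colab TPU runtime you would additionally build the model inside a `tf.distribute.TPUStrategy` scope.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Toy data and model (hypothetical stand-ins for your own).
x = np.random.rand(1024, 32).astype("float32")
y = np.random.randint(0, 10, size=(1024,))

model = keras.Sequential([
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10),
])

# steps_per_execution batches multiple training steps into a single
# on-device tf.function call, cutting host/device round trips.
model.compile(
    optimizer="adam",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    steps_per_execution=32,  # run 32 steps per call to the device
)

model.fit(x, y, batch_size=64, epochs=1, verbose=0)
```

The trade-off: larger values amortize launch overhead better, but metrics and callbacks only update once per execution block, so progress reporting becomes coarser.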

→ View original post on X — @fchollet

Comments
