AI Dynamics

Global AI News Aggregator

Distributed MLP Training Across Multiple Machines: Architecture

One possible exercise: You have a one-hidden-layer MLP that is so large its weights only fit when split across 4 machines. Describe how you'd do the forward and backward passes. What would you do if each machine had a small probability of breaking down?

→ View original post on X — @nandodf
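One standard answer is tensor parallelism: shard the hidden layer across the 4 machines, so machine i holds a column slice of the first weight matrix and the matching row slice of the second. Each machine then runs its slice of the forward and backward pass locally, and only two collective sums (all-reduces) are needed: one to combine the partial outputs, and one to combine the partial input gradients. The NumPy sketch below simulates this on one process; the shapes, the 4-way split, and the plain Python sums standing in for all-reduces are all illustrative assumptions, not anything from the original post.

```python
import numpy as np

rng = np.random.default_rng(0)
N_MACHINES = 4
D_IN, D_HID, D_OUT, B = 8, 16, 4, 5  # hidden dim split 4 ways -> 4 units per machine

# Logical full weights; each simulated "machine" holds one shard.
W1 = rng.normal(size=(D_IN, D_HID))   # machine i holds columns i*4:(i+1)*4
W2 = rng.normal(size=(D_HID, D_OUT))  # machine i holds rows    i*4:(i+1)*4
W1_shards = np.split(W1, N_MACHINES, axis=1)
W2_shards = np.split(W2, N_MACHINES, axis=0)

def forward_backward(x, dy):
    """One sharded forward-backward pass; sum(...) emulates an all-reduce."""
    # Forward: each machine computes its slice of the hidden layer locally.
    z = [x @ w for w in W1_shards]                    # local pre-activations
    h = [np.maximum(z_i, 0.0) for z_i in z]           # local ReLU
    y = sum(h_i @ w for h_i, w in zip(h, W2_shards))  # all-reduce #1: partial outputs
    # Backward: weight gradients are fully local to each machine.
    dW2 = [h_i.T @ dy for h_i in h]
    dz = [(dy @ w.T) * (z_i > 0) for w, z_i in zip(W2_shards, z)]
    dW1 = [x.T @ dz_i for dz_i in dz]
    dx = sum(dz_i @ w.T for dz_i, w in zip(dz, W1_shards))  # all-reduce #2
    return y, dW1, dW2, dx

x = rng.normal(size=(B, D_IN))
dy = rng.normal(size=(B, D_OUT))
y, dW1, dW2, dx = forward_backward(x, dy)

# Sanity check against the unsharded model.
h_full = np.maximum(x @ W1, 0.0)
assert np.allclose(y, h_full @ W2)
assert np.allclose(np.concatenate(dW1, axis=1),
                   x.T @ ((dy @ W2.T) * (x @ W1 > 0)))
```

For the failure part of the question, common answers include periodically checkpointing each shard to shared storage and restarting failed machines from the last checkpoint, or replicating each shard on a second machine at the cost of extra memory; either way, the all-reduce steps are the natural synchronization points at which a failure is detected and recovery begins.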
