AI Dynamics

Global AI News Aggregator

High-speed tensor core inference with massive memory bandwidth

So, you have a 200 TB/s fire hose of recycling information, which you would have to “pump up” over many cycles from much slower conventional memory, but it could then feed straight into a field of tensor core registers for real-time model inference. Fun to think about!

→ View original post on X — @id_aa_carmack
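To get a feel for what a 200 TB/s feed implies, here is a hedged back-of-the-envelope sketch: if inference is bandwidth-bound, token throughput is roughly bandwidth divided by the bytes streamed per token. The model size, precision, and the assumption of one full weight pass per token are all illustrative assumptions, not figures from the post.

```python
# Back-of-envelope: bandwidth-bound inference throughput.
# All concrete numbers below are hypothetical assumptions for illustration.

BANDWIDTH_BYTES_PER_S = 200e12   # 200 TB/s, the figure from the post
PARAMS = 70e9                    # assumed model size: 70B parameters
BYTES_PER_PARAM = 2              # assumed fp16 weights

# Assume one full pass over the weights per generated token
# (typical for bandwidth-bound autoregressive decoding).
bytes_per_token = PARAMS * BYTES_PER_PARAM

tokens_per_second = BANDWIDTH_BYTES_PER_S / bytes_per_token
print(f"~{tokens_per_second:,.0f} tokens/s (single stream, bandwidth-bound)")
```

Under these assumptions the fire hose sustains on the order of a thousand tokens per second for a single decode stream, which is why feeding tensor cores straight from such a memory pool is an appealing thought experiment.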
