Covered some of this re: Nvidia's (PyTorch) CUDA lock-in, which is mostly for what I'd call "greedy models": "low"-end hardware will be pretty good for a lot of inference (T4, A10, CPUs). Part of what makes Nvidia's backing for PaxML interesting as well. https://supervised.news/p/greedy-models-and-nvidias-open-source…
Nvidia CUDA Lock-In and Greedy Models Hardware Strategy