BFloat16 Support in Ampere GPUs and M1 Chips

BFloat16 is a 16-bit floating-point format with native NVIDIA hardware support starting from the Ampere architecture (A100 cards and newer). Apple M1 chips also expose bfloat16 in PyTorch through the MPS backend, but that implementation appears to be something quite different from Ampere's native hardware support.
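As a rough illustration of the format itself (plain Python, no PyTorch or special hardware required): a bfloat16 value is simply the top 16 bits of a float32, keeping the same sign bit and 8 exponent bits but only 7 of the 23 mantissa bits. The helper names below are my own, and truncation is used for brevity where real hardware typically rounds:

```python
import struct

def float_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to its bfloat16 bit pattern (the top 16 bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float(bits: int) -> float:
    """Expand a bfloat16 bit pattern back into a float32 value."""
    (x,) = struct.unpack("<f", struct.pack("<I", bits << 16))
    return x

# 1.0 is exactly representable; pi loses mantissa precision.
print(bfloat16_bits_to_float(float_to_bfloat16_bits(1.0)))      # 1.0
print(bfloat16_bits_to_float(float_to_bfloat16_bits(3.14159)))  # 3.140625
```

Because bfloat16 keeps float32's full 8-bit exponent, it covers the same dynamic range as float32 (unlike IEEE float16, which has only 5 exponent bits), which is why it is popular for deep-learning training despite the coarse precision.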