Ah sorry, I meant M1/M2 chips (not specifically M1/M2 CPUs). As far as I know, the 4-bit NormalFloat format used in QLoRA is currently only supported on Nvidia GPUs (https://github.com/TimDettmers/bitsandbytes/issues/485). Maybe the repo you mentioned uses a different type of quantized training.
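For context on what makes NF4 hardware-specific: it is a fixed 16-entry codebook, and the absmax-normalize-then-round step is implemented in bitsandbytes' CUDA kernels, which is why it doesn't run on Apple silicon. A minimal pure-Python sketch of that mapping (the codebook constants are the NF4 code values from the QLoRA paper / bitsandbytes source, reproduced here from memory, so treat them as approximate):

```python
# Sketch of 4-bit NormalFloat (NF4) quantization: each weight in a block
# is absmax-normalized into [-1, 1] and snapped to the nearest entry of a
# fixed 16-value codebook. In bitsandbytes this runs in CUDA kernels.
NF4_CODE = [
    -1.0, -0.6961928009986877, -0.5250730514526367, -0.39491748809814453,
    -0.28444138169288635, -0.18477343022823334, -0.09105003625154495, 0.0,
    0.07958029955625534, 0.16093020141124725, 0.24611230194568634,
    0.33791524171829224, 0.44070982933044434, 0.5626170039176941,
    0.7229568362236023, 1.0,
]

def nf4_quantize(block):
    """Return (4-bit indices, absmax scale) for one block of weights."""
    absmax = max(abs(x) for x in block) or 1.0
    indices = []
    for x in block:
        normalized = x / absmax  # now in [-1, 1]
        # pick the codebook entry closest to the normalized value
        indices.append(min(range(16), key=lambda i: abs(NF4_CODE[i] - normalized)))
    return indices, absmax

def nf4_dequantize(indices, absmax):
    """Invert the mapping: look up each code value and rescale."""
    return [NF4_CODE[i] * absmax for i in indices]
```

This is only an illustration of the data format; the real kernels also handle block sizes, double quantization of the absmax scales, and packed 4-bit storage.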