Foundation Model Training: When Fine-tuning Isn’t the Right Choice
I guess one case where you wouldn't fine-tune is when you're training your own foundation model and can't easily fine-tune the next SOTA base model when it arrives.