AI Dynamics

Global AI News Aggregator

Reward Model Training vs User Feedback: Preferences and Finetuning

Good question. In the original finetuning, they train a reward model on relative preferences (rankings among multiple responses). User feedback, by contrast, is only a thumbs up or down, which you could probably use for supervised finetuning.

→ View original post on X — @rasbt
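The distinction above can be made concrete with the two loss functions involved. A minimal sketch, assuming a Bradley-Terry-style pairwise loss for the reward model (the standard RLHF formulation) and binary cross-entropy for thumbs-up/down labels; the function names and scalar inputs here are illustrative, not from the original post:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pairwise_reward_loss(r_chosen, r_rejected):
    # Reward-model training on a ranked pair: only the *difference*
    # in scores matters: loss = -log sigmoid(r_chosen - r_rejected)
    return -math.log(sigmoid(r_chosen - r_rejected))

def thumbs_feedback_loss(r, label):
    # Thumbs up/down is a per-response binary label (1 = up, 0 = down),
    # so it fits binary cross-entropy on a single score instead of a ranking.
    p = sigmoid(r)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

# The pairwise loss shrinks as the chosen response is scored above the rejected one:
low = pairwise_reward_loss(2.0, 0.0)
high = pairwise_reward_loss(0.0, 2.0)
```

The key difference: rankings compare responses to each other, while thumbs feedback labels each response in isolation, which is why the latter maps more naturally onto supervised finetuning than onto reward-model training.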
