AI Dynamics

Global AI News Aggregator

MIT Improves Reasoning Model Confidence Calibration Through RL Training

How do top reasoning models become overconfident? MIT found that RL training rewards correct answers without accounting for how sure the model is of them. By additionally training models to estimate their confidence in each answer, the team improved uncertainty estimates without hurting accuracy:

→ View original post on X — @mit_csail
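The idea above can be sketched as a calibration-aware reward. This is a hypothetical illustration, not MIT's actual training objective: it pairs the usual correctness reward with a Brier-score penalty on the model's self-reported confidence, so honest uncertainty scores better than blanket overconfidence. The function name and weighting are assumptions for illustration.

```python
# Hypothetical sketch (not MIT's published reward): combine a correctness
# bonus with a proper-scoring-rule penalty on the model's stated confidence.

def calibrated_reward(correct: bool, confidence: float) -> float:
    """Reward = correctness bonus minus a Brier-style calibration penalty.

    confidence: the model's self-reported probability (0..1) that its
    answer is correct. The Brier term is minimized when stated confidence
    matches the actual outcome, so the model earns more by reporting
    honest uncertainty than by always claiming to be sure.
    """
    outcome = 1.0 if correct else 0.0
    accuracy_reward = outcome                    # plain RL term: right answer only
    brier_penalty = (confidence - outcome) ** 2  # proper scoring rule: 0 is best
    return accuracy_reward - brier_penalty

# A wrong answer delivered with high confidence is penalized more heavily
# than one flagged as uncertain:
assert calibrated_reward(False, 0.9) < calibrated_reward(False, 0.2)
# A correct, confident answer still scores highest:
assert calibrated_reward(True, 0.95) > calibrated_reward(True, 0.5)
```

Because the Brier score is a proper scoring rule, the model's best strategy is to report its true probability of being right, which is why a penalty of this shape can improve calibration without pushing accuracy down.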
