AI Dynamics

Global AI News Aggregator

LLMs Overfitting to RAG Context: A Systemic Training Bias

(I cycle through all LLMs over time, and all of them seem to do this, so it's not any particular implementation but something deeper. For example, maybe during training, most of the information in the context window is relevant to the task, so LLMs develop a bias to use whatever is given; at test time they then overfit to anything that happens to RAG its way into the context via a memory feature (?))

→ View original post on X — @karpathy, 2026-03-25 16:22 UTC
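The failure mode described above can be made concrete with a toy sketch. The snippet below (illustrative only; the memory store, the bag-of-words cosine retriever, and all strings are invented for this example, not any real product's memory feature) shows how a naive top-k retriever always injects *something* into the context window, even when no stored memory is relevant to the query. A model biased to use whatever is in context may then overweight that irrelevant text:

```python
from collections import Counter
import math

def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two strings."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical stored "memories" from earlier sessions.
memories = [
    "User prefers metric units in recipes",
    "User asked about training bias in LLMs last week",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # A naive retriever ALWAYS returns top-k, even when every
    # similarity score is zero -- nothing relevant exists, yet
    # something still "RAGs its way" into the prompt.
    return sorted(memories, key=lambda m: cosine(query, m), reverse=True)[:k]

def build_prompt(query: str) -> str:
    ctx = "\n".join(retrieve(query))
    return f"Context:\n{ctx}\n\nQuestion: {query}"

query = "What is the capital of France?"
prompt = build_prompt(query)
# The injected memory shares no tokens with the query (similarity 0.0),
# but it still occupies the context window, where a context-biased
# model may treat it as task-relevant.
```

A real fix would be a relevance threshold in `retrieve` (return nothing when the best score is too low), but as the post suggests, the deeper issue may be on the model side: training data rarely teaches the model to ignore in-context text.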
