“SKILL0: In-Context Agentic Reinforcement Learning for Skill Internalization” Most agent systems use skills like cheat sheets: they retrieve them at runtime, paste them into the prompt, and hope the model follows them. This paper asks: why not train the model with those skills in context, then gradually remove them until it can do the job from memory? The agent starts training with skill guidance, but the helpful skills are withdrawn over time, so instead of depending on instructions forever, it absorbs them into its own parameters. This turns skills from something the model reads into something the model actually knows, yielding a more efficient agent with far less context overhead and still better performance. Empirically, SKILL0 beats strong RL baselines on ALFWorld and Search-QA while using under 0.5k tokens per step.
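The gradual-removal idea can be pictured as a curriculum over prompts. Below is a minimal sketch, not the paper's actual implementation: the function name, the linear decay schedule, and the prompt format are all assumptions made for illustration. The skill text is included with a probability that anneals from 1 to 0 over training, so the policy is pushed to reproduce the skill's behavior without reading it.

```python
import random

def skill_annealed_prompt(task, skill_text, step, total_steps, rng=random):
    """Hypothetical sketch of a skill-annealing curriculum (not from the paper).

    Early in training the skill text is always in context; its inclusion
    probability decays linearly to zero, forcing the policy to internalize
    the skill into its parameters rather than rely on the prompt."""
    keep_prob = max(0.0, 1.0 - step / total_steps)  # 1.0 at step 0 -> 0.0 at the end
    if rng.random() < keep_prob:
        return f"{skill_text}\n\nTask: {task}"
    return f"Task: {task}"
```

Under this sketch, early rollouts see the full cheat sheet while late rollouts see only the bare task, which is also where the context savings (under 0.5k tokens per step) would come from at deployment.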
→ View original post on X — @askalphaxiv, 2026-04-07 07:37 UTC
