I quite like the idea of using games to evaluate LLMs against each other, instead of fixed evals. Playing against another intelligent entity self-balances and adapts the difficulty, so each eval (/environment) is leveraged a lot more. There are some early attempts around. Exciting area.
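One simple way the self-balancing could work in practice is to score head-to-head game outcomes with an Elo-style rating update, so each model's rating adapts to its opponents over time. A minimal sketch (the function name, K-factor, and model names are illustrative, not from any particular framework):

```python
# Hypothetical sketch: rating two LLMs from head-to-head game results
# with Elo updates. Parameters (k=32, base rating 1500) are the common
# chess defaults, chosen here purely for illustration.

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Return updated (r_a, r_b) after one game.

    score_a is the result from player A's perspective:
    1.0 = win, 0.5 = draw, 0.0 = loss.
    """
    # Expected score for A given the current rating gap.
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    # Shift each rating toward the observed result; the updates are
    # symmetric, so total rating is conserved.
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

ratings = {"model_a": 1500.0, "model_b": 1500.0}
# Pretend outcomes of four games, from model_a's perspective.
for outcome in (1.0, 1.0, 0.5, 0.0):
    ratings["model_a"], ratings["model_b"] = elo_update(
        ratings["model_a"], ratings["model_b"], outcome
    )
```

Because the expected score depends on the rating gap, a strong model gains little from beating a weak one and loses a lot from an upset, which is exactly the adaptive-difficulty property that makes game-based evals attractive.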
Games as Dynamic LLM Evaluation: Self-Balancing Difficulty Assessment