I would not use the term "preference" to describe what an agent *says* is better, only what an agent *does*. The link between the two is fragile enough with humans (hence "revealed preference"), but with LLMs any relationship between stated and enacted preferences must be established from scratch.