Autoregressive Sampling: Language Models Generate Hallucinations by Design

When we autoregressively sample from a language model, we are, by definition, seeing its hallucination. Users may think otherwise only when the LLM’s “dream” just so happens to be an output that is acceptable to the user.
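To make the claim concrete, here is a minimal sketch of autoregressive sampling. The model, vocabulary size, and helper names below are illustrative stand-ins, not any particular library's API: `next_token_logits` plays the role of a real LM forward pass. The point is structural: every token is drawn from the model's own next-token distribution, conditioned on what it has already generated, with no separate "grounded" mode the sampler could switch to.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE = 50  # toy vocabulary; a real LLM has tens of thousands of tokens


def next_token_logits(context: list[int]) -> np.ndarray:
    """Stand-in for a real LM forward pass: returns logits over the
    vocabulary given the context. Here it is just a deterministic
    function of the context, purely for illustration."""
    seed = sum(context) % (2**32)
    return np.random.default_rng(seed).normal(size=VOCAB_SIZE)


def sample_autoregressively(prompt: list[int], n_new: int,
                            temperature: float = 1.0) -> list[int]:
    """Generate tokens one at a time, each drawn from the model's
    next-token distribution conditioned on everything sampled so far."""
    tokens = list(prompt)
    for _ in range(n_new):
        logits = next_token_logits(tokens) / temperature
        # Softmax over logits (shifted by the max for numerical stability)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # Every continuation is a draw from the model's internal
        # distribution; "acceptable" outputs are simply draws the
        # user happens to endorse.
        tokens.append(int(rng.choice(VOCAB_SIZE, p=probs)))
    return tokens


print(sample_autoregressively([1, 2, 3], n_new=10))
```

Nothing in this loop distinguishes a factual continuation from a confabulated one; both are produced by the same sampling step, which is the sense in which all outputs are generated the same way.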