What LLMs say is sometimes true, sometimes not. They can’t tell the difference: they don’t know how to do validity checks (e.g. cross-referencing Wikipedia or their own training corpus). That’s what makes it BS. First said it in @techreview in 2020; still true: https://technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/
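To make the idea of a "validity check" concrete, here is a minimal sketch of what cross-referencing a generated claim against a reference corpus might look like. The fact store below is a toy stand-in for Wikipedia or a training corpus (its entries and the matching heuristic are illustrative assumptions, not any real fact-checking API):

```python
# Toy illustration of the "validity check" that LLMs do not perform:
# cross-referencing a generated claim against a reference corpus.
# FACT_STORE is a hypothetical stand-in for Wikipedia / a training corpus.
FACT_STORE = {
    "paris": "Paris is the capital of France.",
    "gpt-3": "GPT-3 is a language model released by OpenAI in 2020.",
}

def validity_check(claim: str) -> str:
    """Return 'supported', 'contradicted', or 'unverifiable' for a claim."""
    claim_words = set(claim.lower().rstrip(".").split())
    for entry in FACT_STORE.values():
        entry_words = set(entry.lower().rstrip(".").split())
        if len(claim_words & entry_words) >= 2:  # crude topical match
            # naive entailment test: is every word of the claim in the entry?
            if claim_words <= entry_words:
                return "supported"
            return "contradicted"
    return "unverifiable"

print(validity_check("Paris is the capital of France"))   # supported
print(validity_check("Paris is the capital of Spain"))    # contradicted
print(validity_check("Quokkas live on Rottnest Island"))  # unverifiable
```

The word-overlap heuristic is deliberately crude; the point is only that a check of this kind sits outside what next-token prediction does on its own.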
LLMs Cannot Verify Truth: The Persistent Problem