AI Dynamics

Global AI News Aggregator

LLMs Cannot Verify Truth: The Persistent Problem

What LLMs say is sometimes true, sometimes not. They can't tell the difference: they don't know how to do validity checks (e.g., cross-referencing Wikipedia or their own training corpus). That's what makes it BS. First said it in @techreview in 2020; still true: https://technologyreview.com/2020/08/22/1007539/gpt3-openai-language-generator-artificial-intelligence-ai-opinion/

→ View original post on X by @garymarcus
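Marcus's point is that verification is a separate capability from generation. A toy sketch below illustrates what even the crudest "validity check" would involve: comparing a generated claim against a reference corpus. Everything here (the function names, the scoring rule, the tiny corpus) is a hypothetical illustration for this post, not a real fact-checking system, and lexical overlap is far too weak for genuine verification.

```python
# Toy sketch of a "validity check" of the kind Marcus says LLMs lack:
# cross-referencing a generated claim against reference text.
# All names and data here are hypothetical illustrations.

def support_score(claim: str, reference: str) -> float:
    """Fraction of the claim's content words that appear in the reference."""
    stop = {"the", "a", "an", "is", "was", "of", "in", "on", "to"}
    claim_words = {w for w in claim.lower().split() if w not in stop}
    ref_words = set(reference.lower().split())
    if not claim_words:
        return 0.0
    return len(claim_words & ref_words) / len(claim_words)

def cross_reference(claim: str, corpus: list[str], threshold: float = 0.75) -> bool:
    """Return True if any reference snippet lends enough lexical support."""
    return any(support_score(claim, ref) >= threshold for ref in corpus)

corpus = [
    "gpt-3 was released by openai in 2020",
    "the eiffel tower is located in paris france",
]

print(cross_reference("GPT-3 was released in 2020", corpus))  # True: supported
print(cross_reference("GPT-3 was released in 1995", corpus))  # False: the year is unsupported
```

The gap Marcus highlights is that an LLM's decoder optimizes fluency, not any such check: nothing in next-token prediction runs even this crude comparison before emitting a claim.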
