The interview covers AI's tendency to output false information, commonly referred to as "hallucinations." She highlights solutions such as reinforcement learning from human feedback and systems that ask for clarification when uncertain.
AI Hallucinations: Solutions Through Human Feedback and Uncertainty