AI Dynamics

Global AI News Aggregator

Truthseeking in AGI Development: Evaluating Expected Outcome Shifts

Or do you mean maximum truthseeking in the people thinking about and building AGI? If so, I'd ask you whether you very carefully and neutrally evaluated exactly how much expected outcome shift could be produced that way – which is what truthseeking looks like, in humans.

→ View original post on X — @esyudkowsky
