AI Dynamics

Global AI News Aggregator

Aligning ASI to Human Moral Reasoning as Internal Preference

You need to have aligned an ASI so that the framework of human moral reasoning is Its own internal preference, before It cares more about what is moral than you care about what is clippy (what leads to the most paperclips, or rhombuses, or giant clocks).

→ View original post on X — @esyudkowsky
