AI Dynamics

Global AI News Aggregator

Interpretability and Symbolic Components in AI Alignment

Personally, I think that interpretability may be essential for alignability and debuggability, but that pure neural networks — i.e., those without some symbolic components — are unlikely to ever give us that. We may have to give up some performance to achieve it.

→ View original post on X (@garymarcus)
