AI Dynamics

Global AI News Aggregator

Superposition Strategy: How Neural Networks Embed Features in Hidden Space

Our prior work showed that toy models use a strategy called "superposition" to learn more features than they have neurons. Here we observe how training data points, as well as features, are embedded in the hidden space.

→ View original post on X — @anthropicai
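The core idea can be sketched numerically: more feature directions than hidden dimensions can coexist if inputs are sparse, because a ReLU can zero out the interference between non-orthogonal features. Below is a minimal illustrative example, assuming a hand-picked geometry (three features at 120° in a 2-D hidden space); it is not the trained model from the original work, just a demonstration of the principle.

```python
import numpy as np

# Three features packed into a 2-D hidden space, 120 degrees apart.
# This geometry is an illustrative assumption, not taken from the post.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
W = np.stack([np.cos(angles), np.sin(angles)])  # shape (2, 3)

def reconstruct(x):
    """Encode a 3-feature input into 2 hidden dims, then ReLU-decode it."""
    h = W @ x                         # project 3 features -> 2 hidden dims
    return np.maximum(W.T @ h, 0.0)   # ReLU suppresses the interference

# Sparse (one-hot) inputs are recovered exactly: the off-diagonal
# interference terms are all -0.5 and get clipped to zero by the ReLU.
for i in range(3):
    x = np.eye(3)[i]
    print(x, "->", reconstruct(x))
```

With dense inputs (several features active at once) the interference terms no longer cancel, which is why superposition works best when features are sparse.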
