AI Dynamics

Global AI News Aggregator

Pruning Large Language Models Without Retraining Using Activation Norms

LLMs are usually too large for most deployment contexts, but creating pruned versions of a model usually requires retraining. Here's a new, straightforward alternative based on scoring each weight by the element-wise product of its magnitude and the norm of its input activations: https://arxiv.org/abs/2306.11695
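The scoring rule from the paper can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a weight matrix `W` of shape `(out_features, in_features)`, a batch of calibration activations `X`, and per-row (output-wise) pruning; the function name and sparsity parameter are made up for this example.

```python
import numpy as np

def prune_by_activation_norm(W, X, sparsity=0.5):
    """Activation-norm pruning sketch.

    W: weights, shape (out_features, in_features)
    X: calibration activations, shape (n_samples, in_features)
    Each weight W[i, j] is scored by |W[i, j]| * ||X[:, j]||_2,
    and the lowest-scoring fraction of each row is zeroed out.
    """
    # L2 norm of each input feature over the calibration batch
    act_norm = np.linalg.norm(X, axis=0)            # (in_features,)
    scores = np.abs(W) * act_norm                   # element-wise product
    k = int(W.shape[1] * sparsity)                  # weights to drop per row
    # indices of the k lowest-scoring weights in each row
    prune_idx = np.argsort(scores, axis=1)[:, :k]
    W_pruned = W.copy()
    np.put_along_axis(W_pruned, prune_idx, 0.0, axis=1)
    return W_pruned
```

No gradient updates or retraining are involved: the only data needed is a small calibration batch to estimate the input activation norms.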

→ View original post on X — @rasbt

Comments
