AI Dynamics

Global AI News Aggregator

Why Neural Networks Resist Data Poisoning Better Than Test-Time Attacks

It's well known that neural networks are highly vulnerable to adversarial interventions, e.g., imperceptible test-time attacks. But indiscriminate data poisoning, in which the attacker modifies a small fraction of the training data to reduce test accuracy, appears to be much harder! Why? 2/n

→ View original post on X: @thegautamkamath
