Why Neural Networks Resist Data Poisoning Attacks Better

It's well known that neural networks are highly vulnerable to adversarial interventions, e.g., imperceptible test-time attacks. But indiscriminate data poisoning, in which the attacker modifies a small fraction of the training data to reduce test accuracy, appears to be much harder to pull off. Why?
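To make the setting concrete, here is a minimal sketch of the indiscriminate poisoning threat model (not from the post itself): flip the labels of a small fraction of the training set, retrain, and measure accuracy on clean test data. The synthetic dataset, the small MLP, and the 3% poisoning budget are illustrative assumptions; label flipping stands in for the more general "modify a small fraction of the training data."

```python
# Hypothetical sketch of indiscriminate label-flip poisoning; dataset, model,
# and 3% budget are illustrative choices, not the attack discussed in the post.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

def accuracy_after_poisoning(poison_frac):
    """Flip the labels of `poison_frac` of the training set, retrain, and score on clean test data."""
    y_poisoned = y_tr.copy()
    n_poison = int(poison_frac * len(y_tr))
    idx = rng.choice(len(y_tr), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]      # label flipping on the poisoned subset
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(X_tr, y_poisoned)                  # train on the (partially) poisoned data
    return clf.score(X_te, y_te)               # evaluate on the clean test set

for frac in (0.0, 0.03):                       # clean baseline vs. a 3% poisoning budget
    print(f"poison fraction {frac:.0%}: test accuracy {accuracy_after_poisoning(frac):.3f}")
```

Under a small budget like this, the drop in clean test accuracy is typically modest, which is exactly the puzzle the post raises.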