AI Dynamics

Global AI News Aggregator

Camouflage Data Poisoning Attacks in ML Unlearning Systems

But when you add something new to an ML pipeline, there are new ways for adversaries to wreak havoc. We introduce a new type of data poisoning attack exploiting the dynamic nature of unlearn requests: a "camouflage" attack, which lies dormant until triggered by the adversary. 3/n
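The mechanism described in the thread can be sketched with a toy model: poison samples shift the decision rule against a target point, camouflage samples cancel that shift so the trained model looks benign, and an unlearn request that removes only the camouflage set activates the dormant poison. The example below is a minimal illustration using a 1-D nearest-centroid classifier; all data values, labels, and function names are illustrative assumptions, not the construction from the referenced work.

```python
# Toy sketch of a "camouflage" poisoning attack against exact unlearning,
# using a 1-D nearest-centroid classifier. All numbers are illustrative
# assumptions, not the construction from the thread's paper.

def centroid(points):
    return sum(points) / len(points)

def predict(x, class_points):
    # Nearest-centroid rule: assign x to the class with the closest mean.
    return min(class_points, key=lambda c: abs(x - centroid(class_points[c])))

# Clean data: class "A" near 0, class "B" near 4.
clean = {"A": [0.0, -0.2, 0.1, 0.1], "B": [4.0, 3.9, 4.1, 4.0]}

target = 1.8  # true label "A"

# Poison: points mislabeled "B" placed near the target, dragging B's
# centroid toward it so the target would flip to "B".
poison = [0.5, 0.5, 0.5, 0.5]

# Camouflage: points labeled "B" placed far away, cancelling the poison's
# pull so the model looks benign while both sets are in the training data.
camouflage = [7.0, 7.0, 7.0, 7.0]

full = {"A": clean["A"], "B": clean["B"] + poison + camouflage}
print(predict(target, full))       # benign-looking: "A" (correct)

# The adversary files an unlearn request for the camouflage set; exact
# unlearning retrains without it, and the dormant poison activates.
unlearned = {"A": clean["A"], "B": clean["B"] + poison}
print(predict(target, unlearned))  # attack triggered: "B" (wrong)
```

The key point the sketch captures is that neither set alone looks suspicious: the full training set yields a correct prediction, and only the *removal* of data (the unlearn request) changes the model's behavior on the target.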

→ View original post on X — @thegautamkamath
