From: Machine learning security and privacy: a review of threats and countermeasures
Reference | Machine learning model/algorithm | Attack type | Exploited vulnerability | Attacker’s knowledge | Attacker’s goals | Attack severity and impact | Defined threat model | Targeted feature |
---|---|---|---|---|---|---|---|---|
F. A. Yerlikaya et al. [16], 2022 | SVM, SGD, logistic regression, random forest, Gaussian NB, K-NN | Random and distance-based label-flipping attacks | Poisoning the training dataset by flipping class labels using two effective strategies | White-box attack | Reduce the performance (accuracy) of the system | K-NN and random forest are largely unaffected by label-flipping attacks | No | Model accuracy |
M. Jagielski et al. [47], 2020 | Convolutional neural networks | Subpopulation attack | A poisoned cluster is injected as a subpopulation of the training dataset | Gray-box attack | Targeted misclassification | Subpopulation attacks are difficult to detect and mitigate, particularly in non-linear models | Yes | Test-time prediction |
A. Demontis et al. [58], 2019 | Linear SVM, logistic regression, ridge regression, SVM-RBF | Training-time poisoning attack | Gradient loss reduced with poisoned data points in a transfer setting | White-box and black-box attacks | Violate the model’s integrity and availability | Poisoning attacks are more effective on models with a large gradient space and high complexity | Yes | Model availability |
C. Zhu et al. [14], 2019 | Deep neural networks | Feature collision attack, convex polytope attack | Feature space manipulated with perturbed training samples | Gray-box attack | Overfit the target classifier on the poisoned dataset | Turning on dropout while crafting the poisoned data enhances the transferability of poisoning attacks in deep neural networks | Yes | Test-time misclassification |
M. Jagielski et al. [59], 2018 | Linear regression | Statistically-based generation of poisoned regression points with flipped labels | Legitimate and poisoned regression points are hard to distinguish under minimal gradient loss | Gray-box attack (depends on feature mean and covariance) | Misclassification of the system | Residual-based filtering mitigates poisoning attacks on linear regression | Yes | Model accuracy |
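To make the first row's attack class concrete, the sketch below simulates a random label-flipping poisoning attack against a simple classifier. The synthetic two-cluster data, the 1-NN classifier, and the 30% flip rate are illustrative assumptions, not the exact setup of Yerlikaya et al. [16]; the point is only to show how flipping training labels degrades test accuracy.

```python
import random

random.seed(0)

def make_cluster(cx, cy, label, n):
    """Sample n 2-D points around centre (cx, cy), all with the same label."""
    return [((random.gauss(cx, 0.5), random.gauss(cy, 0.5)), label)
            for _ in range(n)]

# Two well-separated classes: class 0 near (0, 0), class 1 near (5, 5).
train = make_cluster(0, 0, 0, 100) + make_cluster(5, 5, 1, 100)
test = make_cluster(0, 0, 0, 50) + make_cluster(5, 5, 1, 50)

def predict_1nn(train_set, x):
    """Return the label of the training point nearest to x (1-NN rule)."""
    _, label = min(
        train_set,
        key=lambda item: (item[0][0] - x[0]) ** 2 + (item[0][1] - x[1]) ** 2,
    )
    return label

def accuracy(train_set, test_set):
    return sum(predict_1nn(train_set, x) == y for x, y in test_set) / len(test_set)

# Random label-flipping attack: flip the labels of 30% of training points.
flip_rate = 0.3
flipped = set(random.sample(range(len(train)), int(flip_rate * len(train))))
poisoned = [(x, 1 - y) if i in flipped else (x, y)
            for i, (x, y) in enumerate(train)]

acc_clean = accuracy(train, test)
acc_poisoned = accuracy(poisoned, test)
print(f"clean accuracy:    {acc_clean:.2f}")
print(f"poisoned accuracy: {acc_poisoned:.2f}")
```

Note that a 1-NN classifier memorises its training set, so it is maximally sensitive to label flipping; the table's observation that K-NN (with larger k) and random forests resist this attack reflects the label-noise averaging those models perform.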