A. D. Joseph, B. Nelson, B. I. P. Rubinstein, J. Tygar, Adversarial machine learning (Cambridge University Press, 2018).
B. Biggio, F. Roli, Wild patterns: ten years after the rise of adversarial machine learning. Pattern Recogn. 84, 317–331 (2018).
N. Dalvi, P. Domingos, Mausam, S. Sanghai, D. Verma, in Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD). Adversarial classification (Seattle, 2004), pp. 99–108.
D. Lowd, C. Meek, in Second Conference on Email and Anti-Spam (CEAS). Good word attacks on statistical spam filters (Mountain View, USA, 2005).
B. Biggio, B. Nelson, P. Laskov, in 29th Int’l Conf. on Machine Learning, ed. by J. Langford, J. Pineau. Poisoning attacks against support vector machines (Omnipress, 2012), pp. 1807–1814.
B. Biggio, I. Corona, D. Maiorca, B. Nelson, N. Šrndić, P. Laskov, G. Giacinto, F. Roli, in Machine Learning and Knowledge Discovery in Databases (ECML PKDD), Part III, LNCS, vol. 8190, ed. by H. Blockeel, K. Kersting, S. Nijssen, F. Železný. Evasion attacks against machine learning at test time (Springer, Berlin Heidelberg, 2013), pp. 387–402.
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, R. Fergus, in International Conference on Learning Representations. Intriguing properties of neural networks (ICLR, Calgary, 2014).
I. J. Goodfellow, J. Shlens, C. Szegedy, in International Conference on Learning Representations. Explaining and harnessing adversarial examples (ICLR, San Diego, 2015).
A. Globerson, S. T. Roweis, in Proceedings of the 23rd International Conference on Machine Learning, ed. by W. W. Cohen, A. Moore. Nightmare at test time: robust learning by feature deletion, vol. 148 (ACM, 2006), pp. 353–360. https://doi.org/10.1145/1143844.1143889.
M. Brückner, C. Kanzow, T. Scheffer, Static prediction games for adversarial learning problems. J. Mach. Learn. Res. 13, 2617–2654 (2012).
S. Rota Bulò, B. Biggio, I. Pillai, M. Pelillo, F. Roli, Randomized prediction games for adversarial machine learning. IEEE Trans. Neural Netw. Learn. Syst. 28(11), 2466–2478 (2017).
M. Melis, A. Demontis, B. Biggio, G. Brown, G. Fumera, F. Roli, in ICCVW Vision in Practice on Autonomous Robots (ViPAR). Is deep learning safe for robot vision? Adversarial examples against the iCub humanoid (IEEE, 2017), pp. 751–759. https://doi.org/10.1109/iccvw.2017.94.
A. Bendale, T. E. Boult, in IEEE Conference on Computer Vision and Pattern Recognition. Towards open set deep networks, (2016), pp. 1563–1572. https://doi.org/10.1109/cvpr.2016.173.
F. Crecchi, D. Bacciu, B. Biggio, in ESANN ’19. Detecting adversarial examples through nonlinear dimensionality reduction. In press.
J. Lu, T. Issaranon, D. Forsyth, in The IEEE International Conference on Computer Vision (ICCV). SafetyNet: detecting and rejecting adversarial examples robustly, (2017). https://doi.org/10.1109/iccv.2017.56.
N. Papernot, P. D. McDaniel, Deep k-nearest neighbors: towards confident, interpretable and robust deep learning. CoRR abs/1803.04765 (2018).
A. Athalye, N. Carlini, D. A. Wagner, in ICML, vol. 80 of JMLR Workshop and Conference Proceedings. Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples (JMLR.org, 2018), pp. 274–283.
N. Papernot, P. McDaniel, X. Wu, S. Jha, A. Swami, in 2016 IEEE Symposium on Security and Privacy (SP). Distillation as a defense to adversarial perturbations against deep neural networks, (2016), pp. 582–597. https://doi.org/10.1109/sp.2016.41.
D. Meng, H. Chen, in 24th ACM Conf. Computer and Comm. Sec. (CCS). MagNet: a two-pronged defense against adversarial examples, (2017). https://doi.org/10.1145/3133956.3134057.
N. Carlini, D. A. Wagner, in 10th ACM Workshop on Artificial Intelligence and Security, ed. by B. M. Thuraisingham, B. Biggio, D. M. Freeman, B. Miller, A. Sinha. Adversarial examples are not easily detected: bypassing ten detection methods, AISec ’17 (ACM, New York, 2017), pp. 3–14.
N. Carlini, D. A. Wagner, in IEEE Symposium on Security and Privacy. Towards evaluating the robustness of neural networks (IEEE Computer Society, 2017), pp. 39–57. https://doi.org/10.1109/sp.2017.49.
P. Russu, A. Demontis, B. Biggio, G. Fumera, F. Roli, in 9th ACM Workshop on Artificial Intelligence and Security. Secure kernel machines against evasion attacks, AISec ’16 (ACM, New York, 2016), pp. 59–69.
N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, A. Swami, in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. Practical black-box attacks against machine learning, ASIA CCS ’17 (ACM, New York, 2017), pp. 506–519.
A. Demontis, M. Melis, M. Pintor, M. Jagielski, B. Biggio, A. Oprea, C. Nita-Rotaru, F. Roli, in 28th USENIX Security Symposium (USENIX Security 19). Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks (USENIX Association, 2019).
M. Melis, D. Maiorca, B. Biggio, G. Giacinto, F. Roli, in 26th European Signal Processing Conf. Explaining black-box Android malware detection, EUSIPCO (IEEE, Rome, 2018), pp. 524–528.
B. Biggio, G. Fumera, F. Roli, Security evaluation of pattern classifiers under attack. IEEE Trans. Knowl. Data Eng. 26, 984–996 (2014).
D. H. Wolpert, Stacked generalization. Neural Netw. 5, 241–259 (1992).
W. Scheirer, L. Jain, T. Boult, Probability models for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 36(11), 2317–2324 (2014).
J. Duchi, S. Shalev-Shwartz, Y. Singer, T. Chandra, in Proceedings of the 25th International Conference on Machine Learning, ICML ’08. Efficient projections onto the l1-ball for learning in high dimensions (ACM, New York, 2008), pp. 272–279.
M. Melis, A. Demontis, M. Pintor, A. Sotgiu, B. Biggio, secml: A Python library for secure and explainable machine learning. arXiv (2019).
S. Thulasidasan, T. Bhattacharya, J. Bilmes, G. Chennupati, J. Mohd-Yusof, Knows when it doesn’t know: deep abstaining classifiers, (OpenReview.net, 2019). https://openreview.net/forum?id=rJxF73R9tX.
Y. Geifman, R. El-Yaniv, in Proceedings of the 36th International Conference on Machine Learning (ICML 2019), vol. 97. SelectiveNet: a deep neural network with an integrated reject option (PMLR, Long Beach, 2019), pp. 2151–2159.
Y. Geifman, R. El-Yaniv, in Advances in Neural Information Processing Systems 30, ed. by I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett. Selective classification for deep neural networks (Curran Associates, Inc., 2017), pp. 4878–4887.
F. Carrara, R. Becarelli, R. Caldelli, F. Falchi, G. Amato, in The European Conference on Computer Vision (ECCV) Workshops. Adversarial examples detection in features distance spaces, (2018). https://doi.org/10.1007/978-3-030-11012-3_26.
T. Pang, C. Du, J. Zhu, in Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018. Max-Mahalanobis linear discriminant analysis networks (PMLR, 2018), pp. 4013–4022.
M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, B. Li, in IEEE Symposium on Security and Privacy, SP ’18. Manipulating machine learning: poisoning attacks and countermeasures for regression learning (IEEE CS, 2018), pp. 931–947. https://doi.org/10.1109/sp.2018.00057.
H. Xiao, B. Biggio, G. Brown, G. Fumera, C. Eckert, F. Roli, in JMLR W&CP - Proc. 32nd Int’l Conf. Mach. Learning (ICML), ed. by F. Bach, D. Blei. Is feature selection secure against training data poisoning? vol. 37 (PMLR, Lille, 2015), pp. 1689–1698.
S. Mei, X. Zhu, in 29th AAAI Conf. Artificial Intelligence (AAAI ’15). Using machine teaching to identify optimal training-set attacks on machine learners (Austin, 2015).
Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998).
A. Krizhevsky, Learning multiple layers of features from tiny images (University of Toronto, 2012).