E. Real, A. Aggarwal, Y. Huang, Q. V. Le, Regularized Evolution for Image Classifier Architecture Search. Proc. AAAI Conf. Artif. Intell. 33(01), 4780–4789 (2019). https://doi.org/10.1609/aaai.v33i01.33014780.
B. Zoph, V. Vasudevan, J. Shlens, Q. V. Le, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Learning transferable architectures for scalable image recognition, (2018), pp. 8697–8710.
J. Hu, L. Shen, G. Sun, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Squeeze-and-excitation networks, (2018), pp. 7132–7141.
C. -C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, E. Gonina, et al., in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). State-of-the-art speech recognition with sequence-to-sequence models (IEEE, Calgary, 2018), pp. 4774–4778.
J. Devlin, M. -W. Chang, K. Lee, K. Toutanova, in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), ed. by J. Burstein, C. Doran, and T. Solorio. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Association for Computational Linguistics, 2019), pp. 4171–4186. https://doi.org/10.18653/v1/n19-1423.
G. Hinton, L. Deng, D. Yu, G. Dahl, A. -R. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, B. Kingsbury, et al., Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Process. Mag. 29, 82–97 (2012).
W. Xu, Y. Qi, D. Evans, in 23rd Annual Network and Distributed System Security Symposium, NDSS 2016, San Diego, California, USA, February 21-24, 2016. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers (The Internet Society, 2016). http://wp.internetsociety.org/ndss/wp-content/uploads/sites/25/2017/09/automatically-evading-classifiers.pdf.
C. Nachenberg, J. Wilhelm, A. Wright, C. Faloutsos, in ACM KDD. Polonium: Tera-scale graph mining for malware detection (Washington, DC, 2010).
M. A. Rajab, L. Ballard, N. Lutz, P. Mavrommatis, N. Provos, in Network and Distributed Systems Security Symposium (NDSS). CAMP: Content-Agnostic Malware Protection (USA, 2013). http://cs.jhu.edu/~moheeb/aburajab-ndss-13.pdf.
J. H. Metzen, M. C. Kumar, T. Brox, V. Fischer, in 2017 IEEE International Conference on Computer Vision (ICCV). Universal adversarial perturbations against semantic image segmentation, (2017), pp. 2774–2783. https://doi.org/10.1109/ICCV.2017.300.
M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, X. Zhang, J. Zhao, K. Zieba, End to End Learning for Self-Driving Cars. CoRR abs/1604.07316 (2016). http://arxiv.org/abs/1604.07316.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al., Human-level control through deep reinforcement learning. Nature. 518(7540), 529 (2015).
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, R. Fergus, in International Conference on Learning Representations (ICLR). Intriguing properties of neural networks, (2014).
I. J. Goodfellow, J. Shlens, C. Szegedy, in International Conference on Learning Representations. Explaining and harnessing adversarial examples, (2015).
S. -M. Moosavi-Dezfooli, A. Fawzi, P. Frossard, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. DeepFool: a simple and accurate method to fool deep neural networks, (2016), pp. 2574–2582.
J. Su, D. V. Vargas, K. Sakurai, One pixel attack for fooling deep neural networks. IEEE Trans. Evol. Comput. 23, 828–841 (2019).
N. Carlini, D. Wagner, in 2017 IEEE Symposium on Security and Privacy (SP). Towards evaluating the robustness of neural networks (IEEE, San Jose, 2017), pp. 39–57.
Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, et al., Gradient-based learning applied to document recognition. Proc. IEEE. 86(11), 2278–2324 (1998).
X. Yuan, P. He, Q. Zhu, X. Li, Adversarial examples: attacks and defenses for deep learning. IEEE Trans. Neural Netw. Learn. Syst. 30, 2805–2824 (2019). https://doi.org/10.1109/TNNLS.2018.2886017.
R. Huang, B. Xu, D. Schuurmans, C. Szepesvári, Learning with a Strong Adversary. CoRR abs/1511.03034 (2015). http://arxiv.org/abs/1511.03034.
N. Papernot, P. McDaniel, X. Wu, S. Jha, A. Swami, in 2016 IEEE Symposium on Security and Privacy (SP). Distillation as a defense to adversarial perturbations against deep neural networks (IEEE, San Jose, 2016), pp. 582–597.
J. Bradshaw, A. G. G. Matthews, Z. Ghahramani, Adversarial examples, uncertainty, and transfer testing robustness in Gaussian process hybrid deep networks. arXiv preprint arXiv:1707.02476 (2017).
M. Abbasi, C. Gagné, Robustness to Adversarial Examples through an Ensemble of Specialists. CoRR abs/1702.06856 (2017). http://arxiv.org/abs/1702.06856.
R. Feinman, R. R. Curtin, S. Shintre, A. B. Gardner, Detecting Adversarial Samples from Artifacts. CoRR abs/1703.00410 (2017). http://arxiv.org/abs/1703.00410.
K. Grosse, P. Manoharan, N. Papernot, M. Backes, P. D. McDaniel, On the (Statistical) Detection of Adversarial Examples. CoRR abs/1702.06280 (2017). http://arxiv.org/abs/1702.06280.
D. Hendrycks, K. Gimpel, in 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. Early Methods for Detecting Adversarial Images (OpenReview.net, 2017). https://openreview.net/forum?id=B1dexpDug.
A. N. Bhagoji, D. Cullina, P. Mittal, Dimensionality Reduction as a Defense against Evasion Attacks on Machine Learning Classifiers. CoRR abs/1704.02654 (2017). http://arxiv.org/abs/1704.02654.
X. Li, F. Li, in Proceedings of the IEEE International Conference on Computer Vision. Adversarial examples detection in deep networks with convolutional filter statistics, (2017), pp. 5764–5772.
W. Xu, D. Evans, Y. Qi, in 25th Annual Network and Distributed System Security Symposium, NDSS 2018, San Diego, California, USA, February 18-21, 2018. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks (The Internet Society, 2018). http://wp.internetsociety.org/ndss/wp-content/uploads/sites/25/2018/02/ndss2018_03A-4_Xu_paper.pdf.
S. Gu, L. Rigazio, in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Workshop Track Proceedings, ed. by Y. Bengio, Y. LeCun. Towards Deep Neural Network Architectures Robust to Adversarial Examples, (2015). http://arxiv.org/abs/1412.5068.
G. Katz, C. Barrett, D. L. Dill, K. Julian, M. J. Kochenderfer, in International Conference on Computer Aided Verification, ed. by R. Majumdar, V. Kuncak. Reluplex: an efficient SMT solver for verifying deep neural networks (Springer, 2017), pp. 97–117.
D. Gopinath, G. Katz, C. S. Păsăreanu, C. Barrett, in Proceedings of the 16th International Symposium on Automated Technology for Verification and Analysis (ATVA ’18). Lecture Notes in Computer Science, vol. 11138, ed. by S. Lahiri, C. Wang. DeepSafe: A Data-driven Approach for Assessing Robustness of Neural Networks (Springer, Los Angeles, 2018), pp. 3–19. https://doi.org/10.1007/978-3-030-01090-4_1.
F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, P. McDaniel, in 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Ensemble Adversarial Training: Attacks and Defenses (OpenReview.net, 2018). https://openreview.net/forum?id=rkZvSe-RZ.
V. Zantedeschi, M. -I. Nicolae, A. Rawat, in Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, AISec@CCS 2017, Dallas, TX, USA, November 3, 2017, ed. by B. M. Thuraisingham, B. Biggio, D. M. Freeman, B. Miller, and A. Sinha. Efficient defenses against adversarial attacks (ACM, 2017), pp. 39–49. https://doi.org/10.1145/3128572.3140449.
Y. Song, T. Kim, S. Nowozin, S. Ermon, N. Kushman, in 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples (OpenReview.net, 2018). https://openreview.net/forum?id=rJUYGxbCW.
N. Carlini, D. Wagner, in Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. Adversarial examples are not easily detected: bypassing ten detection methods (ACM, Dallas, 2017), pp. 3–14.
N. Carlini, G. Katz, C. W. Barrett, D. L. Dill, Ground-Truth Adversarial Examples. CoRR abs/1709.10207 (2017). http://arxiv.org/abs/1709.10207.
P. L. Garvin, On Machine Translation: Selected Papers, vol. 128 (Walter de Gruyter GmbH & Co KG, Netherlands, 2019).
K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators. Neural Netw. 2(5), 359–366 (1989).
A. R. Barron, Approximation and estimation bounds for artificial neural networks. Mach. Learn. 14(1), 115–133 (1994).
Z. Lu, H. Pu, F. Wang, Z. Hu, L. Wang, in Advances in Neural Information Processing Systems. The expressive power of neural networks: a view from the width, (2017), pp. 6231–6239.
I. Goodfellow, Y. Bengio, A. Courville, Deep Learning (MIT press, Cambridge, MA, 2016). http://www.deeplearningbook.org.
J. Bergstra, F. Bastien, O. Breuleux, P. Lamblin, R. Pascanu, O. Delalleau, G. Desjardins, D. Warde-Farley, I. Goodfellow, A. Bergeron, et al., in NIPS 2011, BigLearning Workshop, Granada, Spain, vol. 3. Theano: deep learning on GPUs with Python (Citeseer, Granada, 2011), pp. 1–48.
A. Kurakin, I. Goodfellow, S. Bengio, in 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. Adversarial examples in the physical world (OpenReview.net, 2017). https://openreview.net/forum?id=HJGU3Rodl.
A. Madry, A. Makelov, L. Schmidt, D. Tsipras, A. Vladu, in 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Towards Deep Learning Models Resistant to Adversarial Attacks (OpenReview.net, 2018). https://openreview.net/forum?id=rJzIBfZAb.
N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, A. Swami, in 2016 IEEE European Symposium on Security and Privacy (EuroS&P). The limitations of deep learning in adversarial settings (IEEE, Saarbrücken, 2016), pp. 372–387.
S. -M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, P. Frossard, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Universal adversarial perturbations, (2017), pp. 1765–1773.
P. -Y. Chen, H. Zhang, Y. Sharma, J. Yi, C. -J. Hsieh, in Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models (ACM, Dallas, 2017), pp. 15–26.
N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, A. Swami, in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. Practical black-box attacks against machine learning, (2017), pp. 506–519.
Z. Zhao, D. Dua, S. Singh, in 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Generating Natural Adversarial Examples (OpenReview.net, 2018). https://openreview.net/forum?id=H1BLjgZCb.
H. Cao, C. Wang, L. Huang, X. Cheng, H. Fu, in 2020 International Conference on Control, Robotics and Intelligent System. Adversarial DGA domain examples generation and detection, (2020), pp. 202–206.
S. Chen, D. Shi, M. Sadiq, X. Cheng, Image denoising with generative adversarial networks and its application to cell image enhancement. IEEE Access. 8:, 82819–82831 (2020). https://doi.org/10.1109/ACCESS.2020.2988284.
A. Krizhevsky, Learning Multiple Layers of Features from Tiny Images, 32–33 (2009). https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf. Accessed 21 Jan 2021.
Y. LeCun, C. Cortes, MNIST handwritten digit database (2010). http://yann.lecun.com/exdb/mnist/. Accessed 14 Jan 2016.
M. -I. Nicolae, M. Sinn, M. N. Tran, B. Buesser, A. Rawat, M. Wistuba, V. Zantedeschi, N. Baracaldo, B. Chen, H. Ludwig, I. Molloy, B. Edwards, Adversarial Robustness Toolbox v0.10.0. CoRR abs/1807.01069 (2018). http://arxiv.org/abs/1807.01069.
C. R. Harris, K. J. Millman, S. J. van der Walt, R. Gommers, P. Virtanen, D. Cournapeau, E. Wieser, J. Taylor, S. Berg, N. J. Smith, R. Kern, M. Picus, S. Hoyer, M. H. van Kerkwijk, M. Brett, A. Haldane, J. F. del Río, M. Wiebe, P. Peterson, P. Gérard-Marchant, K. Sheppard, T. Reddy, W. Weckesser, H. Abbasi, C. Gohlke, T. E. Oliphant, Array programming with NumPy. Nature. 585(7825), 357–362 (2020). https://doi.org/10.1038/s41586-020-2649-2.
J. Rauber, W. Brendel, M. Bethge, Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models. CoRR abs/1707.04131 (2017). http://arxiv.org/abs/1707.04131.
G. W. Ding, L. Wang, X. Jin, advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch. CoRR abs/1902.07623 (2019). http://arxiv.org/abs/1902.07623.
N. Morgulis, A. Kreines, S. Mendelowitz, Y. Weisglass, Fooling a Real Car with Adversarial Traffic Signs. CoRR abs/1907.00374 (2019). http://arxiv.org/abs/1907.00374.