Fig. 5 From: Deep neural rejection against adversarial examples

Adversarial examples computed on the MNIST data to evade the DNN, NR, and DNR classifiers. The source image is shown on the left, followed by the (magnified) adversarial perturbation crafted with ε=1 against each classifier, and the resulting adversarial examples. We remind the reader that the attacks considered in this work are untargeted, i.e., they succeed when the attack sample is not correctly assigned to its true class.
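The untargeted success criterion described in the caption can be sketched as follows. This is a minimal illustration, not the paper's code: `predict` is a hypothetical classifier function, and the toy thresholding classifier exists only to make the example self-contained. The attack succeeds when the perturbation stays within the ℓ∞ budget ε and the classifier no longer outputs the true class.

```python
import numpy as np

def untargeted_success(x, x_adv, y_true, predict, eps):
    """Check untargeted evasion: the perturbation must respect the
    l-infinity budget eps, and the prediction must differ from y_true."""
    within_budget = np.max(np.abs(x_adv - x)) <= eps + 1e-9
    evaded = predict(x_adv) != y_true
    return bool(within_budget and evaded)

# Toy classifier (assumption, for illustration only):
# predicts class 1 if the mean pixel exceeds 0.5, else class 0.
toy_predict = lambda x: int(x.mean() > 0.5)

x = np.full(4, 0.4)       # clean sample, true class 0 under toy_predict
x_adv = x + 0.2           # perturbation within the budget eps = 0.25
print(untargeted_success(x, x_adv, y_true=0, predict=toy_predict, eps=0.25))
```

Note that for the rejection-based defenses (NR and DNR), a rejected sample also counts as "not assigned to its true class", which is why the attacks must push samples both away from the true class and out of the rejection region.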