Fig. 5

From: Deep neural rejection against adversarial examples

Adversarial examples computed on the MNIST data to evade the DNN, NR, and DNR classifiers. The source image is shown on the left, followed by the (magnified) adversarial perturbation crafted with ε=1 against each classifier and the resulting adversarial examples. Recall that the attacks considered in this work are untargeted, i.e., they succeed as soon as the attack sample is no longer assigned to its true class.
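The paper's own attack implementation is not shown on this page, so the following is a minimal sketch of one common way to craft ε-bounded untargeted adversarial examples like those in the figure: a projected-gradient (PGD-style) attack that maximizes the loss on the true class while keeping the perturbation within an ℓ2 ball of radius ε. The choice of ℓ2 norm, the PGD optimizer, the function name untargeted_pgd, and all hyperparameters are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: an untargeted, epsilon-bounded PGD attack.
# The L2 constraint, step size, and iteration count are assumptions;
# only the epsilon bound and the untargeted success criterion
# (prediction != true class) come from the figure caption.
import torch
import torch.nn as nn
import torch.nn.functional as F

def untargeted_pgd(model, x, y_true, eps=1.0, steps=50, step_size=0.1):
    """Maximize the loss on the true class under ||delta||_2 <= eps.

    The attack is untargeted: it succeeds as soon as the perturbed
    sample is no longer assigned to its true class.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y_true)
        loss.backward()
        with torch.no_grad():
            # step along the L2-normalized gradient to increase the loss
            g_norm = delta.grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta += step_size * delta.grad / g_norm
            # project the perturbation back onto the L2 ball of radius eps
            d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
            delta *= (eps / d_norm).clamp(max=1.0)
            # keep adversarial pixels inside the valid [0, 1] image range
            delta.copy_(torch.clamp(x + delta, 0.0, 1.0) - x)
        delta.grad.zero_()
    return (x + delta).detach()

# Example: attack a toy MNIST-shaped classifier (untrained, for illustration)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # a fake 28x28 grayscale "digit"
y = torch.tensor([3])          # its (pretend) true label
x_adv = untargeted_pgd(model, x, y, eps=1.0)
print("attack succeeded:", model(x_adv).argmax(dim=1).item() != y.item())
```

The final check mirrors the caption's success criterion: the attack counts as successful whenever the perturbed sample is no longer assigned to its true class, regardless of which wrong class it lands in.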
