Fig. 3 | EURASIP Journal on Information Security
From: Gaussian class-conditional simplex loss for accurate, adversarially robust deep classifier training
Visual representation of the output distributions in the latent space for the three MNIST classes [0, 1, 9] (red, green, and blue, respectively) without adversarial training, for the GCCS, cross-entropy (ND), Jacobian Regularization [53] (JR), Input Gradient Regularization [44] (IGR), and Cross-Lipschitz Regularization [52] (CLR) methods: a–e, no adversarial attack; f–j, FGSM; k–o, PGD (5 iterations); p–t, TGSM (5 iterations); u–y, JSMA (200 iterations, 1 pixel). To appreciate the extent to which our method outperforms the competitors, one should also consider the scale of the axes in each individual plot, which clearly shows that the latent distributions for GCCS are significantly more distant in the latent space, and hence more separable
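The panels above are produced by attacking test inputs and then plotting the resulting latent representations. As an illustration of the attack step, the following is a minimal, hedged sketch of one-step FGSM (the attack in panels f–j) on a toy linear softmax classifier; the model, weights, and epsilon value are illustrative assumptions, not the paper's actual network or settings.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, b, eps=0.1):
    """One-step FGSM on a linear softmax classifier (illustrative only).

    For cross-entropy loss with logits z = W x + b, the gradient of the
    loss with respect to the input x is W^T (softmax(z) - onehot(y));
    FGSM perturbs x by eps times the sign of that gradient.
    """
    p = softmax(W @ x + b)
    onehot = np.eye(W.shape[0])[y]
    grad = W.T @ (p - onehot)
    return x + eps * np.sign(grad)

# Toy example with assumed dimensions: 3 classes, 4 input features.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = np.zeros(3)
x = rng.normal(size=4)
y = 0  # true class label

x_adv = fgsm(x, y, W, b, eps=0.1)

def loss(v):
    """Cross-entropy of the toy classifier at input v."""
    return -np.log(softmax(W @ v + b)[y])
```

Because the toy loss is convex in the input, the FGSM step cannot decrease it, so `loss(x_adv) >= loss(x)`; on a deep network the same step only approximately maximizes the loss.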
