Table 3 Classification accuracy on the MNIST test dataset under the white-box attack

From: Secure machine learning against adversarial samples at test time

| Model | FGSM | C&W | BIM | DeepFool |
|---|---|---|---|---|
| Original | 29% | 7% | 7% | 29% |
| Adversarial training (ART) | 98% | 49% | 98% | 42% |
| Robust classifier | 100% | 98% | 99% | 99% |