
Table 4 Maximum test accuracy (%) obtained with regular training vs. fine-tuning on different benchmark datasets for the competing techniques, when no adversarial attack is performed

From: Gaussian class-conditional simplex loss for accurate, adversarially robust deep classifier training

| Method | MNIST ResNet-18 | FMNIST ResNet-18 | SVHN ResNet-18 | CIFAR-10 ResNet-18 | CIFAR-10 Shake-Shake-96 | CIFAR-100 Shake-Shake-112 |
|---|---|---|---|---|---|---|
| GCCS (regular training) | 99.58 | 92.69 | 94.20 | 82.97 | 96.19 | 76.53 |
| GCCS (fine-tuning) | 99.64 | 93.83 | 95.58 | 81.52 | 97.06 | 77.48 |
| No Defense (cross-entropy loss) | 99.35 | 91.91 | 94.12 | 78.59 | 95.78 | 76.30 |
| Jacobian Reg. (regular training) [53] | 98.99 | 91.79 | 94.11 | 70.09 | - | - |
| Jacobian Reg. (fine-tuning) [53] | 98.53 | 92.43 | 93.54 | 82.09 | - | - |
| Input Gradient Reg. (regular training) [44] | 97.98 | 88.45 | 93.77 | 78.32 | 96.50 | 74.89 |
| Input Gradient Reg. (fine-tuning) [44] | 99.11 | 92.55 | 93.17 | 76.15 | 96.90 | 75.68 |
| Cross Lipschitz (regular training) [52] | 96.78 | 92.54 | 91.42 | 80.10 | - | - |
| Cross Lipschitz (fine-tuning) [52] | 98.77 | 92.41 | 93.50 | 79.39 | - | - |
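The caption describes each entry as the maximum test accuracy reached over a training run. Purely as an illustration of that aggregation step, the sketch below collects per-epoch test accuracies for each (method, dataset) pair and reports the best value per pair. All names and numbers in the sketch are placeholders, not the paper's code or results.

```python
# Illustrative only: how "maximum test accuracy" entries such as those in
# Table 4 could be aggregated from per-epoch evaluation logs.
# The accuracy values below are placeholders, not results from the paper.

from typing import Dict, List

# Hypothetical per-epoch test accuracies (%) for each (method, dataset) pair.
epoch_accuracy: Dict[str, Dict[str, List[float]]] = {
    "GCCS (regular training)": {
        "MNIST ResNet-18": [98.7, 99.2, 99.5],        # placeholder values
        "CIFAR-10 ResNet-18": [78.1, 81.4, 82.0],     # placeholder values
    },
    "No Defense (cross-entropy loss)": {
        "MNIST ResNet-18": [98.5, 99.0, 99.3],        # placeholder values
        "CIFAR-10 ResNet-18": [74.9, 77.5, 78.2],     # placeholder values
    },
}


def max_accuracy_table(
    logs: Dict[str, Dict[str, List[float]]]
) -> Dict[str, Dict[str, float]]:
    """Return, for each method and dataset, the best accuracy over all epochs."""
    return {
        method: {dataset: max(accs) for dataset, accs in per_dataset.items()}
        for method, per_dataset in logs.items()
    }


if __name__ == "__main__":
    table = max_accuracy_table(epoch_accuracy)
    for method, per_dataset in table.items():
        for dataset, acc in per_dataset.items():
            print(f"{method:35s} {dataset:20s} {acc:6.2f}")
```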