Table 5 F-score obtained through regular training vs. fine-tuning on different benchmark datasets with the competing techniques when no adversarial attack is performed

From: Gaussian class-conditional simplex loss for accurate, adversarially robust deep classifier training

| Method | MNIST ResNet-18 | FMNIST ResNet-18 | SVHN ResNet-18 | CIFAR-10 ResNet-18 | CIFAR-10 Shake-Shake-96 | CIFAR-100 Shake-Shake-112 |
|---|---|---|---|---|---|---|
| GCCS (regular training) | 99.58 | 92.66 | 94.17 | 82.93 | 96.18 | 76.49 |
| GCCS (fine-tuning) | 99.64 | 93.80 | 95.28 | 81.46 | 97.05 | 77.72 |
| No Defense (cross-entropy loss) | 99.35 | 91.88 | 93.70 | 78.59 | 95.77 | 76.55 |
| Jacobian Reg. (regular training) [53] | 98.98 | 91.73 | 93.68 | 69.32 | - | - |
| Jacobian Reg. (fine-tuning) [53] | 98.51 | 92.41 | 93.24 | 82.20 | - | - |
| Input Gradient Reg. (regular training) [44] | 97.96 | 88.51 | 93.26 | 78.70 | 96.58 | 76.24 |
| Input Gradient Reg. (fine-tuning) [44] | 99.08 | 92.38 | 92.62 | 76.39 | 96.98 | 75.59 |
| Cross Lipschitz (regular training) [52] | 96.64 | 92.52 | 90.55 | 80.15 | - | - |
| Cross Lipschitz (fine-tuning) [52] | 98.75 | 92.39 | 92.97 | 79.22 | - | - |
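For reference, values of this kind are obtained by evaluating a trained classifier's predictions on the clean test set with the standard multi-class F-score. The snippet below is a minimal sketch only, assuming scikit-learn is available and that the metric is macro-averaged over classes and reported on a 0-100 scale; the function name `percent_f_score` and the label arrays are illustrative assumptions, not part of the paper.

```python
# Minimal sketch: multi-class F-score of clean (non-adversarial) predictions.
# Assumptions (not stated in the table): macro averaging over classes and
# reporting as a percentage, matching the 0-100 scale of the values above.
from sklearn.metrics import f1_score

def percent_f_score(y_true, y_pred):
    """Macro-averaged F1 over all classes, scaled to [0, 100]."""
    return 100.0 * f1_score(y_true, y_pred, average="macro")

# Example usage with dummy labels for a 3-class problem.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]
print(f"F-score: {percent_f_score(y_true, y_pred):.2f}")
```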