Fig. 5 | EURASIP Journal on Information Security

From: Machine learning through cryptographic glasses: combating adversarial attacks by key-based diversified aggregation

Explanation of multi-channel randomization: given a training data set, the defender introduces a random perturbation \({\boldsymbol{\epsilon}}^{d_{l}}\), \(1 \le l \le L\), to each sample \(\{\mathbf{x}_{i}\}_{i=1}^{M}\) and trains L classifiers. Since the perturbation is known at training time, i.e., all samples fed to the same classifier l receive the same fixed "bias" \({\boldsymbol{\epsilon}}^{d_{l}}\), the randomization has only a limited impact on classification performance. Since the attacker has no access to the defender's perturbations and all of them are equally likely, the equivalent manifold faced by the attacker is expanded, which increases its entropy and thus the attacker's learning complexity.
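For illustration, a minimal sketch of this multi-channel randomization idea is given below. It assumes a toy setup: the data set, the scikit-learn calls (make_classification, LogisticRegression), the number of channels L, and the majority-vote aggregation are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of multi-channel randomization (assumptions, not the paper's code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)          # stands in for the defender's secret key (assumption)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

L = 4                                        # number of channels / classifiers (illustrative)
epsilons = [rng.normal(scale=0.5, size=X.shape[1]) for _ in range(L)]

# Each channel l adds its own fixed perturbation eps^{d_l} to every training sample,
# so within a channel the "bias" is stationary and training is barely affected.
classifiers = [LogisticRegression(max_iter=1000).fit(X + eps, y) for eps in epsilons]

# At inference, each classifier sees the input shifted by its own secret perturbation;
# a simple aggregation (majority vote here) combines the L channel decisions.
def predict(x):
    votes = [clf.predict((x + eps).reshape(1, -1))[0]
             for clf, eps in zip(classifiers, epsilons)]
    return np.bincount(votes).argmax()
```

Because the perturbations are drawn from the defender's key and never revealed, an attacker crafting adversarial examples must account for all equally likely perturbation choices at once, which is the source of the increased learning complexity described in the caption.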