# The Robustness of Deep Features

This leads us to believe that a hardened network's robustness is mainly due to robust deep feature representations, and that robustness is preserved if we re-train on top of the deep features.

# Low-Data Regime

• Transferred: training a fully connected layer, using natural examples from the target task, on top of the frozen feature extractor of the Free-4 robust ImageNet model.
• Free trained: free-training/adversarial training from scratch on the target domain.
• Fine-tuned with AT: fine-tuning on adversarial examples of the target task, starting from the Free-4 robust ImageNet model.

(Reading each plot from right to left:) when data is scarce, Transferred performs best in all three plots; as the amount of data grows, Fine-tuned eventually reaches the highest robustness.

1. The experiments use at most 250 images per class, which is still quite small, so the accuracy of Free trained is relatively low.
2. In this experiment, Transferred trains roughly three times faster than Fine-tuned.
3. In this experiment, transfer learning retrains the weights of only a single layer.
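The Transferred setup above can be sketched in PyTorch. The tiny convolutional backbone below is a hypothetical stand-in for the Free-4 robust ImageNet feature extractor (which would normally be loaded from a pretrained checkpoint); the point is only to show freezing the extractor and training the single fully connected layer on natural examples.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the Free-4 robust ImageNet backbone;
# in practice, load the robust model's pretrained feature extractor here.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

# Freeze the robust feature extractor.
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Linear(8, 10)  # the single retrained fully connected layer
opt = torch.optim.SGD(head.parameters(), lr=0.01)

x = torch.randn(4, 3, 32, 32)   # natural examples from the target task
y = torch.randint(0, 10, (4,))
with torch.no_grad():           # frozen features need no gradients
    z = backbone(x)
loss = nn.functional.cross_entropy(head(z), y)
loss.backward()
opt.step()
```

Because only the head receives gradients, each step is much cheaper than fine-tuning the full network, consistent with the roughly 3x speedup noted above.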

# Training Deeper Networks on Top of Robust Feature Extractors

• Adding more layers can result in a slight drop in validation accuracy due to overfitting.
• Adding more layers on top of this extractor does not hurt robustness (possibly because of the increased network capacity).
• Simply adding more layers does not improve the validation accuracy and just results in more overfitting (i.e. training accuracy becomes 100%). We can slightly improve generalization using batch norm (BN) and dropout (DO).
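A minimal sketch of such a deeper head with batch norm and dropout, assuming PyTorch; the layer widths (512-dim features, 256 hidden units, 10 classes) are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Deeper head stacked on the frozen robust feature extractor.
# BN and dropout are the regularizers that slightly improve
# generalization when extra layers alone would just overfit.
deep_head = nn.Sequential(
    nn.Linear(512, 256),
    nn.BatchNorm1d(256),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(256, 10),
)

deep_head.eval()  # eval mode: BN uses running stats, dropout is a no-op
out = deep_head(torch.randn(4, 512))
```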

# Adversarially Robust Transfer Learning with the LwF Loss

$$\min_{w, \theta} l(z(x, \theta), y, w)+\lambda_{d} \cdot d(z(x, \theta), z_{0}(x, \theta_{r}^{*}))$$

• $\lambda_d$: penalty coefficient
• $z_{0}(x, \theta_{r}^{*})$: representation produced by the robust model
• $z(x, \theta)$: representation produced by the model weights being trained
• $d(\cdot)$: a distance, e.g. the $l_2$ distance
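The loss above can be sketched as a small PyTorch function, realizing $l$ as cross-entropy on the head's logits and $d$ as a (mean) squared $l_2$ distance; the function name and the use of `mse_loss` are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def lwf_loss(logits, y, z, z0, lam_d):
    # Task loss l(z(x, theta), y, w) as cross-entropy on the logits,
    # plus lambda_d * d(z, z0), where d is the mean squared l2 distance
    # to the robust model's representation z0 (kept fixed via detach).
    return F.cross_entropy(logits, y) + lam_d * F.mse_loss(z, z0.detach())

logits = torch.randn(4, 10)
y = torch.randint(0, 10, (4,))
z = torch.randn(4, 512)
```

Setting $\lambda_d$ larger pulls the new representation harder toward the frozen robust one, trading target-task fit for preserved robustness.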
