Towards Interpreting Deep Neural Networks via Understanding Layer Behaviors

Abstract

Deep neural networks (DNNs) have achieved unprecedented practical success in many applications. However, how to interpret DNNs is still an open problem; in particular, how hidden layers behave is not clearly understood. In this paper, relying on a teacher-student paradigm, we seek to understand the layer behaviors of DNNs by “monitoring” both across-layer and single-layer distribution evolution. Here, “across-layer” refers to layer behavior along the depth, while “single-layer” refers to the behavior of a specific layer along training epochs. Relying on optimal transport theory, we employ the Wasserstein distance (W-distance) to measure the divergence between a layer distribution and the target distribution. Theoretically, we prove that i) the W-distance between the distribution of any layer and the target distribution tends to decrease along the depth; ii) for a specific layer, the W-distance between its distribution at a given iteration and the target distribution tends to decrease over training iterations; and iii) nevertheless, a deep layer is not always better than a shallow layer for some samples. Moreover, our results help analyze the stability of layer distributions and explain why auxiliary losses aid the training of DNNs. Extensive experiments justify our theoretical findings.
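For concreteness, the optimal-transport quantity the abstract refers to is the p-Wasserstein distance between two distributions $\mu$ and $\nu$,

$$
W_p(\mu, \nu) = \left( \inf_{\gamma \in \Pi(\mu, \nu)} \int \|x - y\|^p \, \mathrm{d}\gamma(x, y) \right)^{1/p},
$$

where $\Pi(\mu, \nu)$ denotes the set of couplings whose marginals are $\mu$ and $\nu$.

The sketch below illustrates the “monitoring” idea only, and is not the paper’s implementation: it tracks, layer by layer, a 1-D Wasserstein distance between a toy network’s activations and a target distribution. The architecture, the mean projection to 1-D, and the Gaussian target are illustrative assumptions, and SciPy’s `wasserstein_distance` stands in for the paper’s metric.

```python
# Hypothetical sketch: monitor a per-layer W-distance to a target distribution.
# Assumptions (not from the paper): toy 6-layer MLP, activations projected to
# 1-D by a mean over units, Gaussian target samples.
import torch
import torch.nn as nn
from scipy.stats import wasserstein_distance

torch.manual_seed(0)

# Toy "student" network; layer sizes are illustrative.
layers = nn.ModuleList(
    [nn.Sequential(nn.Linear(16, 16), nn.ReLU()) for _ in range(6)]
)

x = torch.randn(1024, 16)            # input samples
target = torch.randn(1024).numpy()   # stand-in for the target distribution

h = x
for depth, layer in enumerate(layers, start=1):
    h = layer(h)
    # Project activations to 1-D so SciPy's 1-D W-distance applies;
    # the paper reasons about the full layer distributions.
    proj = h.mean(dim=1).detach().numpy()
    d = wasserstein_distance(proj, target)
    print(f"layer {depth}: W-distance to target = {d:.4f}")
```

With a trained network, the paper’s first result predicts these per-layer distances tend to decrease with depth; an untrained toy network like this one will not generally exhibit that trend.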

Publication
Jiezhang Cao
Ph.D. student

