Channel Pruning for Accelerating Convolutional Neural Networks via Wasserstein Metric
Haoran Duan (University of Science and Technology of China (USTC))*, Hui Li (University of Science and Technology of China (USTC))
Keywords: Optimization Methods
Abstract:
Channel pruning is an effective way to accelerate deep convolutional neural networks. However, it remains challenging to reduce computational complexity while preserving the performance of deep models. In this paper, we propose a novel channel pruning method via the Wasserstein metric. First, the output features of a channel are aggregated through the Wasserstein barycenter, which we call the basic response of the channel. Then, a channel discrepancy based on the Wasserstein distance is introduced to measure channel importance, accounting for both a channel's feature representation ability and the substitutability of its basic response. Finally, the channels with the smallest discrepancies are removed directly, and the accuracy loss of the pruned model is recovered by fine-tuning. Extensive experiments on popular benchmarks and various network architectures demonstrate that the proposed approach outperforms existing methods.
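To make the criterion concrete, the sketch below illustrates one plausible reading of the pipeline in NumPy/SciPy; it is not the authors' implementation. It assumes each channel's output features are flattened to 1-D value distributions, approximates the Wasserstein barycenter by averaging per-sample sorted values (quantile averaging, which is exact for 1-D distributions), and scores substitutability as the Wasserstein distance to the nearest other basic response. The function names, the flattening, and the nearest-neighbor discrepancy are illustrative assumptions, since the abstract does not specify these details.

```python
# A minimal sketch of the pruning criterion described in the abstract,
# under the assumptions stated above; not the authors' implementation.
import numpy as np
from scipy.stats import wasserstein_distance

def basic_responses(features):
    """features: array of shape (n_samples, n_channels, n_values).
    Returns one 'basic response' per channel: the 1-D Wasserstein
    barycenter of that channel's per-sample value distributions,
    computed by averaging sorted samples (quantile averaging)."""
    sorted_vals = np.sort(features, axis=-1)   # per-sample quantiles
    return sorted_vals.mean(axis=0)            # (n_channels, n_values)

def channel_discrepancies(responses):
    """For each channel, the Wasserstein distance to its nearest other
    basic response; a small value means the channel is substitutable."""
    n = responses.shape[0]
    disc = np.full(n, np.inf)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = wasserstein_distance(responses[i], responses[j])
                disc[i] = min(disc[i], d)
    return disc

# Toy usage: prune the channels whose basic responses are most substitutable.
rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 16, 64))          # fake activations over a batch
disc = channel_discrepancies(basic_responses(feats))
prune_ratio = 0.25
keep = np.argsort(disc)[int(prune_ratio * len(disc)):]
print("channels kept:", np.sort(keep))
```

In a real network, the features would be collected with forward hooks over a calibration set, and the kept indices would be used to slice the corresponding convolution's weight tensors before fine-tuning.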
Similar Papers
To filter prune, or to layer prune, that is the question
Sara Elkerdawy (University of Alberta)*, Mostafa Elhoushi (Huawei Technologies), Abhineet Singh (University of Alberta), Hong Zhang (University of Alberta), Nilanjan Ray (University of Alberta)

Bridging Adversarial and Statistical Domain Transfer via Spectral Adaptation Networks
Christoph Raab (FHWS)*, Philipp Väth (FHWS), Peter Meier (FHWS), Frank-Michael Schleif (FHWS)

Towards Optimal Filter Pruning with Balanced Performance and Pruning Speed
Dong Li (Nuctech)*, Sitong Chen (Nuctech), Xudong Liu (Nuctech), Yunda Sun (Nuctech), Li Zhang (Nuctech)
