Abstract
Neural network pruning is a popular approach to reducing the computational cost of training and/or deploying a network while minimizing accuracy loss. Pruning methods that remove individual weights (fine granularity) can remove more total network parameters before reaching a given degree of accuracy loss, while methods that preserve some or all of a network’s structure (coarser granularity, such as pruning channels from a CNN) take better advantage of hardware and software optimized for dense matrix computation. We compare intelligent iterative pruning using several criteria drawn from the literature against random pruning at initialization, across multiple granularities, on two architectures and three image classification tasks. Our work is the first direct and comprehensive investigation of the relationship between pruning granularity and the efficacy of intelligent pruning relative to a random-pruning baseline. We find that the accuracy advantage of intelligent over random pruning decreases dramatically as granularity becomes coarser, with minimal advantage for intelligent pruning at a granularity coarse enough to fully preserve network structure. For instance, at pruning rates where random pruning leaves ResNet-20 at 85.0% test accuracy on CIFAR-10 after 30,000 training iterations, intelligent weight pruning with the best-in-context criterion leaves it at about 90.0% accuracy (on par with the unpruned network), kernel pruning leaves it at about 86.5%, and channel pruning leaves it at about 85.5%. Our results suggest that, compared to coarse pruning, fine pruning combined with an efficient implementation of the resulting networks is a more promising direction for easing the trade-off between high accuracy and low computational cost.