Differentiable Subset Pruning of Transformer Heads

Multi-head attention, a collection of several attention mechanisms that independently attend to different parts of the input, is the key ingredient in the Transformer. Recent work has shown, however, that a large proportion of the heads in a Transformer's multi-head attention mechanism can be safely pruned away without significantly harming the performance of the model; such pruning leads to models that are noticeably smaller and faster in practice. Our work introduces a new head pruning technique that we term differentiable subset pruning. Intuitively, our method learns per-head importance variables and then enforces a user-specified hard constraint on the number of unpruned heads. The importance variables are learned via stochastic gradient descent. We conduct experiments on natural language inference and machine translation; we show that differentiable subset pruning performs comparably to or better than previous methods while offering precise control of the sparsity level.


1 Introduction
The Transformer (Vaswani et al., 2017) has become one of the most popular neural architectures used in NLP. Adaptations of the Transformer have been applied to nearly every popular NLP task, e.g. parsing (Zhou and Zhao, 2019), machine translation, question answering (Yang et al., 2019), inter alia. Transformers also form the backbone of state-of-the-art pre-trained language models, e.g. BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019) and GPT-3 (Brown et al., 2020), that have further boosted performance on various data-driven NLP problems.
The key ingredient in the Transformer architecture is the multi-head attention mechanism, which is an assembly of multiple attention functions (Bahdanau et al., 2015) applied in parallel. In practice, each attention head works independently, which allows the heads to capture different kinds of linguistic phenomena (Clark et al., 2019; Goldberg, 2019; Ettinger, 2020; Jawahar et al., 2019). A natural question arises in this context: How many heads does a Transformer need? Michel et al. (2019) offer the insight that a large portion of the Transformer's heads can be pruned without significantly degrading the test accuracy on the desired task. The experimental evidence behind their claim is a simple greedy procedure that sequentially removes heads. However, such a greedy procedure is only a coarse approximation to the underlying combinatorial optimization problem; a better pruner could reveal that a much larger portion of the heads can be safely removed. To provide a more robust answer to Michel et al.'s question, we build a high-performance pruner and show that their approach significantly underestimates the number of Transformer heads that can be pruned away.
From a bird's eye view, our paper contributes the proposal that Transformer head pruning is best viewed as a subset selection problem. Subset selection is common across many areas of NLP: from extractive summarization (Gillenwater et al., 2012) to vowel typology (Cotterell and Eisner, 2017). In the case of head pruning, the concrete idea is that the user specifies a number of heads K that they would like their Transformer to have, depending on their budgetary and other constraints, and the pruner then enforces this constraint. Methodologically, we present a differentiable subset pruner (Fig. 1) that makes use of Gumbel machinery, specifically the Gumbel top-K procedure of Vieira (2014). This construction allows us to relax our pruner into a differentiable sampling routine that qualitatively resembles a discrete analogue of dropout (Srivastava et al., 2014; Gal and Ghahramani, 2016).
Empirically, we perform experiments on two common NLP tasks: natural language inference (MNLI; Williams et al., 2018) and machine translation (IWSLT2014; Cettolo et al., 2014). We show that our differentiable subset pruning scheme outperforms two recently proposed Transformer head pruners, Michel et al. (2019) and Voita et al. (2019), on both tasks in terms of the sparsity-performance trade-off. Our method recovers a pruned Transformer that has ≈ 80% accuracy on MNLI and ≈ 30 BLEU score on IWSLT when more than 90% of the heads are removed, which brings about a ≈ 33% inference speedup and a ≈ 24% reduction in model size (see § 5.4). Our experiments also suggest several broader conclusions about pruning Transformers. In this paper, we taxonomize existing pruning methods into two pruning paradigms: pipelined pruning and joint pruning. Pipelined pruning consists of two stages: i) training or fine-tuning an over-parameterized model on the target task and ii) pruning the model after training. A number of techniques fall into this category (LeCun et al., 1990; Hassibi et al., 1994; Molchanov et al., 2017b). In contrast, joint pruning blends the pruning objective into the training objective by training or fine-tuning the over-parameterized model with a sparsity-enforcing regularizer, sometimes followed by a trivial post-processing step to arrive at a final sparse model. Kingma et al. (2015) and Louizos et al. (2018) are examples of this kind of pruning. We show that pipelined head pruning schemes, such as that of Michel et al. (2019), underperform compared to joint head pruning schemes, such as that of Voita et al. (2019). Our differentiable subset pruner can be adapted to both paradigms, and it outperforms prior work in both, especially in the high-sparsity regime.

2 Background: Multi-head Attention
In this section, we provide a detailed overview of multi-head attention (Vaswani et al., 2017) in order to develop the specific technical vocabulary to discuss our approaches for head pruning. We omit details about other parts of the Transformer and refer the reader back to the original work of Vaswani et al. (2017). First, let z = z_1, . . . , z_T be a sequence of T real vectors where z_t ∈ R^d, and let q ∈ R^d be a query vector. An attention mechanism is defined as

att(z, q) = W_o ∑_{t=1}^{T} α_t W_v z_t    (1)

α_t = softmax_t ( q^⊤ W_q^⊤ W_k z_t / √d )    (2)
The projection matrices W_o, W_v, W_q, W_k ∈ R^{d×d} are learnable parameters. In self-attention, the query q comes from the same sequence z. A Transformer is composed of L identical layers. In layer 1 ≤ l ≤ L, H_l different attention mechanisms are applied in parallel; importantly, it is this parallelism that has led to the rise of the Transformer: it is a more efficient architecture in practice, so it can be trained on more data. Each individual attention mechanism is referred to as a head; thus, multi-head attention is the simultaneous application of multiple attention heads in a single architecture. In Vaswani et al. (2017), the multiple heads are combined through summation:

MHAtt_l(z, q) = ∑_{h=1}^{H_l} att_lh(z, q)    (3)

where att_lh is the h-th attention head in the l-th layer.
We also introduce a gate variable g_lh that takes values in the interval [0, 1]:

MHAtt_l(z, q) = ∑_{h=1}^{H_l} g_lh · att_lh(z, q)    (4)

Inserting g_lh into the multi-head attention enables our pruning approach: setting the gate variable to g_lh = 0 means the head att_lh is pruned away. In the following sections, for the sake of notational simplicity, we ignore the layer structure of heads and label heads with a single index h ∈ {1, . . . , H}, where H = ∑_{l=1}^{L} H_l is the total number of heads in the unpruned model.
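To make (4) concrete, here is a minimal PyTorch sketch of the gated head combination; the function name and tensor layout are our own choices for illustration, assuming the per-head outputs att_lh(z, q) have already been computed and stacked.

```python
import torch

def gated_multi_head(head_outputs: torch.Tensor, gates: torch.Tensor) -> torch.Tensor:
    """Gated sum over heads, as in (4).

    head_outputs: (H, T, d) -- stacked outputs att_h(z, q) of the H heads
    gates:        (H,)      -- g_h in [0, 1]; g_h = 0 prunes head h entirely
    """
    # Broadcast each gate over its head's output, then sum over the head axis.
    return (gates[:, None, None] * head_outputs).sum(dim=0)

# Toy usage: 4 heads, sequence length 5, model dimension 8.
outs = torch.randn(4, 5, 8)
g = torch.tensor([1.0, 0.0, 1.0, 0.0])  # heads 2 and 4 pruned
combined = gated_multi_head(outs, g)    # shape (5, 8)
```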

3 Differentiable Subset Pruning
In this section, we propose a new head pruning technique that we term differentiable subset pruning.
The key insight behind our approach is that head pruning can be viewed as subset selection. Concretely, our goal is to find a subset of K heads (where K is a user-specified positive integer) that still allows the model to achieve high performance. Many neural network pruners, e.g. Voita et al. (2019)'s proposed head pruning technique, make it notably difficult to pre-specify the number of pruned heads K. To make our subset pruner differentiable, we apply the Gumbel-softmax trick (Jang et al., 2017) and its extension to subset selection (Vieira, 2014; Xie and Ermon, 2019). This gives us a pruning scheme (Fig. 1) that always returns the specified number of heads and can be applied in a pipelined or a joint setting. In both cases, the differentiability is necessary to learn the head weights.

Figure 1: (a) Standard multi-head attention: attention heads in the same layer l are combined by summation. (b) Multi-head attention with differentiable subset pruning: a gate g_lh ∈ [0, 1] is applied to each head. The gate values are determined by the Gumbel soft top-K algorithm, which takes the head importance scores ι_lh as inputs. The least "important" heads have gate values close to 0, so they are pruned in effect.

3.1 Background: Gumbel-(soft)max
Let H = {1, . . . , H} be the set of Transformer heads in a given architecture. Our goal is to return a subset of heads J ⊆ H where |J| = K for any user-specified value of K. We use the notation ι_h > 0 to denote the head importance score of a specific head h. The head importance score intuitively corresponds to how much we would like to have head h in the subset of heads J.
We start our exposition by reviewing the Gumbel trick in the context of selecting a single head (K = 1) and then move on to discussing its extension to subset selection. Given the head importance scores ι_h, suppose we would like to sample a subset J of size 1 according to the following distribution:

p(J = {h}) = ι_h / Z    (5)

where Z = ∑_{h=1}^{H} ι_h is the normalization constant. The simplest way to achieve this is to use standard categorical sampling. However, as has been noted by Maddison et al. (2014), categorical sampling is not differentiable. Luckily, there is a two-step process to massage categorical sampling into a differentiable sampling procedure: 1) reparameterize the categorical distribution using Gumbels and 2) soften the argmax into a softmax.

3.1.1 Step 1: Reparameterization
We can reparameterize categorical sampling using the Gumbel-max trick (Gumbel, 1954) to first separate the sampling from the parameter that we wish to differentiate with respect to. The idea of the Gumbel-max trick is that categorical sampling can be viewed as a perturb-and-max method. If we first perturb the logits log(ι_h) with Gumbel noise n_h ∼ Gumbel(0, 1) such that r_h = log(ι_h) + n_h, then sampling from the categorical distribution is equivalent to taking an argmax:

h* = argmax_{h ∈ H} r_h    (6)

Were argmax differentiable, we would be done; unfortunately, it is not.
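The equivalence in (6) is easy to check empirically; the toy NumPy snippet below (ours, not from the paper) compares the frequencies of perturb-and-max draws against the target distribution in (5).

```python
import numpy as np

rng = np.random.default_rng(0)
iota = np.array([1.0, 2.0, 4.0, 8.0])   # head importance scores, ι_h > 0
p = iota / iota.sum()                   # target categorical distribution (5)

# Perturb-and-max: argmax over log ι_h + Gumbel(0, 1) noise, as in (6).
samples = 100_000
noise = rng.gumbel(size=(samples, len(iota)))
draws = np.argmax(np.log(iota) + noise, axis=1)

empirical = np.bincount(draws, minlength=len(iota)) / samples
print(np.round(p, 3))          # [0.067 0.133 0.267 0.533]
print(np.round(empirical, 3))  # matches p up to sampling error
```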

3.1.2 Step 2: Relaxing the argmax

Now, to construct a fully differentiable procedure, we replace the argmax with a softmax. The intuition here is that the output of the argmax may be viewed as a one-hot vector with the one at the index of the argmax. The insight, then, is to relax this one-hot vector into a softmax as follows:

g_h = exp(r_h) / ∑_{h'=1}^{H} exp(r_{h'})    (7)

This technique is called the Gumbel-softmax trick (Jang et al., 2017), and the resulting distribution is known as the Concrete distribution (Maddison et al., 2017). Note that using the Gumbel-softmax results in a biased estimate of the gradient; subsequent work removed this bias (Tucker et al., 2017). It is often desirable to add an additional annealing parameter τ > 0 to the Gumbel-softmax:

g_h = exp(r_h / τ) / ∑_{h'=1}^{H} exp(r_{h'} / τ)    (8)

As the temperature tends to zero, i.e. τ → 0, the softmax turns into the argmax. Thus, through the tunable τ, we can arbitrarily approximate the argmax as a differentiable function.
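For concreteness, a minimal sketch of the tempered Gumbel-softmax in (8); the Gumbel noise is generated with the standard −log(−log U) transform, and the function name is ours.

```python
import torch

def gumbel_softmax_gates(log_iota: torch.Tensor, tau: float) -> torch.Tensor:
    """Relaxed one-hot gate vector: softmax over Gumbel-perturbed logits, eq. (8)."""
    noise = -torch.log(-torch.log(torch.rand_like(log_iota)))  # Gumbel(0, 1) samples
    r = log_iota + noise                                       # perturbed logits r_h
    return torch.softmax(r / tau, dim=-1)

log_iota = torch.log(torch.tensor([1.0, 2.0, 4.0, 8.0]))
print(gumbel_softmax_gates(log_iota, tau=5.0))   # high τ: soft, near-uniform gates
print(gumbel_softmax_gates(log_iota, tau=0.05))  # low τ: close to a one-hot sample
```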

3.2 Differentiable Subset Selection
The Gumbel trick can be generalized to the case where we wish to sample an entire set of heads. This is called the Gumbel-top-K trick. The idea is that, rather than simply taking the max, we sort and take the top-K largest perturbed logits (Yellott, 1977; Vieira, 2014; Kool et al., 2019). One way to think of the algorithm is that we repeat the Gumbel trick K times, without replacement, until we have the desired number of heads. Following the exposition in § 3.1, we divide our discussion into two steps.

3.2.1 Step 1: Reparameterization
Similar to the top-1 case, we start by sampling the first head using the perturb-and-max strategy:

h*_1 = argmax_{h ∈ H} r_h    (9)

Then we remove h*_1 from the pool of heads under consideration and repeat the same procedure:

h*_2 = argmax_{h ∈ H \ {h*_1}} r_h    (10)
   ⋮
h*_K = argmax_{h ∈ H \ {h*_1, . . . , h*_{K−1}}} r_h    (11)

The probability of sampling these heads in this particular order is given by the following expression:

p(h*_1, . . . , h*_K) = ∏_{k=1}^{K} ι_{h*_k} / (Z − ∑_{j=1}^{k−1} ι_{h*_j})    (12)

Thus, the probability of a set J = {h_1, . . . , h_K} is given by

p(J) = ∑_{π ∈ S_K} ∏_{k=1}^{K} ι_{h_{π(k)}} / (Z − ∑_{j=1}^{k−1} ι_{h_{π(j)}})    (13)
where S_K is the set of all permutations of K items. This is hard to compute, as it involves a sum over permutations. For a detailed discussion of computing (13), we refer the reader to Vieira (2021a) and Vieira (2021b). Ultimately, however, computing the exact probability of a subset of heads J is unnecessary for our approach.
As an aside, we note that this procedure is equivalent to a differentiable version of the classical reservoir sampling algorithm (Vitter, 1985).
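Sampling the hard subset is only a few lines in practice; the sketch below (our naming) simply takes the indices of the K largest perturbed logits, which is exactly the sort-based view of (9)-(11).

```python
import torch

def gumbel_top_k(log_iota: torch.Tensor, k: int) -> torch.Tensor:
    """Sample a size-k subset of heads: the k largest Gumbel-perturbed logits."""
    noise = -torch.log(-torch.log(torch.rand_like(log_iota)))
    r = log_iota + noise
    return torch.topk(r, k).indices  # indices of the k sampled heads

log_iota = torch.log(torch.tensor([1.0, 2.0, 4.0, 8.0, 16.0]))
print(gumbel_top_k(log_iota, k=2))  # e.g. tensor([4, 3]); random, but biased toward high ι_h
```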

3.2.2 Step 2: Relaxing the argmax
The Gumbel-top-K trick can be relaxed similarly to the top-1 case. This was first shown in detail by Xie and Ermon (2019). Here, we provide an overview of the algorithm by analogy to the top-1 case. The output of Gumbel-top-K can be viewed as a K-hot vector, which is the sum of the K one-hot vectors produced in (9)-(11). As before, we begin by relaxing the one-hot vector of the first head:

g^(1)_h = exp(r^(1)_h / τ) / ∑_{h'=1}^{H} exp(r^(1)_{h'} / τ), where r^(1)_h = r_h    (14)

This is a straightforward analogue of the argmax relaxation discussed in § 3.1.2. Next, we continue relaxing the successive argmaxes with successive softmaxes (Plötz and Roth, 2018) as follows:

g^(2)_h = exp(r^(2)_h / τ) / ∑_{h'=1}^{H} exp(r^(2)_{h'} / τ)    (15)
   ⋮
g^(K)_h = exp(r^(K)_h / τ) / ∑_{h'=1}^{H} exp(r^(K)_{h'} / τ)    (16)

where the perturbed logits are updated after each step according to the recursion

r^(k+1)_h = r^(k)_h + log(1 − g^(k)_h)    (17)

Xie and Ermon (2019) argue that the above recursion corresponds to a reasonable relaxation of the Gumbel-top-K trick presented in § 3.2.1. To understand the motivation behind the recursion in (17), note that if g^(k)_h = 1, which would happen if the head had been sampled exactly, i.e. with no relaxation, then that head could not be selected again, since r^(k+1)_h = −∞. As the scheme is a relaxation of hard sampling, g^(k)_h never reaches 1 exactly as long as r^(k)_h is finite and τ > 0. Thus, the procedure corresponds to something akin to soft sampling without replacement.
Finally, we sum over all the relaxed one-hot vectors g^(k)_h in (14)-(16) to arrive at our softened K-hot gate:

g_h = ∑_{k=1}^{K} g^(k)_h    (19)

It is (19) that we finally plug into the gated attention mechanism presented in (4).
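Putting (14)-(19) together, a compact PyTorch sketch of the relaxed top-K gates could look as follows; the function name and the small epsilon guard inside log(1 − g) are our own choices, not prescribed by the derivation.

```python
import torch

def soft_top_k_gates(log_iota: torch.Tensor, k: int, tau: float) -> torch.Tensor:
    """Differentiable subset gates: run recursion (17) for k rounds and sum the
    relaxed one-hot vectors (14)-(16) into the soft k-hot gate of (19)."""
    noise = -torch.log(-torch.log(torch.rand_like(log_iota)))
    r = log_iota + noise                   # perturbed logits r_h^(1)
    gates = torch.zeros_like(log_iota)
    eps = 1e-12                            # numerical guard so log1p(-g) stays finite
    for _ in range(k):
        g = torch.softmax(r / tau, dim=-1)            # relaxed one-hot vector
        gates = gates + g                             # accumulate the sum in (19)
        r = r + torch.log1p(-g.clamp(max=1 - eps))    # recursion (17)
    return gates

log_iota = torch.log(torch.tensor([1.0, 2.0, 4.0, 8.0, 16.0]))
print(soft_top_k_gates(log_iota, k=2, tau=0.1))  # ≈ 2-hot vector; entries sum to 2
```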

3.3 Training the Subset Pruner
The differentiable subset pruning approach can be applied in either a pipelined or a joint pruning setting. (Please refer back to the last paragraph of § 1 for a discussion of the two settings.) Our approach is parameterized identically in both settings, however. Specifically, we define the head importance score as follows:

ι_h = exp(w_h)    (20)

where w_h is the h-th component of a vector of real-valued head weights w ∈ R^H. In our setting, the distinction between pipelined pruning and joint pruning is relatively trivial. In the pipelined setting, we learn the head importance weights w for a model that has already been trained on the task and leave the model parameters untouched. In the joint setting, on the other hand, we simultaneously learn the head importance weights and the model parameters. In this regard, our differentiable subset pruner much more closely resembles Voita et al. (2019) than it does Michel et al. (2019).
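Since log ι_h = w_h under (20), the weights w can be fed directly to the Gumbel machinery as the log-importance scores. The toy loop below sketches the joint setting, reusing soft_top_k_gates from the sketch in § 3.2.2; the quadratic task_loss is a stand-in for the model's actual training loss, not anything from the paper.

```python
import torch

H, K = 16, 4
target = torch.zeros(H)
target[:K] = 1.0                    # toy objective: prefer the first K heads

def task_loss(gates):               # placeholder for the real gated-model loss
    return ((gates - target) ** 2).sum()

w = torch.zeros(H, requires_grad=True)   # head weights; ι_h = exp(w_h) > 0 by (20)
opt = torch.optim.Adam([w], lr=0.1)

for step in range(200):
    gates = soft_top_k_gates(w, K, tau=0.5)  # pass w directly: log ι_h = w_h
    loss = task_loss(gates)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(torch.topk(w.detach(), K).indices)     # the K heads the pruner settled on
```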

4 Experiments

4.1 Model and Data
We investigate two Transformer-based models in the empirical portion of the paper.
BERT. BERT (Bidirectional Encoder Representations from Transformers; Devlin et al., 2019) is essentially a Transformer encoder. Since there is no decoder part, BERT only has self-attention. We focus on the base-uncased model with 12 layers and 12 heads in each layer (144 heads in total).
We use the implementation of Hugging Face (Wolf et al., 2020). The model is pre-trained on large text corpora using masked language modeling (MLM) and next sentence prediction (NSP). We fine-tune BERT on the Multi-Genre Natural Language Inference (MNLI; Williams et al., 2018) corpus. The hyper-parameters are tuned on the "matched" validation set, and accuracy is reported on the "mismatched" validation set.
Enc-Dec. We implement a Transformer-based encoder-decoder model with 6 encoder layers, 6 decoder layers, and 6 heads in each layer (72 heads in total). The model has three types of attention heads: encoder self-attention, decoder self-attention, and encoder-decoder cross-attention. We use the fairseq toolkit (Ott et al., 2019) for our implementation. We train the model on the International Workshop on Spoken Language Translation (IWSLT2014; Cettolo et al., 2014) German-to-English dataset. The hyper-parameters are tuned on the validation set, and 4-gram BLEU scores computed with multi-bleu.perl (Koehn et al., 2007) are reported on the held-out test set. We use beam search with a beam size of 5 for decoding.

4.2 Baselines
We compare our approach to pruners in both the pipelined and the joint paradigms. We refer to the pipelined version of our differentiable subset pruning as pipelined DSP and to the joint version as joint DSP. Our specific points of comparison are listed below.

Michel et al.

In Michel et al. (2019), the importance of a head is estimated with a gradient-based proxy score (Molchanov et al., 2017b):

I_h = E_x | ∂L(x) / ∂g_h |    (21)

where L is the task-specific loss function and the expectation is taken over the data. Then, all the heads in the model are sorted accordingly and removed one by one in a greedy fashion. The importance scores are re-computed every time a certain number of heads is removed.
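A sketch of how such scores might be accumulated in practice is given below; model and batches are placeholders, and the formulation via gate gradients reflects our reading of the proxy rather than Michel et al.'s exact code.

```python
import torch

def estimate_head_importance(model, batches, num_heads):
    """Accumulate I_h ≈ E_x |dL/dg_h| with all gates held at 1, as in (21).

    `model` is assumed to be a callable taking (batch, gates) and returning
    the scalar task loss for that batch with gated attention applied.
    """
    gates = torch.ones(num_heads, requires_grad=True)
    scores = torch.zeros(num_heads)
    for batch in batches:
        loss = model(batch, gates)
        (grad,) = torch.autograd.grad(loss, gates)  # dL/dg_h for this batch
        scores += grad.abs()
    return scores / len(batches)
```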

Voita et al.
In the fashion of joint pruning, Voita et al. (2019) apply a stochastic approximation to L_0 regularization (Louizos et al., 2018) on the gates to encourage the model to prune less important heads. The gate variables are sampled independently from a binary Hard Concrete distribution (Louizos et al., 2018), parameterized by φ_h. The L_0 norm is relaxed into the sum of the probability mass of each gate being non-zero:

L̂_0(φ) = ∑_{h=1}^{H} (1 − P(g_h = 0 | φ_h))    (22)

which is then added to the task-specific loss L:

L_total(θ, φ) = L(θ) + λ · L̂_0(φ)    (23)

where θ are the parameters of the original model, and λ is the weighting coefficient for the regularizer, which can be used to indirectly control the number of heads to be kept.
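For reference, a sketch of Hard Concrete gate sampling and the relaxed penalty in (22); the stretch limits γ, ζ and the temperature β below are the values commonly used with Louizos et al. (2018), not constants reported in this paper.

```python
import math
import torch

def hard_concrete_gate(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1):
    """Draw one stretched-and-clipped Hard Concrete sample per gate."""
    u = torch.rand_like(log_alpha)
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / beta)
    s_bar = s * (zeta - gamma) + gamma   # stretch the (0, 1) sample to (gamma, zeta)
    return s_bar.clamp(0.0, 1.0)         # clip into [0, 1]; mass piles up at 0 and 1

def expected_l0(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1):
    """Per-gate probability of being non-zero; summing gives the penalty in (22)."""
    return torch.sigmoid(log_alpha - beta * math.log(-gamma / zeta))
```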

Straight-Through Estimator (STE)
In this baseline, the Gumbel soft top-K in joint DSP is replaced with a hard top-K, while the hard top-K function is back-propagated through as if it were the identity function; this is known as the straight-through estimator (Bengio et al., 2013).
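In PyTorch, the straight-through top-K can be written as a custom autograd function; the sketch below (our naming) returns hard K-hot gates in the forward pass and passes gradients through unchanged in the backward pass.

```python
import torch

class HardTopK(torch.autograd.Function):
    """Hard K-hot gates forward; identity backward (straight-through)."""

    @staticmethod
    def forward(ctx, scores, k):
        gates = torch.zeros_like(scores)
        gates[torch.topk(scores, k).indices] = 1.0
        return gates

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # pretend top-K were the identity function

scores = torch.tensor([0.1, 2.0, -0.5, 1.3], requires_grad=True)
gates = HardTopK.apply(scores, 2)   # tensor([0., 1., 0., 1.])
gates.sum().backward()              # gradients flow straight through to `scores`
```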

Unpruned Model
The model is trained or fine-tuned without any sparsity-enforcing regularizer, and no post-hoc pruning is performed. We take its performance to be an upper bound on the performance of any pruning technique.

4.3 Experimental Setup
Pipelined Pruning. For the two pipelined pruning schemes, the model is trained or fine-tuned on the target task (3 epochs for BERT and 60 epochs for Enc-Dec) before being pruned. We learn the head importance weights for pipelined DSP for one additional epoch in order to have an apples-to-apples comparison with Michel et al. in terms of compute (number of gradients computed).
Joint Pruning. The model is trained or fine-tuned for the same number of epochs as in pipelined pruning while the sparsity-enforcing regularization is applied. We found it hard to tune the weighting coefficient λ for Voita et al. to reach the desired sparsity (see § 5.2 and Fig. 3). For ease of comparison with the other approaches, we adjust the number of unpruned heads to the targeted number by re-including the heads with the highest gate values among the discarded ones, or excluding those with the smallest gate values among the kept ones. We make sure the adjustments are as small as possible.

Annealing Schedule. In our experiments, we choose a simple annealing schedule for DSP in which the temperature τ cools down on a log-linear scale within a predefined number of steps N_cooldown, from an initial temperature τ_ini to a final temperature τ_end, where it stays for the rest of the training steps:

log τ_n = log τ_ini + (min(n, N_cooldown) / N_cooldown) · (log τ_end − log τ_ini)    (24)

where n is the number of training steps that have been run. We report the set of hyperparameters used in our experiments in App. A.
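In code, (24) is just linear interpolation in log-space; a small helper (our naming) might be:

```python
import math

def temperature(n: int, tau_ini: float, tau_end: float, n_cooldown: int) -> float:
    """Log-linear cool-down from tau_ini to tau_end over n_cooldown steps,
    then constant at tau_end, following (24)."""
    frac = min(n, n_cooldown) / n_cooldown
    return math.exp((1 - frac) * math.log(tau_ini) + frac * math.log(tau_end))
```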

5 Results
The test performance under various sparsity levels obtained by the different pruning methods is presented in Fig. 2a, Fig. 2b and App. C. We also zoom in on the results when more than two-thirds of the heads are pruned in Fig. 2c and Fig. 2d, where the differences between the methods are most evident.

5.1 Pipelined Pruning
We first compare the two pipelined pruning methods: Michel et al. (2019) and pipelined DSP. As shown in Fig. 2, pipelined DSP outperforms Michel et al. (2019), and the gap is most dramatic at high sparsity: the Enc-Dec pruned with Michel et al. cannot produce meaningful outputs (≈ 0 BLEU score), while pipelined DSP achieves higher than 20 BLEU. The results indicate that the learned head importance scores are more useful for pruning than those computed with gradient-based measures.

5.2 Joint Pruning
We then compare the three joint pruning methods: Voita et al. (2019), STE, and joint DSP. Impressively, joint DSP is able to prune up to 91.6% (12 heads left) and 94.4% (4 heads left) of the heads in BERT and the Enc-Dec, respectively, without causing much degradation in test performance (a 5.5% drop in accuracy on MNLI and a 4.22 drop in BLEU score on IWSLT). Voita et al. and STE are neck and neck with joint DSP when the model is lightly pruned, but joint DSP gains the upper hand when fewer than 1/6 of the heads are left unpruned. In addition, with Voita et al.'s method, it is much harder to enforce a hard constraint on the number of unpruned heads. This difficulty is intrinsic to their method, which relies on the regularization coefficient λ to indirectly control the sparsity. In practice, our experiments indicate that λ is hard to tune and that certain levels of sparsity cannot be reached at all. The difficulty in tuning λ is shown in Fig. 3: the number of unpruned heads does not decrease monotonically as λ increases; on the contrary, it often fluctuates. There also appears to be an upper bound (117) on the number of heads that can be kept, no matter how small λ is. More importantly, a small increase in λ can sometimes drastically reduce the number of heads. For instance, when λ is increased from 0.0009 to 0.0014, the number of heads drops quickly from 30 to 11. We therefore conclude that Voita et al.'s method is inadequate when the user requires a pre-specified number of Transformer heads. In contrast, our proposed DSP (as well as STE) lets us directly specify the number of heads we want to keep in accordance with our computation budget.

5.3 Pipelined Pruning vs Joint Pruning
Lastly, we compare the two pruning paradigms more broadly. It is clear from Fig. 2 that the joint pruning methods are superior to the pipelined pruning methods on both tasks: models sparsified with the joint schemes (joint DSP, STE, and Voita et al.) perform better than those pruned with the pipelined schemes (pipelined DSP and Michel et al.) at almost every sparsity level. This suggests that joint training is more effective at finding sparse subnetworks than pipelined pruning. Joint pruning is also more computationally efficient: on top of the same number of epochs both paradigms spend on training/fine-tuning, pipelined pruning additionally requires learning or estimating gradient-based head importance scores for one extra epoch. Even though joint pruning methods train H more parameters during training/fine-tuning, H is typically orders of magnitude smaller than the total number of model parameters, so the additional computational overhead is negligible.

5.4 Inference Efficiency
In this section, we obtain the pruned model by actually removing the heads whose mask values are 0. Empirically, we observe substantial wall-clock improvements in our pruned models compared to the unpruned models. In practice, we find that inference efficiency improves monotonically as the number of unpruned heads decreases and is not significantly impacted by the distribution of heads across layers. Taking BERT on the MNLI-mismatched validation set (batch size of 8) as an example, we randomly sample 10 head masks for each sparsity level, measure their inference speedup and model-size shrinkage relative to the unpruned model, and report the averages in Fig. 4. In general, head pruning does lead to a faster and smaller model, and the more we prune, the faster and smaller the model becomes.
A comparison of the various pruning schemes is displayed in Fig. 5. To reach the same accuracy (e.g. 80%), joint DSP returns a model with a ≈ 33% speedup in execution time and a ≈ 24% decrease in model size.

5.5 Distribution of Heads
We visualize the distribution of unpruned heads across layers in Fig. 6. For BERT (Fig. 6a), we observe that the top layers (10-12) are the first to be pruned and that the heads in the middle layers (3-7) are mostly retained. This observation is in conformity with Prasanna et al. (2020) and Sajjad et al. (2021). Budhraja et al. (2020) also highlight the importance of the middle layers but find no preference between the top and bottom layers. For the Enc-Dec (Fig. 6b), we find that many more encoder-decoder cross-attention heads are retained than heads of the other two types (encoder and decoder self-attention). The encoder self-attention heads are completely pruned away when fewer than 16 heads are left, which again conforms with the observations of Michel et al. (2019) and Voita et al. (2019).

5.6 Analysis of Training Dynamics
To better understand our joint DSP approach, we inspect its behavior during training. We plot the intermediate accuracy of BERT during training with joint DSP (K = 12) in Fig. 7a (in orange). We also compute the percentage of heads selected at the current step that are eventually kept at the end (in purple). We observe that the selected subset of heads is no longer updated after 14000 training steps (the purple line stays at 100%). Therefore, the joint pruning process may be viewed as having two distinct phases: i) head selection and ii) fine-tuning. Interestingly, this superficially resembles pipelined pruning in reverse. During head selection, the subset of heads to be kept is determined and the model adapts to the specified level of sparsity. During fine-tuning, the selected subnetwork is fine-tuned so that the test accuracy improves steadily. Our experiments indicate that annealing is essential for training a high-performance pruner: it allows the model to gradually settle on one particular subset of heads, whereas without annealing the pruner never converges to a fixed set and thereby never enters the fine-tuning phase. See Fig. 7b for a visualization.

5.7 Summary
The five pruning methods discussed in this paper are summarized in Tab. 1. Joint DSP maintains the highest test performance while consuming computational resources similar to Voita et al. and offering fine-grained control over the number of unpruned heads like Michel et al. It is worth noting that STE shares the same benefits of low computational overhead and exact sparsity control as joint DSP, despite being slightly inferior in performance. It also has fewer hyperparameters to tune and hence is easier to implement. Therefore, we believe STE can be favorable when test performance is not critical. (We analyze other sparsity levels as well and observe similar trends.)

6 Related Work
Unstructured Pruning. Neural network pruning has been studied for decades. Early works include optimal brain damage (LeCun et al., 1990) and optimal brain surgeon (Hassibi et al., 1994), which approximate the loss function of a trained model with a second-order Taylor expansion and remove parameters from the network while minimizing the impact on the loss. Recent years have seen a resurgence of this approach (Molchanov et al., 2017b; Theis et al., 2018; Michel et al., 2019). More recently, magnitude pruning, which discards parameters with small absolute values, has gained much popularity (Han et al., 2015; Guo et al., 2016; Zhu and Gupta, 2018). Gordon et al. (2020) apply magnitude pruning to BERT and show that the model has similar prunability and transferability whether pruned after pre-training or after fine-tuning. Related to magnitude-based pruning is movement pruning, introduced by Sanh et al. (2020), which considers changes in weights rather than magnitudes for pruning.
Structured Pruning. In contrast to the above-mentioned unstructured pruning methods, which prune individual parameters, structured pruning methods prune at a higher level of granularity, such as convolutional channels, attention heads, or even entire layers. Structured pruning almost always leads to a decrease in model size and inference cost, while unstructured pruning often results in sparse matrices, which cannot be exploited without dedicated hardware or libraries. Previously, structured pruning had primarily been applied to convolutional neural networks (Wen et al., 2016; Luo et al., 2017; He et al., 2017; Liu et al., 2017; Huang and Wang, 2018), but it has recently been applied to NLP in the form of layer pruning (Fan et al., 2020; Sajjad et al., 2021) and head pruning (Michel et al., 2019; Voita et al., 2019; McCarley et al., 2021) of Transformer-based models. Apart from compression and speedup, head pruning is also helpful for model analysis: Voita et al. (2019) find that the heads that survive pruning play consistent and linguistically interpretable roles, and Prasanna et al. (2020) discover that the heads pruned last tend to be in the earlier and middle layers.
Dropout for Pruning. A variety of regularizers have been used to sparsify neural networks. For example, Han et al. (2015) apply L_1 regularization, and Louizos et al. (2018) apply L_0 regularization. Dropout, though introduced as a regularizer, has also been shown to be effective at making a model robust to pruning. Its sparsity-encouraging effect was already observed when dropout was first proposed (Srivastava et al., 2014). More recently, the observation that models trained with dropout tend to be more robust to post-hoc pruning has been exploited directly: LayerDrop (Fan et al., 2020) randomly drops entire Transformer layers with a fixed dropout rate during training, which makes the network robust to having layers removed at inference time.

7 Conclusion
We propose differentiable subset pruning, a novel method for sparsifying Transformers. The method allows the user to directly specify the desired sparsity level, and it achieves a better sparsity-performance trade-off than previous works, leading to faster and smaller models after pruning. It demonstrates improvements over existing methods for pruning two different models (BERT and Enc-Dec) on two different tasks (textual entailment and machine translation), and it can be applied in both pruning paradigms (pipelined and joint pruning). Although we study head pruning in this paper, our approach can be extended to other structured and unstructured pruning scenarios; in future work, it would be interesting to look into such cases.

A Experimental Setup
We report the hyperparameters for joint DSP we use in our experiments in Tab. 2, which are obtained by tuning on the validation set.

B Analysis of Training Dynamics
We present two more examples, in which heads are scarce (K = 8) or redundant (K = 108). In Fig. 8a, we observe the same two-phase training behavior as with K = 12: the selected subset of heads is no longer altered after 16000 steps. In Fig. 8c, unlike the cases with very few heads, the head masks are updated throughout the training procedure; yet a large portion (91.7%) of the heads remains unchanged after 17000 steps, so the two-phase behavior is still apparent in comparison with training without annealing (Fig. 8d).

C Detailed Results
The detailed results used to plot Fig. 2 are reported in the accompanying tables.

Figure 7/8 caption: The lower x-axis shows the number of training steps, and the upper x-axis shows the corresponding temperature on a logarithmic scale. The left y-axis (orange) shows test accuracy on the MNLI-mismatched validation set; the right y-axis (purple) shows the percentage of heads selected at the current step that are eventually kept.