Abstract
Non-AutoRegressive (NAR) text generation models have drawn much attention because of their significantly faster decoding speed and good generation quality in machine translation. However, in a wider range of text generation tasks, existing NAR models lack proper pre-training, making them still far behind the pre-trained autoregressive models. In this paper, we propose Pre-trained Directed Acyclic Transformer (PreDAT) and a novel pre-training task to promote prediction consistency in NAR generation. Experiments on five text generation tasks show that our PreDAT remarkably outperforms existing pre-trained NAR models (+4.2 score on average) and even achieves better results than pre-trained autoregressive baselines in n-gram-based metrics, along with 17 times speedup in throughput. Further analysis shows that PreDAT benefits from the unbiased prediction order that alleviates the error accumulation problem in autoregressive generation, which provides new insights into the advantages of NAR generation.1
1 Introduction
Pre-trained language models have been widely applied in text generation (Radford et al., 2019; Song et al., 2019; Lewis et al., 2020; Raffel et al., 2020), which can effectively improve the performance of downstream generation tasks, especially in low-resource scenarios (Brown et al., 2020). Most of these pre-trained language models are based on AutoRegressive (AR) generation, which produces high-quality texts by predicting each token one by one. However, such a sequential generation process suffers from high latency and low throughput in inference, thereby largely limiting the use of AR models in scenarios with real-time requirements.
Non-AutoRegressive (NAR) generation is an alternative text generation paradigm (Gu et al., 2018). Unlike sequential generation in AR models, NAR models predict all tokens in parallel, which largely accelerates the decoding process. Although early NAR models suffer from serious quality degradation due to the independent token prediction, recent NAR studies have made much progress on some generation tasks, such as machine translation (Qian et al., 2021; Gu and Kong, 2021; Huang et al., 2022a). Notably, Huang et al. (2022c) propose Directed Acyclic Transformer, which incorporates a directed acyclic graph to reduce the conflicts in capturing possible outputs, achieving a comparable translation quality to the AR models.
Despite the success of NAR generation in machine translation, it is still challenging to apply NAR models to a wider range of generation tasks, mainly due to the lack of appropriate pre-training. Although some previous studies have explored pre-training methods such as directly fine-tuning BERT for NAR generation (Guo et al., 2020b; Su et al., 2021; Jiang et al., 2021) or pre-training NAR models from scratch (Qi et al., 2021; Li et al., 2022), their models still have a significant quality gap compared with AR ones. We argue that these methods do not fully exploit the characteristic of NAR generation, thereby restricting downstream performance. Specifically, we discuss two main issues: (1) Previous pre-training tasks are ineffective in promoting sentence-level prediction consistency, making it hard for their models to predict a whole sentence simultaneously while preserving the fluency in downstream NAR generation. (2) Previous pre-training tasks fail to address the multi-modality problem (Gu et al., 2018), which has proved to be a fundamental and important challenge in training NAR models (Huang et al., 2022b).
In this paper, we introduce PreDAT, a Pre-trained Directed Acyclic Transformer for high-quality non-autoregressive text generation. We utilize the architecture of Directed Acyclic Transformer and further propose a novel pre-training task, Double-Source Text Infilling (DSTI), aiming to address the above issues in pre-trained NAR models. Specifically, DSTI contains two steps: It corrupts a sentence and scatters the tokens into two sequences, which are fed into the encoder and decoder as two sources of information; then the model is trained to recover the corrupted fragments non-autoregressively. During the pre-training, our model predicts long sentence fragments (about 15 tokens) from nearby contexts, which promotes prediction consistency and bidirectional dependencies. Moreover, DSTI designs a strategy for creating pre-training data pairs that allow the output sequences to have flexible lengths, which well incorporates various alignment-based NAR training objectives to alleviate the multi-modality problem (Libovický and Helcl, 2018; Ghazvininejad et al., 2020; Du et al., 2021; Huang et al., 2022c).
Automatic evaluation shows that PreDAT is effective and efficient on five text generation tasks. It remarkably outperforms previous pre-trained NAR models (+4.2 score on average) and even achieves better results than pre-trained AR baselines in n-gram-based metrics (+0.7 score on average), along with a 17x speedup in throughput. To our knowledge, PreDAT is the first NAR model that outperforms pre-trained AR models on various generation tasks in automatic evaluation. Further ablation studies verify that our pre-training task designs, including the long fragment prediction and alignment-based training objectives, are crucial for success.
To better understand the advantages and weaknesses of NAR generation, we use automatic and manual methods to investigate the generated texts in downstream tasks. We find that PreDAT can alleviate the error accumulation in AR generation and improve the relevance to the input, thereby leading to a better performance in n-gram-based metrics. However, we also find that NAR models, including PreDAT, are still weaker than AR models in preserving the consistency among generated tokens, leading to grammatical errors such as wrong word choices. We believe that these findings can provide novel insights for future NAR studies.
2 Related Work
2.1 Pre-trained Language Models (PLM)
In recent years, PLMs have made significant progress in natural language generation (Radford et al., 2019; Song et al., 2019; Lewis et al., 2020; Raffel et al., 2020). These PLMs are pre-trained on a large corpus of unlabeled data, where the knowledge can be transferred to downstream tasks, resulting in improved generation quality.
2.2 Non-Autoregressive Generation
Although NAR generation (Gu et al., 2018) remarkably speeds up the inference, Huang et al. (2022b) point out that it theoretically suffers from serious information dropping, previously known as the multi-modality problem. To alleviate the problem, previous studies propose methods including (1) iterative refinement (Lee et al., 2018; Gu et al., 2019; Ghazvininejad et al., 2019; Guo et al., 2020a; Huang et al., 2022d); (2) knowledge distillation (Kim and Rush, 2016; Ding et al., 2022, 2021a, b; Shao et al., 2022); (3) dependency enhancement (Sun et al., 2019; Qian et al., 2021; Huang et al., 2022a; Bao et al., 2022); or (4) alignment-based objectives (Ghazvininejad et al., 2020; Du et al., 2021; Libovický and Helcl, 2018; Huang et al., 2022c).
There are also studies combining PLMs and NAR generation. For example, some methods fine-tune existing pre-trained models directly (Jiang et al., 2021) or with an adapter (Guo et al., 2020b; Su et al., 2021). Some others combine AR and NAR prediction (Qi et al., 2021) or involve an early exiting mechanism (Li et al., 2022) in pre-training.
Compared with these studies, our method has two significant differences: (1) Previous methods either predict short spans (e.g., BERT) or incorporate unidirectional AR prediction (Qi et al., 2021), which hardly contribute to NAR generation that predicts a whole sentence with bidirectional attention. In contrast, we train our model to predict long fragments simultaneously, leading to better consistency among generated tokens. (2) Previous methods use a token-level loss that forces the model to predict a same-length sequence to match the target, which over-penalizes the position shift error (Ghazvininejad et al., 2020) and worsens the multi-modality problem. We introduce an up-sampling strategy to obtain longer output sequences, which well incorporates previous alignment-based NAR losses to address the above problems.
3 Preliminaries: Directed Acyclic Transformer
Directed Acyclic Transformer (DAT; Huang et al., 2022c) predicts a feature sequence V that is organized as a directed acyclic graph (DAG): each position is a vertex, the model additionally predicts transition probabilities between vertices, and the target Y is aligned to paths of the graph during training. Compared with previous NAR models, DAT explicitly models the dependencies between tokens by the position transitions and is able to store multiple modalities on different paths of the DAG, thereby remarkably improving the generation performance. Moreover, various decoding algorithms such as beam search and Nucleus sampling (Holtzman et al., 2020) can be utilized to boost the generation quality or diversity.
Besides DAT, there are other alignment-based objectives that succeed in alleviating the multi-modality problem in NAR generation, such as AXE (Ghazvininejad et al., 2020), OaXE (Du et al., 2021), and CTC (Graves et al., 2006; Libovický and Helcl, 2018). In general, these objectives are also obtained by aligning the target Y with the feature sequence V, so we write them generically as losses over the pair (V, Y).
4 Proposed Method
In this section, we introduce PreDAT, Pretrained Directed Acyclic Transformer. We first propose the pre-training task (Section 4.1) and then describe the fine-tuning and inference strategies (Section 4.2).
4.1 Pre-training Task
Our pre-training task, Double-Source Text Infilling (DSTI), is a self-supervised pre-training task that aims to promote prediction consistency and bidirectional dependencies for NAR models. Our task scatters part of a sentence into two sequences, feeds them into the encoder and decoder as two sources of information, and then trains the model to predict long unseen fragments in a non-autoregressive fashion. Although DSTI is compatible with various NAR architectures and losses, we mainly focus on DAT due to its superior performance.
As shown in Figure 2, our task takes a piece of text from the pre-training corpus and decomposes it into a triple (X, Z, Y ), where X = {x1, ⋯, xM} is the encoder input, Z = {z1, ⋯, zL} is the decoder input, and Y = {y1, ⋯, yN} is the target. The data preparation consists of two stages.
Stage 1: Creating Encoder Input
We utilize span masking (Raffel et al., 2020) to obtain the encoder input X and the target Y. Specifically, we randomly mask tokens in the original sentence and then replace each run of consecutive masks with a single special token representing the span ID. The prediction target Y is then constructed by concatenating the masked spans, with the span IDs as delimiters.
In particular, we force each masked span to be long enough (about 15 tokens) because the NAR model has to generate a whole sentence simultaneously in inference, where predicting short spans is unhelpful for preserving sentence-level consistency.
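To make Stage 1 concrete, below is a minimal sketch of how an (X, Y) pair could be built from a token sequence. The sentinel names ([SPAN_k]), the span-selection loop, and the equal-length simplification are illustrative assumptions, not the released implementation, and the sketch assumes the sequence is much longer than the total masked length.

```python
import random

def create_encoder_input(tokens, num_spans=6, mask_ratio=0.15):
    """Stage 1 sketch: mask `num_spans` equal-length spans covering roughly
    `mask_ratio` of the tokens; return the encoder input X and target Y.
    Sentinel names ([SPAN_k]) are illustrative placeholders."""
    span_len = max(1, int(len(tokens) * mask_ratio / num_spans))
    # pick non-overlapping span starts (simplified rejection sampling)
    starts = []
    while len(starts) < num_spans:
        s = random.randrange(0, len(tokens) - span_len)
        if all(abs(s - t) >= span_len for t in starts):
            starts.append(s)
    starts.sort()

    X, Y, prev_end = [], [], 0
    for k, s in enumerate(starts):
        sentinel = f"[SPAN_{k}]"
        X.extend(tokens[prev_end:s])       # unmasked context
        X.append(sentinel)                 # one sentinel replaces the whole span
        Y.append(sentinel)                 # span ID acts as a delimiter in the target
        Y.extend(tokens[s:s + span_len])   # the masked span to be recovered
        prev_end = s + span_len
    X.extend(tokens[prev_end:])
    return X, Y
```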
Stage 2: Creating Decoder Input
The decoder input Z plays two roles in our pre-training: (1) It reveals some target tokens to promote bidirectional dependencies in the decoder. (2) It determines the length of the predicted feature sequence.
To incorporate the alignment-based NAR losses that require a longer feature sequence than the target (such as DAT and CTC), we create the decoder input Z by an up-sampling step. Then we assign a part of the target tokens to appropriate positions in Z, where the unseen tokens will be used as prediction targets. Specifically, creating Z follows three steps: up-sampling, assigning, and re-masking.
For up-sampling, we decide the length of Z based on the target length. Formally, we set L := λN, where λ is an up-sampling ratio. In DAT, varying L yields different DAG sizes and structures, so we sample λ from a uniform distribution to diversify the DAG structures in pre-training. After determining the length, the span IDs are put into Z according to the up-sampling ratio and are not modified in the later steps.
For assigning, we distribute the target tokens in Z, regardless of whether each token will appear in the final input. Formally, we use an assignment sequence {a_i} (1 ≤ i ≤ N) indicating that z_{a_i} = y_i; all other positions in Z are masked. For obtaining {a_i}, a straightforward strategy is uniform assignment, such that every two consecutive target tokens are separated by a constant number of [Mask] tokens. In a pilot experiment, we find it better to use the strategy of glancing training (Huang et al., 2022c; Qian et al., 2021), which first predicts a DAG with a fully masked Z and then assigns the target tokens to the positions that form the most probable path of the DAG.
For re-masking, we determine the tokens that finally appear in Z and mask the remaining ones. Formally, we randomly sample a fixed proportion of target tokens to form a set R; the assigned token is kept (i.e., z_{a_i} = y_i) only if i ∈ R, and all the other assigned tokens in Z are masked.
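The three steps above can be sketched as follows. This toy version uses the simple uniform assignment mentioned earlier (the MaxProb assignment additionally requires a forward pass of the model), the [MASK] placeholder and helper names are illustrative, and span-ID handling is omitted for brevity.

```python
import random

MASK = "[MASK]"

def create_decoder_input(target, lam_range=(4, 8), keep_ratio=0.10):
    """Stage 2 sketch: up-sample, assign target tokens uniformly, then re-mask.
    The paper's MaxProb assignment instead places tokens on the most probable
    DAG path of a first forward pass; span-ID delimiters are omitted here."""
    N = len(target)
    lam = random.uniform(*lam_range)              # step 1: sample the up-sampling ratio
    L = int(lam * N)                              # decoder input length
    Z = [MASK] * L

    # step 2: uniform assignment -- evenly spaced positions a_i with z_{a_i} = y_i
    assign = [int((i + 0.5) * L / N) for i in range(N)]
    for i, pos in enumerate(assign):
        Z[pos] = target[i]

    # step 3: re-masking -- keep only `keep_ratio` of the assigned tokens
    kept = set(random.sample(range(N), k=max(1, int(keep_ratio * N))))
    for i, pos in enumerate(assign):
        if i not in kept:
            Z[pos] = MASK
    return Z, assign
```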
Training Objective
Our objective is to reconstruct the unseen target fragments according to the given context, similar to masked language modelling (Devlin et al., 2019) but with a significant difference. Instead of using a token-level loss that forces each masked position to predict the corresponding target token, we sum alignment-based losses, each of which aligns an unseen target fragment with the feature sequence predicted on the corresponding masked segment. Note that the feature sequence is longer than the target fragment, which yields a larger DAG with a higher capacity to capture multiple possible in-filling results.
Notably, our idea can be applied to other alignment-based NAR losses, such as CTC loss (Graves et al., 2006), which also trains the model by aligning the target fragment to a longer predicted feature sequence. We verify the generality of DSTI with various loss functions in Section 5.4.
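As a concrete illustration of aligning a short target fragment to a longer predicted feature segment, the sketch below uses the CTC variant with torch.nn.CTCLoss; the DAT loss replaces CTC's alignment lattice with a dynamic program over DAG paths and is not shown here. Shapes, the vocabulary size, and the random features are placeholders.

```python
import torch
import torch.nn as nn

# Sketch: align an unseen target fragment (length N_frag) with the decoder
# features predicted on its masked segment (length L_frag > N_frag), using
# CTC as the alignment-based loss. All tensors here are random placeholders.
vocab_size, L_frag, N_frag, batch = 32000, 60, 15, 2

features = torch.randn(L_frag, batch, vocab_size, requires_grad=True)  # stand-in decoder outputs
log_probs = features.log_softmax(dim=-1)                  # (T, B, V), as CTCLoss expects
targets = torch.randint(1, vocab_size, (batch, N_frag))   # the unseen target fragment (0 = blank)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs,
           targets,
           input_lengths=torch.full((batch,), L_frag),
           target_lengths=torch.full((batch,), N_frag))
loss.backward()   # gradients flow back to the feature sequence
```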
4.2 Fine-tuning and Inference
We generally follow the original training method (Huang et al., 2022c) to fine-tune PreDAT on the downstream datasets, while introducing two improvements: we add a target length predictor for better adaptation to tasks with various ratios of input and target lengths, and we propose a trick to improve the decoding throughput.
Length Prediction
The original DAT simply sets the feature length L to a constant multiple of the input length, which, in most cases of machine translation, satisfies the constraint that the feature sequence should be longer than the target. However, the targets in our downstream tasks can be arbitrarily long, making this strategy unsuitable.
To better apply PreDAT to various generation tasks without this constraint on input and target lengths, we introduce a length predictor during fine-tuning and inference. Specifically, in fine-tuning, we use a similar up-sampling strategy to that in pre-training to obtain the decoder input length, i.e., λ times the target length. We then add a length predictor on top of the encoder and train it to predict the target length as a classification task. In inference, we obtain the predicted length from the predictor and multiply it by an up-sampling ratio to obtain the decoder input length, where the ratio is a hyper-parameter tuned on the validation set that controls the length of the generated sentences.
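A minimal sketch of such a length predictor is given below; the module structure, the pooling choice, and the class-to-length mapping are assumptions for illustration, not the exact fine-tuning code.

```python
import torch
import torch.nn as nn

class LengthPredictor(nn.Module):
    """Sketch: predict the target length as a classification over [1, max_len],
    using mean-pooled encoder states."""
    def __init__(self, hidden=768, max_len=256):
        super().__init__()
        self.proj = nn.Linear(hidden, max_len)

    def forward(self, enc_states, src_mask):
        # enc_states: (B, M, H); src_mask: (B, M) with 1 for real tokens
        mask = src_mask.unsqueeze(-1).float()
        pooled = (enc_states * mask).sum(1) / mask.sum(1).clamp(min=1)
        return self.proj(pooled)                      # (B, max_len) length logits

def decoder_input_length(length_logits, ratio=5.0):
    """At inference: predicted target length times a tuned up-sampling ratio."""
    pred_len = length_logits.argmax(dim=-1) + 1       # class k corresponds to length k+1
    return (pred_len.float() * ratio).long()
```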
Overlapped Decoding
PreDAT predicts the DAG in parallel on the GPU and then executes a decoding algorithm (e.g., beam search; Huang et al., 2022c) on CPUs to obtain the most likely output from the DAG. As shown in Figure 3, we overlap the GPU and CPU execution, which reduces GPU idle time and utilizes multiple CPU cores to process batches in parallel, leading to remarkably higher decoding throughput without affecting latency.
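The pipelining idea can be sketched with a thread pool as below. Here predict_dag and beam_search_on_dag are placeholders; the actual system relies on C++ beam search workers rather than pure-Python threads, so this only illustrates the overlap, not the real implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def overlapped_decode(batches, predict_dag, beam_search_on_dag, num_cpu_workers=8):
    """Sketch: overlap GPU DAG prediction with CPU beam search.
    `predict_dag(batch)` runs the NAR model on GPU; `beam_search_on_dag(dag)`
    decodes one batch on CPU. Both are placeholder callables."""
    results = []
    with ThreadPoolExecutor(max_workers=num_cpu_workers) as pool:
        futures = []
        for batch in batches:
            dag = predict_dag(batch)                              # GPU-bound step
            futures.append(pool.submit(beam_search_on_dag, dag))  # CPU-bound step in background
            # the loop moves on to the next batch without waiting for the CPU step
        for f in futures:
            results.append(f.result())                            # collect outputs in order
    return results
```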
5 Experiments
5.1 Implementation Details
Model Configurations
Our PreDAT is based on a 6-layer encoder-decoder Transformer (Vaswani et al., 2017) with a hidden size of 768, following the base version of AR and NAR baselines.
Pre-Training
We pre-train PreDAT with DSTI on a 16GB English corpus from Wikipedia and BookCorpus (Zhu et al., 2015), using the vocabulary of bert-base-uncased. In stage 1, we take a sequence of about 600 tokens and mask 6 equal-length spans that account for 15% of the tokens. In stage 2, we sample λ uniformly from [4, 8] and mask 90% of the tokens in the re-masking step. Unless otherwise specified, we pre-train PreDAT for 500k update steps with a batch size of 256 samples and use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 2e-4. We utilize LightSeq (Wang et al., 2022) to accelerate training (not used in inference), and pre-training lasts approximately 72 hours on 8 Nvidia A100-40G GPUs.
Fine-Tuning
We fine-tune PreDAT on downstream datasets with the DAT loss and glancing training (Qian et al., 2021; Huang et al., 2022c) without knowledge distillation. According to the average sample lengths of each dataset, each mini-batch has approximately 4k target tokens for PersonaChat, XSUM, SQuAD1.1, and 8k target tokens for ROCStory and Quora. We use the early-stop trick according to the performance on the validation set. It usually takes less than 60k steps on SQuAD1.1, Quora, and PersonaChat, and 100k steps on XSUM and ROCStory. We tune the glancing ratio from {0.3, 0.5}, and learning rate from {1e-5, 2e-5, 5e-5, 1e-4, 2e-4}. We evaluate the model every 5k steps on the validation set and obtain the final model by averaging the five best checkpoints.
Inference
We utilize lookahead decoding (the default unless otherwise specified) and beam search (Huang et al., 2022c) to decode a sequence from the predicted DAG. We use a beam size of 200 and incorporate a 5-gram LM in the beam search. For open-ended generation, we further employ Nucleus sampling (Holtzman et al., 2020).
For these three decoding strategies, we prevent any repeated tri-gram when expanding the decoding path on the DAG, inspired by a similar strategy used in autoregressive decoding (Paulus et al., 2018). We also prevent consecutive uni-gram and bi-gram repetitions, which are common errors in PreDAT’s outputs.
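The following is a simplified sketch of greedy path expansion on a predicted DAG with tri-gram blocking; the actual lookahead and beam search decoders (including 5-gram LM fusion and the uni-/bi-gram repetition prevention) are more involved, and the array layout assumed here is illustrative.

```python
import numpy as np

def lookahead_decode_dag(token_logprob, trans_logprob, eos_id):
    """Simplified sketch of lookahead-style decoding with tri-gram blocking.
    token_logprob: (L, V) per-vertex token log-probs;
    trans_logprob: (L, L) transition log-probs of the predicted DAG."""
    L = len(token_logprob)
    best_tok = token_logprob.argmax(axis=1)          # greedy token at each vertex
    best_tok_lp = token_logprob.max(axis=1)
    pos, tokens, seen = 0, [], set()
    while pos < L - 1:
        # jointly score successors by transition + token log-prob (the "lookahead")
        scores = trans_logprob[pos] + best_tok_lp
        moved = False
        for nxt in np.argsort(-scores):
            if nxt <= pos:
                continue                              # DAG edges only go forward
            tok = int(best_tok[nxt])
            tri = tuple(tokens[-2:]) + (tok,)
            if len(tri) == 3 and tri in seen:
                continue                              # skip: would repeat a tri-gram
            pos, moved = int(nxt), True
            tokens.append(tok)
            if len(tokens) >= 3:
                seen.add(tuple(tokens[-3:]))
            break
        if not moved or tokens[-1] == eos_id:
            break
    return tokens
```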
5.2 Experiment Settings
Datasets and Metrics
We utilize five datasets: SQuAD1.1 (Rajpurkar et al., 2016), XSUM (Narayan et al., 2018), Quora2, PersonaChat (Zhang et al., 2018), and ROCStory (Mostafazadeh et al., 2016). We use the processed datasets and the evaluation metrics from previous work, as shown in Table 1. Note that we use corpus BLEU (Papineni et al., 2002) on all datasets because the sentence BLEU may unreasonably prefer very long outputs due to the smoothing method.3
Table 1: Dataset statistics. # Samples are train/valid/test sizes; Length is the average input/target length.

| Dataset | Task | # Samples (train/valid/test) | Length (input/target) |
|---|---|---|---|
| SQuAD1.1♠ | Question Generation | 75k/10k/12k | 149.4/11.5 |
| XSUM♠ | Summarization | 204k/11k/11k | 358.5/21.2 |
| Quora♡ | Paraphrase Generation | 138k/5k/4k | 11.5/11.5 |
| PersonaChat♠ | Dialog Generation | 122k/15k/14k | 120.8/11.8 |
| ROCStory♣ | Story Generation | 88k/5k/5k | 9.2/41.6 |
To evaluate the decoding speedup, we use two metrics: latency measures the average time of processing a single sample, and throughput measures the average speed of processing the whole test set, where we tune the batch size to maximize throughput. All models except MIST are implemented with Fairseq (Ott et al., 2019) + Apex, whereas MIST is implemented with HuggingFace’s Transformers (Wolf et al., 2019). For the beam search algorithm on the DAG, we adopt the C++ implementation provided by Huang et al. (2022c); this optimization only affects the extra decoding step on the CPU and does not speed up the Transformer model itself. All speed results are evaluated on a workstation with an Nvidia V100-32G GPU and 2 Intel Xeon Gold 6226R CPUs with 32 cores.
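For clarity, the two speed metrics could be computed roughly as follows, where generate is a placeholder for a model's decoding function; this is a sketch of the measurement protocol, not the exact benchmarking code.

```python
import time

def measure_latency(generate, samples):
    """Latency: average seconds per sample with batch size 1."""
    start = time.perf_counter()
    for s in samples:
        generate([s])
    return (time.perf_counter() - start) / len(samples)

def measure_throughput(generate, samples, batch_size):
    """Throughput: samples per second over the whole test set at a (tuned) batch size."""
    start = time.perf_counter()
    for i in range(0, len(samples), batch_size):
        generate(samples[i:i + batch_size])
    return len(samples) / (time.perf_counter() - start)
```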
Baselines
Our baselines include the autoregressive Transformer (Vaswani et al., 2017), pre-trained AR models (MASS, Song et al., 2019; BART, Lewis et al., 2020; ProphetNet, Qi et al., 2020), non-pretrained NAR models (Vanilla NAT, Gu et al., 2018; GLAT+CTC, Qian et al., 2021; DSLP+CTC, Huang et al., 2022a; LatentGLAT, Bao et al., 2022), and pre-trained NAR models (BANG, Qi et al., 2021; MIST, Jiang et al., 2021). All these baselines have the same number of layers and hidden sizes as our PreDAT, except that LatentGLAT utilizes a 4-layer latent predictor and a 4-layer decoder following the original implementation. Note that CTC-based models also require an up-sampling strategy, so we add a length predictor as described in Section 4.2; their up-sampling ratio is sampled from [1.5, 2] in training and tuned on the validation set in inference. For AR baselines, unless otherwise specified, we use beam search with a beam size of 5 and the tri-gram repetition prevention trick (Paulus et al., 2018), and tune the length penalty on the validation set. For NAR baselines, we use greedy decoding and further remove consecutive repeated tokens after generation (Li et al., 2019). Some results are collected from Liu et al. (2021), Qi et al. (2021), and Jiang et al. (2021).
5.3 Automatic Evaluation
Closed-Ended Text Generation
We first test PreDAT on three closed-ended text generation tasks, including question generation, summarization, and paraphrase generation. Closed-ended text generation tasks usually have strict semantic constraints on the outputs, aiming to test the model’s ability to extract and organize information.
As shown in Table 2, PreDAT achieves surprisingly good results in both speed and quality. We highlight our advantages as follows:
PreDAT remarkably improves the quality of NAR generation. Compared with previous pretrained NAR models, PreDAT brings large improvement (+4.2 scores on average) due to our DSTI pre-training and the DAT architecture. Moreover, PreDAT even outperforms the best AR baseline by 0.7 scores. To our knowledge, it is the first time that an NAR model achieves comparable and even stronger performance than AR models in n-gram-based metrics on a wide range of text generation tasks.
PreDAT is highly efficient. Although our model is slightly slower than previous NAR models due to a longer sequence prediction, it still achieves a speedup of 5∼14 times in latency and 17∼23 times in throughput compared with AR generation. It verifies that PreDAT can largely reduce computing consumption in decoding, showing its potential for real-time applications.
Table 2: Automatic evaluation on closed-ended generation tasks (R: ROUGE, B: BLEU, MTR: METEOR). Latency is ms/sample and throughput is samples/s; speedups in parentheses are relative to MASS, and "−" indicates a result not reported.

| Model | Pre-trained? | SQuAD1.1 R-L | SQuAD1.1 B-4 | SQuAD1.1 MTR | XSUM R-1 | XSUM R-2 | XSUM R-L | Quora B-1 | Quora B-4 | Quora MTR | Avg. | Latency | Throughput |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| *Autoregressive Text Generation Models* | | | | | | | | | | | | | |
| Transformer | N | 29.43 | 4.61 | 9.86 | 30.66 | 10.80 | 24.48 | 58.57 | 30.14 | 31.79 | 25.59 | − | − |
| MASS | Y | 49.48 | 20.16 | 24.41 | 39.70 | 17.24 | 31.91 | 60.56 | 32.39 | 32.92 | 34.31 | 353 (1.0x) | 12 (1.0x) |
| BART | Y | 42.55 | 17.06 | 23.19 | 38.79 | 16.16 | 30.61 | 61.56 | 31.57 | 32.42 | 32.66 | − | − |
| ProphetNet | Y | 48.00 | 19.58 | 23.94 | 39.89 | 17.12 | 32.07 | 62.59 | 33.80 | 33.95 | 34.55 | − | − |
| *Non-autoregressive Text Generation Models* | | | | | | | | | | | | | |
| Vanilla NAT | N | 31.51 | 2.46 | 8.86 | 24.04 | 3.88 | 20.32 | 39.85 | 9.33 | 18.90 | 17.68 | − | − |
| GLAT+CTC | N | 30.31 | 3.21 | 10.21 | 31.34 | 9.06 | 24.68 | 58.96 | 26.67 | 30.55 | 25.00 | 24 (14.7x) | 267 (21.5x) |
| DSLP+CTC | N | 28.70 | 3.00 | 10.59 | 28.75 | 7.35 | 22.73 | 61.12 | 29.70 | 32.37 | 24.92 | 24 (14.7x) | 265 (21.4x) |
| LatentGLAT | N | 28.28 | 2.38 | 10.43 | 28.44 | 7.00 | 22.66 | 59.78 | 28.30 | 31.26 | 24.28 | 28 (12.8x) | 334 (27.0x) |
| BANG | Y | 44.07 | 12.75 | 18.99 | 32.59 | 8.98 | 27.41 | 55.18 | 24.97 | 25.99 | 27.88 | 18 (19.6x) | 360 (29.0x) |
| MIST | Y | 47.13 | 16.00 | 21.10 | 34.63 | 11.29 | 28.70 | 59.65 | 29.00 | 31.56 | 31.01 | 22 (15.9x) | 159 (12.8x) |
| PreDAT (Ours) | Y | 49.78 | 21.74 | 24.58 | 38.80 | 16.07 | 31.78 | 62.63 | 32.59 | 33.37 | 34.59 | 26 (13.8x) | 278 (22.5x) |
| w/ BeamSearch | Y | 50.41 | 22.66 | 25.11 | 39.79 | 17.38 | 32.71 | 62.62 | 33.18 | 33.52 | 35.26 | 63 (5.7x) | 214 (17.3x) |
| w/o Pre-training | N | 30.11 | 3.30 | 10.32 | 32.56 | 11.17 | 26.21 | 59.82 | 28.17 | 31.10 | 25.86 | 25 (14.3x) | 272 (21.9x) |
Open-Ended Text Generation
We further test PreDAT on two open-ended text generation tasks, dialog generation and story generation. Open-ended text generation tasks encourage the model to produce novel and diverse outputs, where sampling decoding methods are commonly adopted to promote generation diversity.
Therefore, in addition to lookahead decoding and beam search, we also introduce Nucleus sampling (Holtzman et al., 2020). Specifically, we set p = 0.9 and temperature τ = 1 for PreDAT. For MASS and BART, we also use p = 0.9, but set τ = 0.8 on PersonaChat and τ = 0.7 on ROCStory to achieve diversity similar to that of PreDAT.
We present the evaluation results in Table 3 and the trade-off of quality and diversity by tuning the temperature in Figure 4. Generally, the comparison of quality metrics is similar to closed-ended generation: PreDAT largely outperforms NAR baselines and achieves comparable BLEU scores to AR models. Moreover, we highlight two findings:
PreDAT generates plausible outputs in open-ended tasks while previous NAR models cannot. Open-ended generation tasks usually have targets with diverse expressions, which worsens the multi-modality problem and seriously degrades the NAR generation quality. Specifically, MIST shows very low diversity because it generates numerous repetitions, and BANG shows very high diversity because it introduces many incomprehensible n-grams. In contrast, PreDAT has a reasonable quality-diversity trade-off, showing its ability to address the serious challenges brought by the multi-modality problem.
PreDAT achieves a flexible quality and diversity trade-off. As shown in Figure 4, PreDAT is slightly better than two AR baselines w.r.t. the trade-off curves by tuning the decoding temperature. It demonstrates that PreDAT can meet the diversity requirement of open-ended text generation, verifying its generality in text generation.
Table 3: Automatic evaluation on open-ended generation tasks (B: BLEU, D: Distinct). Latency is ms/sample and throughput is samples/s; speedups in parentheses are relative to MASS.

| Model | Pre-trained? | PersonaChat B-1 | PersonaChat B-2 | PersonaChat D-1 | PersonaChat D-2 | ROCStory B-1 | ROCStory B-2 | ROCStory D-4 | Latency | Throughput |
|---|---|---|---|---|---|---|---|---|---|---|
| *Autoregressive Text Generation Models* | | | | | | | | | | |
| Transformer | N | 18.37 | 8.07 | 1.43 | 10.04 | 30.68 | 14.67 | 35.18 | 168 (1.1x) | 28 (1.1x) |
| MASS | Y | 26.82 | 14.70 | 1.20 | 7.58 | 35.02 | 16.96 | 51.20 | 180 (1.0x) | 25 (1.0x) |
| w/ Sampling | Y | 23.90 | 12.13 | 1.85 | 13.09 | 32.56 | 14.97 | 73.72 | 130 (1.4x) | 77 (3.0x) |
| BART | Y | 26.84 | 14.69 | 1.39 | 8.85 | 35.45 | 17.22 | 49.03 | 199 (0.9x) | 23 (0.9x) |
| w/ Sampling | Y | 24.00 | 12.31 | 1.97 | 14.50 | 33.95 | 15.28 | 73.62 | 143 (1.3x) | 69 (2.7x) |
| *Non-autoregressive Text Generation Models* | | | | | | | | | | |
| Vanilla NAT | N | 18.33 | 6.37 | 0.43 | 0.96 | 28.44 | 11.29 | 89.13 | 23 (7.8x) | 703 (27.7x) |
| BANG | Y | 17.38 | 7.33 | 2.12 | 23.02 | 29.38 | 11.78 | 92.10 | 18 (10.1x) | 649 (25.6x) |
| MIST | Y | 18.55 | 8.86 | 0.54 | 2.56 | 23.57 | 9.09 | 8.15 | 25 (7.3x) | 330 (13.0x) |
| PreDAT (Ours) | Y | 27.06 | 15.05 | 1.33 | 8.31 | 34.11 | 17.17 | 57.50 | 24 (7.6x) | 507 (20.0x) |
| w/ Sampling | Y | 24.23 | 12.29 | 1.77 | 15.62 | 32.52 | 15.61 | 74.37 | 24 (7.4x) | 514 (20.3x) |
| w/ BeamSearch | Y | 27.31 | 15.39 | 1.15 | 6.30 | 34.61 | 17.84 | 50.55 | 48 (3.7x) | 318 (12.6x) |
| w/o Pre-training | N | 21.96 | 10.38 | 0.52 | 3.29 | 31.81 | 15.41 | 52.97 | 25 (7.2x) | 562 (22.2x) |
5.4 Ablation Study
In this section, we conduct ablation studies to reveal how our designs contribute to the results.
Loss Function
In PreDAT, we utilize the DAT loss to alleviate the multi-modality problem, which plays an important role in the pre-training. Notably, our pre-training task can be combined with other NAR losses, so we compare the DAT loss against CTC (Graves et al., 2006; Libovický and Helcl, 2018) and the token-level cross-entropy loss (CE).
Specifically, the same loss function is applied in both pre-training and fine-tuning to avoid discrepancies between the two training stages. For CTC, we randomly sample the up-sampling ratio from [1.5,2]. For CE, we do not use up-sampling (i.e., λ = 1) because the CE loss requires an output sequence with the same length as the target.
As shown in Figure 5, we find: (1) It is important to incorporate alignment-based NAR losses in pre-training, where CTC and DAT losses bring substantial improvements compared with the CE loss. (2) The NAR model pre-trained with CE still outperforms previous pre-trained NAR baselines, verifying the effectiveness of our pre-training task in preserving sentence-level consistency and promoting bidirectional dependencies.
Pre-training Strategy
Our proposed pre-training task includes several strategies for constructing the training data pair. To evaluate the effects of these strategies, we design four groups of comparisons as follows, whose results are shown in Figure 6.
(a) Stage 1: Encoder Masking. Besides Span masking, we test two other strategies: Token masking, which independently samples masked positions (Devlin et al., 2019), and Sequence masking, which masks a single consecutive sequence. All strategies mask the same ratio of tokens. We conclude that the masked spans should be neither too short (about 1∼3 tokens in token masking), which prevents the NAR model from learning prediction consistency, nor too long (about 90 tokens in sequence masking), which makes the prediction too difficult.
(b) Stage 2, Step 1: Up-sample Ratios. We compare the random sampling ratio (4∼8x) against fixed up-sampling ratios (4x and 8x). We find that random up-sampling can diversify the DAG structure, which works as a data augmentation method and thus benefits the downstream performance.
(c) Stage 2, Step 2: Assignment Strategies. Besides the proposed assignment strategy based on the path probability (MaxProb), we test Uniform and Random assignment, which place the target tokens in the decoder input uniformly or randomly. We find that the MaxProb assignment can better determine the length of each masked segment according to the model’s own prediction, leading to slightly better results than the other strategies.
(d) Stage 2, Step 3: Re-masking Strategies. Besides the Fixed masking strategy, we also try the Adaptive and Adaptive + Annealing masking strategies proposed by Qian et al. (2021), which adjust the masking ratio according to the difficulty of each sample. These strategies perform similarly and all outperform a fully masked decoder input (All Masked), which verifies the importance of providing information in the decoder input for bidirectional dependency modelling. Since the adaptive strategies bring no further gains in pre-training, we use the fixed masking ratio for simplicity.
Up-sampling Ratio in Fine-tuning
As described in Section 4.2, we obtain the decoder input length in fine-tuning by up-sampling. To investigate how the up-sampling strategies affect performance, we evaluate different combinations of up-sampling ratios in pre-training and fine-tuning.
As shown in Figure 7, random up-sampling always benefits the performance in pre-training and fine-tuning, together bringing an improvement of about 1.2 scores. It indicates that varying the DAG size is an important trick in training PreDAT. Moreover, the up-sampling ratios in pre-training and fine-tuning do not need to be the same, which can be helpful if smaller DAG sizes are preferred in downstream tasks due to limited memory budget.
Overlapped Decoding
Overlapped decoding aims to improve the decoding throughput by overlapping the execution of DAG prediction and beam search decoding. To verify its effectiveness, we evaluate the speedup with various batch sizes on XSUM.
As shown in Figure 8, our overlapped decoding brings a 17.3x speedup with a batch size of 32, largely outperforming vanilla decoding. We also note that throughput starts to decline as the batch size grows further, possibly because the padding introduced in larger batches wastes computation.
5.5 Analysis
In this section, we investigate the reasons why PreDAT achieves better automatic scores than pre-trained AR baselines, which may provide some insights for future NAR generation studies.
PreDAT Alleviates Error Accumulation.
Error accumulation is a major concern of autoregressive generation (Bengio et al., 2015; Ranzato et al., 2016; Arora et al., 2022), where a prediction error may be propagated into later decoding steps, leading to low-quality generated sentences. In contrast, NAR models naturally avoid the problem due to their unbiased prediction order.
As shown in Figure 9, we find that MASS has a strong tendency to predict earlier generated tokens more accurately than later generated ones, which applies to both left-to-right and right-to-left models. In contrast, our PreDAT shows no significant preference for any positions because it predicts all tokens simultaneously, which reveals the advantages of unbiased prediction order in NAR generation models.
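One simple way to produce such a position-wise comparison is to bucket token-level correctness by relative position in the output, as sketched below; the exact protocol behind Figure 9 may differ, so this should be read as an illustrative assumption rather than the paper's measurement code.

```python
from collections import defaultdict

def accuracy_by_relative_position(pairs, num_bins=10):
    """Sketch: token accuracy bucketed by relative position in the output.
    `pairs` is a list of (predicted_tokens, reference_tokens); only the
    overlapping prefix is compared, which is a simplification."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for pred, ref in pairs:
        n = min(len(pred), len(ref))
        for i in range(n):
            b = min(num_bins - 1, int(num_bins * i / n))   # relative position bin
            totals[b] += 1
            hits[b] += int(pred[i] == ref[i])
    return [hits[b] / totals[b] if totals[b] else 0.0 for b in range(num_bins)]
```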
PreDAT Improves the Relevance to the Input.
Previous studies have empirically found that AR-generated texts may lose relevance to the input sentences, which is also known as hallucination (Maynez et al., 2020; Ji et al., 2022) or off-prompt errors (Dou et al., 2022). One explanation is that AR models may be distracted by their generated prefixes, which can be avoided in NAR generation (Huang et al., 2021).
To verify our hypothesis, we introduce two metrics to evaluate the relevance to inputs: Knowledge F1 (Shuster et al., 2021) and PARENT-T (Dhingra et al., 2019; Wang et al., 2020). Knowledge F1 measures the unigram F1 between generated sentences and the input knowledge, while PARENT-T measures n-gram entailment. Both metrics require the extraction of knowledge pieces that should appear in the generated sentences. For simplicity, we take each sentence in the passage (of XSUM) or the persona profile (of PersonaChat) as a piece of knowledge and further filter out the stop words.
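A set-based sketch of the Knowledge F1 computation described above is given below; the actual tokenization, stopword list, and any count-based weighting are assumptions left unspecified here.

```python
def knowledge_f1(generated_tokens, knowledge_tokens, stopwords=frozenset()):
    """Sketch: unigram F1 between a generated sentence and the input
    knowledge, after removing stop words (set-based simplification)."""
    gen = set(t.lower() for t in generated_tokens) - stopwords
    know = set(t.lower() for t in knowledge_tokens) - stopwords
    if not gen or not know:
        return 0.0, 0.0, 0.0
    overlap = gen & know
    p = len(overlap) / len(gen)
    r = len(overlap) / len(know)
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1
```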
As shown in Table 4, PreDAT achieves higher precision than MASS in using the input knowledge on both datasets (+1.2 on average). It indicates that PreDAT is less likely to produce irrelevant keywords, supporting our hypothesis that the NAR model can better concentrate on the input. However, we also notice that PreDAT and MASS have comparable recall, showing that it is still challenging to cover more keywords.
Table 4: Relevance to the input measured by Knowledge F1 and PARENT-T (P: precision, R: recall).

| Dataset | Model | Knowledge P | Knowledge R | Knowledge F1 | PARENT-T P | PARENT-T R | PARENT-T F1 |
|---|---|---|---|---|---|---|---|
| XSUM | MASS | 35.1 | 9.7 | 14.7 | 35.1 | 8.5 | 13.1 |
| XSUM | PreDAT | 36.3 | 9.9 | 14.9 | 36.4 | 8.6 | 13.3 |
| PersonaChat | MASS | 19.6 | 17.2 | 17.8 | 13.2 | 11.3 | 11.5 |
| PersonaChat | PreDAT | 21.1 | 17.7 | 18.5 | 13.8 | 11.0 | 11.5 |
5.6 Manual Evaluation
Although PreDAT shows surprising performance in automatic evaluation, it is still questionable whether these automatic metrics are reliable when comparing AR and NAR models. In this section, we conduct a manual evaluation that compares PreDAT against pre-trained AR and NAR baselines.
Settings
We compare PreDAT against three baselines: two NAR models (BANG and MIST) and an AR model (MASS). We randomly selected 150 samples from SQuAD1.1, yielding 600 generated sentences from the four models. For each sample, three annotators were asked to rank the outputs along two dimensions: grammaticality measures whether the output contains any grammatical errors, and appropriateness measures whether the output is reasonable for the given context.
Results
The results are shown in Table 5, where we highlight two findings: (1) PreDAT achieves a significant quality improvement over previous NAR models, with annotators strongly preferring PreDAT (Win% + Tie% > 83%). (2) There is still a quality gap between PreDAT and the AR model. Although PreDAT achieves higher word overlap in automatic evaluation, it exhibits poorer grammaticality in human ratings. A possible reason is that PreDAT preserves better relevance to the inputs, which leads to higher word overlap, but remains weaker than AR models in preserving consistency among the generated tokens.
Table 5: Manual evaluation on SQuAD1.1. Win/Tie/Lose are percentages from PreDAT’s perspective, κ denotes inter-annotator agreement, and ** marks statistical significance.

| Comparison | Grammaticality Win | Tie | Lose | κ | Appropriateness Win | Tie | Lose | κ |
|---|---|---|---|---|---|---|---|---|
| *Comparison against Non-autoregressive Models* | | | | | | | | |
| vs. BANG | 75.3** | 12.0 | 12.7 | 0.66 | 69.8** | 17.3 | 12.9 | 0.59 |
| vs. MIST | 66.7** | 18.0 | 15.3 | 0.50 | 57.1** | 26.0 | 16.9 | 0.47 |
| *Comparison against Autoregressive Models* | | | | | | | | |
| vs. MASS | 15.1 | 47.8 | 37.1** | 0.32 | 32.2 | 36.7 | 31.1 | 0.46 |
Typical Errors and Case Study
To better understand how PreDAT makes errors, we investigate the typical errors in the generated outputs. Specifically, we randomly chose 100 samples from SQuAD1.1, collected the outputs of the four models, and then manually annotated the errors in these outputs.
Figure 10 presents the proportions of error types. In terms of grammaticality, PreDAT largely addresses the major problems of previous NAR models, such as incomprehensible outputs and repetitions. However, there are still some word errors, which affect only a small fragment of the sentence but are very obvious to human readers, leading to unsatisfactory grammaticality ratings. We believe this problem can be alleviated by post-editing or iterative refinement, which we leave for future work. In terms of appropriateness, PreDAT has an error distribution comparable to MASS, showing its ability to extract and organize information into appropriate outputs.
To support the above discussions, we show some output cases in Table 6. We find that previous NAR models usually generate low-quality texts, whereas PreDAT achieves significant improvement. Moreover, PreDAT maintains a strong relevance to the inputs, yet it can occasionally introduce grammatical errors. In contrast, MASS generates plausible outputs, but they may not always be faithful. This observation highlights the distinctive behaviors between AR and NAR models.
6 Limitations
Although PreDAT achieves a significant advancement in NAR generation, it still faces the following limitations:
(1) Although PreDAT achieves superior performance in automatic evaluation, it still significantly underperforms AR models in grammaticality according to human evaluation (as discussed in Section 5.6). This inconsistency can be attributed to the different biases of AR and NAR models: AR models tend to generate fluent outputs but may sacrifice relevance to the input, while NAR models prioritize relevance but may incur grammatical errors. It is important to take this behavior into consideration when applying PreDAT to real-world applications.
(2) PreDAT may struggle with capturing long-range coherence, because NAR models are inherently weak in modeling token dependencies, and PreDAT is pre-trained only on predicting 15-token-long fragments. Notably, our experiments are conducted on relatively short text generation (whose length statistics are shown in Table 1), and the performance on longer text generation tasks requires further investigation.
(3) Compared with AR models, PreDAT requires more GPU memory during inference and takes more time in fine-tuning (typically 2∼4 times in our experiments). This is because PreDAT’s decoder has to process a much longer sequence.
7 Conclusion
In this paper, we propose a pre-training task to promote sentence-level consistency and bidirectional dependencies for NAR generation. We demonstrate that combining the state-of-the-art NAR models with appropriate pre-training can lead to efficient and high-quality text generation on a wide range of tasks, where our PreDAT largely outperforms previous NAR pre-trained models in generation quality. We further show that, compared with AR models, PreDAT alleviates error accumulation and enhances relevance to inputs, but still introduces non-negligible grammatical problems, thereby providing new insights into the strengths and weaknesses of NAR generation.
Acknowledgments
This paper was supported by the National Science Foundation for Distinguished Young Scholars (with grant no. 62125604) and the Guoqiang Institute of Tsinghua University, with grant no. 2020GQG0005. We are grateful to the action editor and the anonymous reviewers for their valuable suggestions and feedback.
Notes
Our code and pre-trained models are available at https://github.com/thu-coai/DA-Transformer.
Some previous work (Liu et al., 2021) utilizes nltk’s sentence BLEU with SmoothingFunction().method7.
References