Abstract
It is intuitive that semantic representations can be useful for machine translation, mainly because they can help enforce meaning preservation and handle the data sparsity of machine translation models (many sentences correspond to one meaning). On the other hand, little work has been done on leveraging semantics for neural machine translation (NMT). In this work, we study the usefulness of AMR (abstract meaning representation) for NMT. Experiments on a standard English-to-German dataset show that incorporating AMR as additional knowledge can significantly improve a strong attention-based sequence-to-sequence neural translation model.
1 Introduction
It is intuitive that semantic representations ought to be relevant to machine translation, given that the task is to produce a target language sentence with the same meaning as the source language input. Semantic representations formed the core of the earliest symbolic machine translation systems, and have been applied to statistical but non-neural systems as well.
Leveraging syntax for neural machine translation (NMT) has been an active research topic (Stahlberg et al., 2016; Aharoni and Goldberg, 2017; Li et al., 2017; Chen et al., 2017; Bastings et al., 2017; Wu et al., 2017; Chen et al., 2018). On the other hand, exploring semantics for NMT has so far received relatively little attention. Recently, Marcheggiani et al. (2018) exploited semantic role labeling (SRL) for NMT, showing that the predicate–argument information from SRL can improve the performance of an attention-based sequence-to-sequence model by alleviating the “argument switching” problem,1 a frequent and severe issue faced by NMT systems (Isabelle et al., 2017). Figure 1(a) shows an example of semantic role information, which only captures the relations between a predicate (gave) and its arguments (John, wife, and present). Other important information, such as the relation between John and wife, cannot be incorporated.
In this paper, we explore the usefulness of abstract meaning representation (AMR) (Banarescu et al., 2013) as a semantic representation for NMT. AMR is a semantic formalism that encodes the meaning of a sentence as a rooted, directed graph. Figure 1(b) shows an AMR graph, in which the nodes (such as give-01 and John) represent concepts and the edges (such as :ARG0 and :ARG1) represent relations between the concepts they connect. Compared with semantic roles, AMRs capture more relations, such as the relation between John and wife (represented by the subgraph within dotted lines). In addition, AMRs directly capture entity relations and abstract away inflections and function words. As a result, they can serve as a source of knowledge for machine translation that is orthogonal to the textual input. Furthermore, structural information from AMR graphs can help reduce data sparsity when training data are not sufficient for large-scale training.
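To make the formalism concrete, below is a simplified PENMAN rendering of an AMR along the lines of Figure 1(b), read with the third-party penman Python package. Both the exact graph and the use of this package are illustrative assumptions, not material from the paper.

```python
# Illustrative only: a simplified AMR (PENMAN notation) for a sentence like
# "John gave his wife a present", parsed with the third-party `penman` package.
import penman

AMR_STRING = """
(g / give-01
   :ARG0 (p / person :name (n / name :op1 "John"))
   :ARG1 (p2 / present)
   :ARG2 (p3 / person
             :ARG0-of (h / have-rel-role-91 :ARG1 p :ARG2 (w / wife))))
"""

graph = penman.decode(AMR_STRING)
print(graph.instances())  # concept nodes, e.g., give-01, person, present
print(graph.edges())      # labeled relations, e.g., :ARG0, :ARG1, :ARG2
```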
Recent advances in AMR parsing keep pushing the boundary of state-of-the-art performance (Flanigan et al., 2014; Artzi et al., 2015; Pust et al., 2015; Peng et al., 2015; Flanigan et al., 2016; Buys and Blunsom, 2017; Konstas et al., 2017; Wang and Xue, 2017; Lyu and Titov, 2018; Peng et al., 2018; Groschwitz et al., 2018; Guo and Lu, 2018), and have made it possible for automatically generated AMRs to benefit down-stream tasks, such as question answering (Mitra and Baral, 2015), summarization (Takase et al., 2016), and event detection (Li et al., 2015a). However, to our knowledge, no existing work has exploited AMR for enhancing NMT.
We fill this gap, taking an attention-based sequence-to-sequence system similar to Bahdanau et al. (2015) as our baseline. To leverage knowledge within an AMR graph, we adopt a graph recurrent network (GRN) (Song et al., 2018; Zhang et al., 2018) as the AMR encoder. In particular, a full AMR graph is considered as a single state, with nodes in the graph being its substates. State transitions are performed on the graph recurrently, allowing substates to exchange information through edges. At each recurrent step, each node advances its current state by receiving information from the current states of its adjacent nodes. Thus, with an increasing number of recurrent steps, each node receives information from a larger context. Figure 3 shows the recurrent transition, in which all nodes update simultaneously. Compared with other methods for encoding AMRs (Konstas et al., 2017), the GRN keeps the original graph structure, and thus no information is lost (Song et al., 2018). For the decoding stage, two separate attention mechanisms are adopted over the AMR encoder and the sequential encoder, respectively.
Experiments on WMT16 English-to-German data (4.17M) show that adopting AMR significantly improves a strong attention-based sequence-to-sequence baseline (25.5 vs 23.7 BLEU). When trained with small-scale (226K) data, the improvement increases (19.2 vs 16.0 BLEU), which shows that the structural information from AMR can alleviate data sparsity when training data are not sufficient. To our knowledge, we are the first to investigate AMR for NMT.
Our code and parallel data (training/dev/test) with automatically parsed AMRs are available at https://github.com/freesunshine0316/semantic-nmt.
2 Related Work
Most previous work on exploring semantics for statistical machine translation (SMT) studies the usefulness of predicate–argument structure from semantic role labeling (Wong and Mooney, 2006; Wu and Fung, 2009; Liu and Gildea, 2010; Baker et al., 2012). Jones et al. (2012) first convert Prolog expressions into graphical meaning representations, leveraging synchronous hyperedge replacement grammar to parse the input graphs while generating the outputs. Their graphical meaning representation is different from AMR under a strict definition, and their experimental data are limited to 880 sentences. We are the first to investigate AMR on a large-scale machine translation task.
Recently, Marcheggiani et al. (2018) investigated SRL for NMT. The predicate–argument structures are encoded via graph convolutional network (GCN) layers (Kipf and Welling, 2017), which are laid on top of regular BiRNN or CNN layers. Our work is in line with exploring semantic information, but differs in exploiting AMR rather than SRL for NMT. In addition, we leverage a GRN (Song et al., 2018; Zhang et al., 2018) for modeling AMRs rather than a GCN, which is formally consistent with the RNN sentence encoder. Since there is no one-to-one correspondence between AMR nodes and source words, we adopt a doubly attentive LSTM decoder, which is another major difference from Marcheggiani et al. (2018).
GRNs have recently been used to model graph structures in NLP tasks. In particular, Zhang et al. (2018) use a GRN model to represent raw sentences by building a graph structure of neighboring words and a sentence-level node, showing that the encoder outperforms BiLSTMs and Transformer (Vaswani et al., 2017) on classification and sequence labeling tasks; Song et al. (2018) build a GRN for encoding AMR graphs for text generation, showing that the representation is superior compared to BiLSTM on serialized AMR. We extend Song et al. (2018) by investigating the usefulness of AMR for neural machine translation. To our knowledge, we are the first to use GRN for machine translation.
In addition to GRNs and GCNs, there have been other graph neural networks, such as the gated graph neural network (GGNN) (Li et al., 2015b; Beck et al., 2018). Because our main concern is to empirically investigate the effectiveness of AMR for NMT, we leave comparing GCN, GGNN, and GRN on our task to future work.
3 Baseline: Attention-Based BiLSTM
3.1 BiLSTM Encoder
3.2 Attention-Based Decoder
4 Incorporating AMR
Figure 2 shows the overall architecture of our model, which adopts a BiLSTM (bottom left) and our graph recurrent network (GRN)2 (bottom right) for encoding the source sentence and AMR, respectively. An attention-based LSTM decoder is used to generate the output sequence in the target language, with attention models over both the sequential encoder and the graph encoder. The attention memory for the graph encoder is from the last step of the graph state transition process, which is shown in Figure 3.
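As a rough illustration of the dual attention described above, the sketch below runs one decoder step that attends separately over the sequential (BiLSTM) memory and the graph (GRN) memory and concatenates the two context vectors with the decoder state. The bilinear scoring function and all names here are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, memory, W):
    # Bilinear attention scores between the decoder state and each memory row
    # (the scoring function is an assumption; the paper's form may differ).
    scores = memory @ (W @ query)       # [memory_len]
    weights = softmax(scores)
    return weights @ memory             # context vector, shape [d]

def doubly_attentive_step(dec_state, seq_memory, amr_memory, W_seq, W_amr):
    """One decoder step with separate attention over the BiLSTM states
    (seq_memory) and the GRN node states (amr_memory)."""
    ctx_seq = attend(dec_state, seq_memory, W_seq)
    ctx_amr = attend(dec_state, amr_memory, W_amr)
    # Both contexts, together with the decoder state, feed the next LSTM step
    # and the output word predictor.
    return np.concatenate([dec_state, ctx_seq, ctx_amr])
```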
4.1 Encoding AMR with GRN
A recurrent neural network is used to model the state transition process. In particular, the transition from $g_{t-1}$ to $g_t$ consists of a hidden state transition for each node (such as from $h_j^{t-1}$ to $h_j^t$), as shown in Figure 3. At each state transition step $t$, our model conducts direct communication between a node and all nodes directly connected to it. To avoid vanishing or exploding gradients, an LSTM (Hochreiter and Schmidhuber, 1997) is adopted, where a cell $c_j^t$ records the memory for $h_j^t$. We use an input gate $i_j^t$, an output gate $o_j^t$, and a forget gate $f_j^t$ to control the information flow from the inputs and to the output $h_j^t$.
With this state transition mechanism, information of each node is propagated to all its neighboring nodes after each step. So after several transition steps, each node state contains the information of a large context, including its ancestors, descendants, and siblings. For the worst case where the input graph is a chain of nodes, the maximum number of steps necessary for information from one arbitrary node to reach another is equal to the size of the graph. We experiment with different numbers of transition steps to study the effectiveness of global encoding.
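A minimal sketch of one such gated state transition is given below. It assumes each node's incoming message is simply the sum of its neighbors' current hidden states; the weight names, this aggregation, and the omission of edge-label embeddings are simplifications of the mechanism described above, not the paper's exact equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def graph_state_step(h, c, x, adj, params):
    """One graph state transition step (simplified sketch).

    h, c   : [num_nodes, d] current hidden and cell states of all nodes
    x      : [num_nodes, d] node inputs (e.g., concept embeddings)
    adj    : list of neighbor-index lists, one per node
    params : dict of weights W_*, U_*, b_* for the gates and update
             (hypothetical names; W_* is [d, 2d], U_* is [d, d], b_* is [d])
    """
    d = h.shape[1]
    new_h, new_c = np.zeros_like(h), np.zeros_like(c)
    for j, neighbors in enumerate(adj):
        # Message from directly connected nodes: sum of their current states.
        m = h[neighbors].sum(axis=0) if neighbors else np.zeros(d)
        inp = np.concatenate([x[j], m])                     # [2d]
        i_g = sigmoid(params["W_i"] @ inp + params["U_i"] @ h[j] + params["b_i"])
        o_g = sigmoid(params["W_o"] @ inp + params["U_o"] @ h[j] + params["b_o"])
        f_g = sigmoid(params["W_f"] @ inp + params["U_f"] @ h[j] + params["b_f"])
        u   = np.tanh(params["W_u"] @ inp + params["U_u"] @ h[j] + params["b_u"])
        new_c[j] = f_g * c[j] + i_g * u                     # gated cell update
        new_h[j] = o_g * np.tanh(new_c[j])                  # new hidden state
    return new_h, new_c
```

Stacking this step t times lets each node see a t-hop neighborhood, which is why more transition steps give each concept a larger context.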
4.1.1 Input Representation
4.2 Incorporating AMR Information with a Doubly Attentive Decoder
5 Training
6 Experiments
We empirically investigate the effectiveness of AMR for English-to-German translation.
6.1 Setup
Data
We use the WMT16 English-to-German dataset, which contains around 4.5 million sentence pairs for training. In addition, we use a subset of the full dataset (News Commentary v11 [NC-v11], containing around 243,000 sentence pairs) for development and additional experiments. For all experiments, we use newstest2013 and newstest2016 as the development and test sets, respectively.
To preprocess the data, the tokenizer from Moses is used to tokenize both the English and German sides. Training sentence pairs where either side is longer than 50 words are filtered out after tokenization. To handle rare and compound words, byte-pair encoding (BPE) (Sennrich et al., 2016) is applied to both sides. In particular, 8,000 and 16,000 BPE merges are used on the News Commentary v11 subset and the full training set, respectively. In addition, JAMR (Flanigan et al., 2016) is adopted to parse the English sentences into AMRs before BPE is applied. The statistics of the training data and vocabularies after preprocessing are shown in Tables 1 and 2, respectively. For the experiments with the full training set, we use the top 40K entries of the AMR vocabulary, which cover more than 99.6% of the training set.
Table 2: Vocabulary sizes after preprocessing (EN-ori is the original, pre-BPE English vocabulary; EN and DE are the BPE-segmented English and German vocabularies; AMR is the AMR-side vocabulary).

| Dataset | EN-ori | EN | AMR | DE |
|---|---|---|---|---|
| NC-v11 | 79.8K | 8.4K | 36.6K | 8.3K |
| Full | 874K | 19.3K | 403K | 19.1K |
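The preprocessing order described above is simple enough to sketch. The snippet below is a hypothetical illustration, assuming the subword-nmt package and pre-learned BPE code files; it is not the authors' preprocessing script.

```python
# Hypothetical sketch: drop pairs where either side exceeds 50 tokens, then
# apply BPE to both sides. AMR parsing with JAMR is done on the original
# (pre-BPE) English sentences, so it would happen before this BPE step.
from subword_nmt.apply_bpe import BPE

def preprocess(en_lines, de_lines, en_codes="codes.en", de_codes="codes.de",
               max_len=50):
    en_bpe = BPE(open(en_codes, encoding="utf-8"))
    de_bpe = BPE(open(de_codes, encoding="utf-8"))
    pairs = []
    for en, de in zip(en_lines, de_lines):
        if len(en.split()) > max_len or len(de.split()) > max_len:
            continue  # length filter on tokenized text
        pairs.append((en_bpe.process_line(en), de_bpe.process_line(de)))
    return pairs
```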
For our dependency-based and SRL-based baselines (which will be introduced in Baseline Systems), we choose Stanford CoreNLP (Manning et al., 2014) and IBM SIRE to generate dependency trees and semantic roles, respectively. Since both dependency trees and semantic roles are based on the original English sentences without BPE, we used the top 100K frequent English words, which cover roughly 99.0% of the training set.
Hyperparameters
We use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.0005. The batch size is set to 128. Between layers, we apply dropout with a probability of 0.2. The best model is picked based on the cross-entropy loss on the development set. For model hyperparameters, we set the graph state transition number to 10 according to development experiments. Each node takes information from at most six neighbors. BLEU (Papineni et al., 2002), TER (Snover et al., 2006), and Meteor (Denkowski and Lavie, 2014) are used as the metrics on cased and tokenized results.
For experiments with the NC-v11 subset, both word embedding and hidden vector sizes are set to 500, and the models are trained for at most 30 epochs. For experiments with the full training set, the word embedding and hidden state sizes are set to 800, and our models are trained for at most 10 epochs. For all systems, the word embeddings are randomly initialized and updated during training.
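For reference, the hyperparameters above can be collected into a single configuration. The key names below are hypothetical and do not correspond to the released code; the values are those reported in this section.

```python
# Hyperparameters reported above, gathered into an illustrative config dict.
CONFIG = {
    "optimizer": "adam",
    "learning_rate": 0.0005,
    "batch_size": 128,
    "dropout": 0.2,
    "graph_state_transitions": 10,   # chosen on the development set
    "max_neighbors_per_node": 6,
    # NC-v11 subset vs. full WMT16 training set
    "nc_v11": {"emb_size": 500, "hidden_size": 500, "max_epochs": 30},
    "full":   {"emb_size": 800, "hidden_size": 800, "max_epochs": 10},
}
```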
Baseline Systems
We compare our model with the following systems. Seq2seq represents our attention-based LSTM baseline (Section 3), and Dual2seq is our model, which takes both a sequential and a graph encoder and adopts a doubly attentive decoder (Section 4). To show the merit of AMR, we further contrast our model with the following baselines, all of which adopt the same doubly attentive framework with a BiLSTM for encoding BPE-segmented source sentences: Dual2seq-LinAMR uses another BiLSTM for encoding linearized AMRs. Dual2seq-Dep and Dual2seq-SRL adopt our graph recurrent network to encode original source sentences with dependency and semantic role annotations, respectively. The three baselines are useful for contrasting different methods of encoding AMRs and for comparing AMRs with other popular structural information for NMT.
We also compare with Transformer (Vaswani et al., 2017) and OpenNMT (Klein et al., 2017), trained on the same dataset and with the same set of hyperparameters as our systems. In particular, we compare with Transformer-tf, one popular implementation of Transformer based on TensorFlow, and we choose OpenNMT-tf, an official release of OpenNMT implemented with TensorFlow. For a fair comparison, OpenNMT-tf has one layer for both the encoder and the decoder, and Transformer-tf has the default configuration (N = 6), but with parameters being shared among different blocks.
6.2 Development Experiments
Figure 4 shows the system performances as a function of the number of graph state transitions on the development set. Dual2seq (self) represents our dual-attentive model, but its graph encoder encodes the source sentence, which is treated as a chain graph instead of an AMR graph. Compared with Dual2seq, Dual2seq (self) has the same number of parameters, but without semantic information from AMR. Due to hardware limitations, we do not perform an exhaustive search by evaluating every possible state transition number, but only transition numbers of 1, 5, 10, and 12.
Our Dual2seq shows consistent performance improvements when the transition number increases both from 1 to 5 (roughly +1.3 BLEU points) and from 5 to 10 (roughly +0.2 BLEU points). The former improvement is larger than the latter, indicating that performance starts to converge after five transition steps. Further increasing the number of transition steps from 10 to 12 gives a slight performance drop. Based on these observations, we set the number of state transition steps to 10 for all experiments.
On the other hand, Dual2seq (self) shows only small improvements by increasing the state transition number, and it does not perform better than Seq2seq. Both results show that the performance gains of Dual2seq are not due to an increased number of parameters.
6.3 Main Results
Table 3 shows the test BLEU, TER, and Meteor scores of all systems trained on the small-scale News Commentary v11 subset or the large-scale full set. Dual2seq is consistently better than the other systems under all three metrics, showing the effectiveness of the semantic information provided by AMR. In particular, Dual2seq is better than both OpenNMT-tf and Transformer-tf. The recurrent graph state transition of Dual2seq is similar to Transformer in that it iteratively incorporates global information. The improvement of Dual2seq over Transformer-tf undoubtedly comes from the use of AMRs, which provide complementary information to the textual inputs of the source language.
Table 3: Test results (BLEU, TER↓, Meteor) when trained on the NC-v11 subset and the full set. * marks the BLEU improvements over Seq2seq reported as statistically significant in the text.

| System | NC-v11 BLEU | NC-v11 TER↓ | NC-v11 Meteor | Full BLEU | Full TER↓ | Full Meteor |
|---|---|---|---|---|---|---|
| OpenNMT-tf | 15.1 | 0.6902 | 0.3040 | 24.3 | 0.5567 | 0.4225 |
| Transformer-tf | 17.1 | 0.6647 | 0.3578 | 25.1 | 0.5537 | 0.4344 |
| Seq2seq | 16.0 | 0.6695 | 0.3379 | 23.7 | 0.5590 | 0.4258 |
| Dual2seq-LinAMR | 17.3 | 0.6530 | 0.3612 | 24.0 | 0.5643 | 0.4246 |
| Dual2seq-SRL | 17.2 | 0.6591 | 0.3644 | 23.8 | 0.5626 | 0.4223 |
| Dual2seq-Dep | 17.8 | 0.6516 | 0.3673 | 25.0 | 0.5538 | 0.4328 |
| Dual2seq | 19.2* | 0.6305 | 0.3840 | 25.5* | 0.5480 | 0.4376 |
In terms of BLEU score, Dual2seq is significantly better than Seq2seq in both settings, which shows the effectiveness of incorporating AMR information. In particular, the improvement is much larger under the small-scale setting (+3.2 BLEU) than under the large-scale setting (+1.7 BLEU). This is evidence that the structural and coarse-grained semantic information encoded in AMRs can be more helpful when training data are limited.
When trained on the NC-v11 subset, the gap between Seq2seq and Dual2seq under Meteor (around 5 points) is greater than that under BLEU (around 3 points). Since Meteor gives partial credit to outputs that are synonyms to the reference or share identical stems, one possible explanation is that the structural information within AMRs helps to better translate the concepts from the source language, which may be synonyms or paronyms of reference words.
As shown in the second group of Table 3, we further compare our model with other methods of leveraging syntactic or semantic information. Dual2seq-LinAMR shows much worse performance than our model and only slightly outperforms the Seq2seq baseline. Both results show that simply taking advantage of the AMR concepts without their relations does not help very much. One reason may be that AMR concepts, such as John and Mary, also appear in the textual input, and thus are also encoded by the other (sequential) encoder.9 The gap between Dual2seq and Dual2seq-LinAMR comes from modeling the relations between concepts, which can be helpful for deciding target word order by enhancing the relations in source sentences. We conclude that properly encoding AMRs is necessary to make them useful.
Encoding dependency trees instead of AMRs, Dual2seq-Dep shows a larger performance gap with our model on small-scale training data (17.8 vs 19.2) than on large-scale training data (25.0 vs 25.5). This is likely because AMRs are more useful for alleviating data sparsity than dependency trees, since words are lemmatized into unified concepts when sentences are parsed into AMRs. For modeling long-range dependencies, AMRs have one crucial advantage over dependency trees: they model concept–concept relations more directly. This is because AMRs drop function words, so the distances between concepts are generally smaller in AMRs than in dependency trees. Finally, Dual2seq-SRL is less effective than our model, because the annotations labeled by SRL are a subset of those in AMRs.
We outperform Marcheggiani et al. (2018) on the same datasets, although our systems differ in a number of respects. When trained on the NC-v11 data, they report BLEU scores of 14.9 with only their BiLSTM baseline, 16.1 with additional dependency information, 15.6 with additional semantic roles, and 15.8 with both as additional knowledge. Using Full as the training data, the scores become 23.3, 23.9, 24.5, and 24.9, respectively. In addition to the different semantic representation being used (AMR vs SRL), Marcheggiani et al. (2018) lay GCN (Kipf and Welling, 2017) layers on top of a bidirectional LSTM (BiLSTM) layer and then concatenate the layer outputs as the attention memory. The GCN layers encode the semantic role information, while the BiLSTM layers encode the input sentence in the source language, so the concatenated hidden states contain information from both the semantic roles and the source sentence. For incorporating AMR, because there is no one-to-one word-to-node correspondence between a sentence and the corresponding AMR graph, we adopt separate attention models. Our BLEU scores are higher than theirs, but we cannot conclude that the advantage primarily comes from AMR.
6.4 Analysis
Influence of AMR Parsing Accuracy
To analyze the influence of AMR parsing on our model performance, we further evaluate on a test set where gold AMRs for the English side are available. In particular, we choose The Little Prince corpus, which contains 1,562 sentences with gold AMR annotations. Since there are no parallel German sentences, we take a German version of The Little Prince novel and perform manual sentence alignment. Taking the whole The Little Prince corpus as the test set, we measure the influence of AMR parsing accuracy by evaluating on the test set with either gold or automatically parsed AMRs. The automatic AMRs are generated by parsing the English sentences with JAMR.
Table 4 shows the BLEU scores of our Dual2seq model taking gold or automatic AMRs as inputs. Though not listed in Table 4, Seq2seq achieves a BLEU score of 15.6, which is 1.2 BLEU points lower than using automatic AMR information. The improvement from automatic AMRs to gold AMRs (+0.7 BLEU) is significant, which shows that the translation quality of our model can be further improved as AMR parsing accuracy increases. However, the BLEU score with gold AMRs does not indicate the best performance our model can potentially achieve. The primary reason is that, even though the test set is coupled with gold AMRs, the training set is not. Trained with automatic AMRs, our model can learn to selectively trust the AMR structure. An additional reason is the domain difference: The Little Prince data are in the literary domain, while our training data are in the news domain. There can be a further performance gain if the accuracy of the automatic AMRs on the training set is improved.
Performance Based on Sentence Length
We hypothesize that AMRs should be more beneficial for longer sentences: Those are more likely to contain long-distance dependencies (such as discourse information and predicate–argument structures), which may not be adequately captured by linear chain RNNs but are directly encoded in AMRs. To test this, we partition the test data into four buckets by length and calculate BLEU for each of them. Figure 5 shows the performances of our model along with Dual2seq-Dep and Seq2seq. Our model outperforms the Seq2seq baseline rather uniformly across all buckets, except for the first one, where the two are roughly equal. This relative uniformity may seem surprising given our hypothesis. On the one hand, Seq2seq already fails to capture some dependencies in medium-length instances, so AMRs help there as well; on the other hand, AMR parses are noisier for longer sentences, which prevents us from obtaining extra improvements with AMRs on the longest ones.
Dependency trees have proved useful for capturing long-range dependencies. Figure 5 shows that AMRs are comparatively better than dependency trees, especially on medium-length (21–30) sentences. The reason may be that the AMRs of medium-length sentences are much more accurate than those of longer sentences, and thus better capture the relations between concepts. On the other hand, even though dependency trees are more accurate than AMRs, they still fail to represent relations in long sentences, likely because such relations are more difficult to detect in longer sentences. Another possible reason is that dependency trees do not incorporate coreference, which AMRs consider.
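A sketch of the bucketed evaluation behind Figure 5 is shown below. The use of the sacrebleu package and the exact bucket edges are assumptions; the paper reports BLEU on tokenized output.

```python
# Hypothetical sketch: bucket test sentences by source length, then compute
# BLEU per bucket with sacrebleu (bucket edges here are assumed, not the
# paper's exact partition).
import sacrebleu

def bleu_by_length(sources, hypotheses, references, edges=(10, 20, 30)):
    buckets = {i: ([], []) for i in range(len(edges) + 1)}
    for src, hyp, ref in zip(sources, hypotheses, references):
        idx = sum(len(src.split()) > e for e in edges)   # which length bucket
        buckets[idx][0].append(hyp)
        buckets[idx][1].append(ref)
    return {idx: sacrebleu.corpus_bleu(hyps, [refs]).score
            for idx, (hyps, refs) in buckets.items() if hyps}
```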
Human Evaluation
We further study the translation quality of predicate–argument structures by conducting a human evaluation on 100 instances from the test set. In the evaluation, translations of both Dual2seq and Seq2seq, together with the source English sentence, the German reference, and an AMR are provided to a German-speaking annotator to decide which translation better captures the predicate–argument structures in the source sentence. To avoid annotation bias, translation results of both models are swapped for some instances, and the German annotator does not know which model each translation belongs to. The annotator either selects a “winner” or makes a “tie” decision, meaning that both results are equally good.
Out of the 100 instances, Dual2seq wins on 46, Seq2seq wins on 23, and there is a tie on the remaining 31. Dual2seq wins on almost half of the instances, about twice as often as Seq2seq wins, indicating that AMRs help in translating the predicate–argument structures on the source side.
Case Study
The outputs of the baseline system (Seq2seq) and our final system (Dual2seq) are shown in Figure 6. In the first sentence, the AMR-based Dual2seq system correctly produces the reflexive pronoun sich as an argument of the verb trafen (meet), despite the distance between the words in the system output, and despite the fact that the equivalent English words each other do not appear in the system output. This is facilitated by the argument structure in the AMR analysis.
In the second sentence, the AMR-based Dual2seq system produces an overly literal translation for the English phrasal verb come across. The Seq2seq translation, however, incorrectly states that the police vehicles are refugees. The difficulty for the Seq2seq probably derives in part from the fact that are and coming are separated by the word constantly in the input, while the main predicate is clear in the AMR representation.
In the third sentence, the Dual2seq system correctly translates the object of breed as worms, while the Seq2seq translation incorrectly states that the scientists breed themselves. Here the difficulty is likely the distance between the object and the verb in the German output, which causes the Seq2seq system to lose track of the correct input position to translate.
7 Conclusion
We showed that AMRs can improve neural machine translation. In particular, the structural semantic information from AMRs can be complementary to the source textual input by introducing a higher level of information abstraction. A graph recurrent network (GRN) is leveraged to encode AMR graphs without breaking the original graph structure, and a sequential LSTM is used to encode the source input. The decoder is a doubly attentive LSTM, taking the encoding results of both the graph encoder and the sequential encoder as attention memories. Experiments on a standard benchmark showed that AMRs are helpful regardless of the sentence length and are more effective than other more popular choices, such as dependency trees and semantic roles.
Acknowledgments
We would like to thank the action editor and the anonymous reviewers for their insightful comments. We also thank Kai Song from Alibaba for suggestions on large-scale training, Parker Riley for comments on the draft, and Rochester’s CIRC for computational resources.
Notes
1. That is, flipping arguments corresponding to different roles.
2. We show the advantage of our graph encoder by comparing with another popular method for encoding AMRs in Section 6.3.
9. AMRs can contain multi-word concepts, such as New York City, but they are in the textual input.