Abstract
Word reordering is one of the most difficult aspects of statistical machine translation (SMT), and an important factor in its quality and efficiency. Despite the vast amount of research published to date, the interest of the community in this problem has not decreased, and no single method appears to be strongly dominant across language pairs. Instead, the choice of the optimal approach for a new translation task still seems to be mostly driven by empirical trials.
To orient the reader in this vast and complex research area, we present a comprehensive survey of word reordering viewed as a statistical modeling challenge and as a natural language phenomenon. The survey describes in detail how word reordering is modeled within different string-based and tree-based SMT frameworks and as a stand-alone task, including systematic overviews of the literature on advanced reordering modeling.
We then question why some approaches are more successful than others in different language pairs. We argue that besides measuring the amount of reordering, it is important to understand which kinds of reordering occur in a given language pair. To this end, we conduct a qualitative analysis of word reordering phenomena in a diverse sample of language pairs, based on a large collection of linguistic knowledge. Empirical results in the SMT literature are shown to support the hypothesis that a few linguistic facts can be very useful to anticipate the reordering characteristics of a language pair and to select the SMT framework that best suits them.
1. Introduction
Statistical machine translation (SMT) is a data-driven approach to the translation of text from one natural language into another. It emerged in the 1990s and matured in the 2000s to become widespread today; the core SMT methods (Brown et al. 1990, 1993; Berger et al. 1996; Koehn, Och, and Marcu 2003) learn direct correspondences between source and target language from collections of translated sentences, without the need for abstract linguistic representations. The main advantages of SMT are versatility and cost-effectiveness: In principle, the same modeling framework can be applied to any pair of languages with minimal engineering effort, given sufficient amounts of translation data. However, experience in a diverse range of language pairs has revealed that this form of modeling is highly sensitive to structural differences between source and target language, particularly at the level of word order.
Indeed, natural languages vary greatly in how they arrange sentence components, and translating words in the correct order is essential to preserving meaning across languages. In English, for instance, the role of different predicate arguments is determined precisely by their relative position within the sentence. Consider the translation example in Figure 1: Looking at the English glosses of the Arabic sentence, one can see that corresponding words in the two languages are placed in overall similar orders with the notable exception of the verb (jdd/renewed), which occurs at the beginning of the Arabic sentence but in the middle of the English one—more specifically, between the subject and the object. To reach the correct English order, three other reorderings are required between pairs of adjacent Arabic words: (AlEAhl/the-monarch, Almgrby/the-Moroccan), (dEm/support, -h/his), and (Alr}ys/the-president, Alfrnsy/the-French). This example suggests a simple division of reordering patterns into long range, or global, and short range, or local. However, other language pairs display more complex, hierarchical patterns.
Word reordering phenomena are naturally handled by human translators1 but are a major source of complexity for SMT. In very general terms, the task of SMT consists of breaking the input sentence into smaller units, selecting an optimal translation for each unit, and placing them in the correct order. Searching for the overall best translation throughout the space of all possible reorderings is, however, computationally intractable (Knight 1999). This crucial fact has motivated an impressive amount of research around two inter-related questions: namely, how to effectively restrict the set of allowed word permutations and how to detect the best permutation among them.
Existing solutions to these problems range from heuristic constraints, based on word-to-word distances and completely agnostic about the sentence content, to linguistically motivated SMT frameworks where the entire translation process is guided by syntactic structure. The research in word reordering has advanced together with core SMT research and has sometimes directed it, being one of the main motivations for the development of tree-based SMT. At the same time, the variety of word orders existing in world languages has pressed the SMT community to admit the importance of language-specific knowledge and to reassess its ambitions towards a universal translation algorithm.
According to the Machine Translation Archive, scientific interest in this specific subproblem of MT started around 2006 and has kept growing at a rapid pace. In 2014, the research papers mainly dedicated to reordering accounted for no less than 10% of all SMT papers.2 Despite the abundant research, word order differences remain among the most important factors affecting the performance of modern SMT systems, and new approaches to reordering are still proposed every year.
To orient the reader in this complex and productive research area, we present a comprehensive survey of word reordering viewed as a statistical modeling challenge and as a natural language phenomenon. Our survey notably differs from previous work (Costa-jussà and Fonollosa 2009) in that we not only review the existing approaches to word reordering in SMT, but we also question why some approaches are more successful than others in different language pairs. In particular, we argue that understanding the complexity of reordering in a given language pair is key to selecting the right SMT models and to improving them.
The survey is organized as follows: Section 2 explains how the word reordering problem is treated within different string-based and tree-based SMT frameworks, and as a stand-alone task (i.e., pre- and post-ordering). The literature on advanced reordering modeling is extensively reviewed, with a major focus on recent work. Section 3 describes the challenges of automatically assessing word reordering accuracy in SMT outputs. Section 4 presents a qualitative analysis of word reordering across language pairs. In particular, detailed word order profiles are provided for a sample of seven widely spoken languages representing structural and geographical diversity: namely, English, German, French, Arabic, Turkish, Japanese, and Chinese. The same section reviews empirical results from the SMT literature, showing that the proposed word order profiles are useful to anticipate the reordering characteristics of a language pair and to select the SMT framework that best suits them. The survey ends with a discussion of the strengths and weaknesses of the major approaches to reordering in SMT.
2. Approaches to Word Reordering in Statistical Machine Translation
A first important distinction has to be made between word reordering performed as part of the decoding process (Sections 2.1 to 2.3) and word reordering performed before or after it as a monolingual task decoupled from the bilingual translation task (Section 2.4).
Within the former, we further distinguish between string-based (sequential) approaches and tree-based (structural) approaches. String-based SMT (Sections 2.1 and 2.2) treats translation as a sequential task: The target sentence is built from left to right while the input units are visited in different orders and no dependencies other than word adjacency are considered. Accordingly, problem decomposition is applied to the target string: An optimal translation is sought for each prefix of the target translation, from the shortest to the longest. Tree-based SMT (Section 2.3) posits the existence of a tree structure to explain translation as a hierarchical process and to capture dependencies among non-adjacent text units. Problem decomposition is therefore based on this structure: An optimal translation is sought for each word span corresponding to a node in the tree, from the leaves up to the root. Whereas string-based SMT has to search over all input permutations that do not violate some general reordering constraints, tree-based SMT considers only those permutations that result from transforming a given tree representing the input sentence (for example, by permuting each node's children).
Moreover, we should note the difference between syntax-based SMT approaches that utilize trees produced by monolingual parsers trained on syntactic treebanks and data-driven tree-based SMT approaches that extract bilingual translation grammars directly from pairs of source and target sentences. In the former, word reordering is constrained by a given syntactic parse tree of the input sentence or by the grammar of the target language (or both), whereas in the latter, tree structure captures hierarchical reordering patterns that may or may not correspond to syntactically motivated rules.
2.1 Phrase-Based SMT
Phrase-based SMT (PSMT) is the currently dominant approach in string-based SMT. PSMT superseded the early word-based SMT framework (Brown et al. 1990, 1993; Berger et al. 1996) thanks to two important novelties: the use of multi-word translation units (Och 1999; Zens, Och, and Ney 2002; Koehn, Och, and Marcu 2003), and the move from a generative to a discriminative modeling framework (Och and Ney 2002).
The use of phrases mainly results in a better handling of ambiguous words and many-to-many word equivalences, but it also makes it possible to capture a considerable amount of local reordering phenomena within a translation unit (intra-phrase reordering). With reference to our running example (Figure 1), a PSMT model may handle the local reorderings as single phrase pairs—[AlEAhl Almgrby]-[The Moroccan monarch], and so forth—if these were observed in the training data. By contrast, it is unlikely that a single long phrase spanning from jdd to AlsAds was observed; the long-range reordering of the verb therefore has to be handled by inter-phrase reordering.
State-of-the-art PSMT systems typically include the following core feature functions: phrase- and word-level translation models; target n-gram language model; distortion penalty; plus additional components that model specific translation aspects. Assuming a one-to-one correspondence between source and target phrases, reordering in PSMT means searching through a set of permutations of the source phrases. Thus, two sub-problems arise: defining the set of permutations allowed during decoding (reordering constraints) and scoring the allowed permutations (reordering models or feature functions). We will now discuss each of them in detail.
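Before doing so, it is useful to recall how these feature functions are combined into a hypothesis score. The following minimal sketch illustrates the log-linear combination referred to above as Equation (1); all feature names, values, and weights are purely illustrative and do not come from a real system.

```python
import math

def hypothesis_score(feature_values, weights):
    """Log-linear model score: a weighted sum of (log) feature function
    values. In a real PSMT decoder the weights are tuned on a development
    set, e.g., with minimum error rate training."""
    return sum(weights[name] * value for name, value in feature_values.items())

# Illustrative values for a single (partial) hypothesis: log-probabilities
# for the translation and language models, negative counts for the penalties.
features = {
    "phrase_translation": math.log(0.20),
    "word_translation": math.log(0.15),
    "language_model": math.log(0.01),
    "distortion_penalty": -6.0,   # total jump length incurred so far
    "word_penalty": -5.0,         # number of target words produced
}
weights = {
    "phrase_translation": 0.2,
    "word_translation": 0.1,
    "language_model": 0.5,
    "distortion_penalty": 0.3,
    "word_penalty": -0.1,
}
print(hypothesis_score(features, weights))
```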
2.1.1 PSMT Reordering Constraints
Because searching over the space of all possible translations is NP-hard (Knight 1999), SMT decoders use heuristic search algorithms to only explore a promising subset of the search space. In particular, limiting the set of explorable input permutations is an essential way to reduce decoding complexity.
A different kind of reordering constraint can be derived from the Inversion Transduction Grammars (ITGs) (Wu 1995, 1997). ITG constraints only admit permutations that are generated by recursively swapping pairs of adjacent blocks of words.4 In particular, ITG constraints disallow reorderings that generalize the patterns (3 1 4 2) and (2 4 1 3), which are rarely attested in natural languages (Wu 1997).5 Enforcing ITG constraints in left-to-right PSMT decoding requires the use of a shift-reduce permutation parser (Zens 2008; Feng et al. 2010). Alternatively, a relaxed version of the ITG constraints (i.e., Baxter permutations) may be enforced by simply inspecting the set of covered source positions, as proposed by Zens et al. (2004) and Zens (2008). Interestingly, Cherry, Moore, and Quirk (2012) found no consistent benefit from applying either exact or approximate ITG-constraints to a PSMT system that already included a hierarchical phrase orientation model6 (Galley and Manning 2008).
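The forbidden patterns mentioned above can be checked directly: a permutation is ITG-representable exactly when it contains neither (2 4 1 3) nor (3 1 4 2) as a relative-order pattern. The following sketch is a brute-force implementation of that test, adequate for sentence-length permutations although asymptotically inefficient:

```python
from itertools import combinations

def is_itg_permutation(perm):
    """True iff perm avoids the patterns (2 4 1 3) and (3 1 4 2), i.e., iff
    it can be generated by recursively combining adjacent blocks either in
    monotone or in inverted order. Brute-force O(n^4) check."""
    forbidden = {(2, 4, 1, 3), (3, 1, 4, 2)}
    for idxs in combinations(range(len(perm)), 4):
        values = [perm[i] for i in idxs]
        ranks = tuple(sorted(values).index(v) + 1 for v in values)
        if ranks in forbidden:
            return False
    return True

print(is_itg_permutation([2, 4, 1, 3]))   # False: not reachable under ITG constraints
print(is_itg_permutation([3, 1, 2, 4]))   # True: e.g., swap [3] with [1 2], keep [4] in place
```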
The reordering constraints presented so far are sensitive neither to the words being translated nor to their context. This results in a very coarse definition of the reordering search space, which is problematic in language pairs with different syntactic structures. To address this problem, Yahyaei and Monz (2009) propose decoupling local and global reordering by segmenting the input sentence into chunks that can be permuted arbitrarily, but each of which is translated monotonically. In related work, Yahyaei and Monz (2010) present a technique to dynamically set the distortion limit (DL) during decoding: They train a discriminative classifier to predict the most probable jump length after each input word, and use the predicted value as the DL after that position. Unfortunately, this method appears to generate inconsistent constraints leading to decoding dead-ends. Bisazza and Federico (2013a) further develop this idea so that only long reorderings predicted by a specific reordering model are explored by the decoder. This form of early reordering pruning enables the PSMT system to capture long-range reordering without hurting efficiency and is not affected by the constraint inconsistency problem.
When available, a parse tree of the input may also be used to constrain PSMT reordering, following the principle of syntactic cohesion (Fox 2002). Concretely, the dependency cohesion constraint (Cherry 2008) states that when part of a source subtree is translated, all words under the same subtree must be covered before moving to words outside of it. Integrated in phrase-based decoding as soft constraints (i.e., by using the number of violations as a feature function), dependency cohesion and its variants (Cherry 2008; Bach, Vogel, and Cherry 2009) were shown to significantly improve translation quality. In related work, Feng, Sun, and Ney (2012) derive similar cohesion constraints from the semantic role labeling structure of the input sentence. The divide-and-translate approach of Sudoh et al. (2010) uses source-side parse trees to segment complex sentences into simple clauses, which are replaced by placeholder symbols and translated independently. The target sentence is then reconstructed around the placeholders, with the aim of simplifying long-range clause-level reordering.
2.1.2 PSMT Reordering Feature Functions
Target language modeling is the primary way to reward promising reorderings during translation. This happens indirectly, through the scoring of target word n-grams, which are generated by translating the source positions in different orders. However, the fixed-size context of language models used in SMT (typically four or five words) makes them largely insensitive to global reordering phenomena. In recent years, a growing interest in language pairs with very different word orders, such as Arabic–English and Chinese–English, has favored the development of new techniques to explicitly model the reordering problem. Given a source sentence, the search for its optimal reordering is generally decomposed into a sequence of local reordering decisions, as is done for the whole translation process. Thus, the basic reordering step corresponds to the relative positioning of the word or phrase being translated, with respect to the word or phrase that was previously translated.
The simplest example of a reordering feature function is the distortion cost, or distortion penalty, which penalizes a hypothesis in proportion to the length of the jumps performed between consecutively translated source phrases and, by convention, assigns zero cost to hypotheses that preserve the order of the source phrases (monotonic translations). During decoding, the basic implementation of the distortion cost penalizes long jumps only when they are performed, leading to the proliferation of hypotheses with gaps (i.e., uncovered input positions). This issue can be addressed by incorporating into the distortion cost an estimate of the cost yet to be incurred (Moore and Quirk 2007).
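As a concrete illustration, the distortion cost of a hypothesis can be computed from the source spans of its phrases in translation order; the spans below are illustrative:

```python
def distortion_cost(source_spans):
    """Total distortion of a hypothesis, given the source-side spans
    (start, end) of its phrases in the order they are translated.
    Monotone translation incurs zero cost; every jump is penalized in
    proportion to its length, regardless of direction."""
    cost = 0
    prev_end = -1  # last covered source position; -1 before translation starts
    for start, end in source_spans:
        cost += abs(start - prev_end - 1)
        prev_end = end
    return cost

print(distortion_cost([(0, 1), (2, 4), (5, 6)]))   # 0: monotone order
print(distortion_cost([(5, 5), (0, 4), (6, 6)]))   # 12: translate position 5 first, then jump back
```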
State-of-the-art systems use the distortion cost in combination with more sophisticated reordering models that take into account the identity of the reordered phrases and, optionally, various kinds of contextual information. A representative selection of such models is summarized in Table 1. To ease the presentation, we have divided the models into four groups according to their problem formulation: phrase orientation models, jump models, source decoding sequence models, and operation sequence models.
Table 1
A representative selection of reordering models used in PSMT, grouped by problem formulation (gener. = generative, discr. = discriminative). The examples refer to the running example sentence.

Phrase orientation models (POM). Example: P(orient=discontinuous-left | next-phrase-pair=[jdd]-[renewed])

| Model | References | Model type | Reordering step classification | Features |
|---|---|---|---|---|
| lexicalized (hierarchical) phrase orientation model | Tillmann 2004; Koehn et al. 2005; Nagata et al. 2006; Galley & Manning 2008 | gener. | monotonic, swap, discontinuous (left or right) | source/target phrases |
| phrase orientation maxent classifier | Zens & Ney 2006 | discr. | | source/target words or word clusters |
| sparse phrase orientation features | Cherry 2013 | discr. | | |

Jump models (JM). Example: P(jump=−5 | from=AlsAds, to=jdd)

| Model | References | Model type | Reordering step classification | Features |
|---|---|---|---|---|
| inbound/outbound/pairwise lexicalized distortion | Al-Onaizan & Papineni 2006 | gener. | jump length | source words |
| inbound/outbound length-bin classifier | Green et al. 2010 | discr. | jump length (9 length bins) | source words, POS, position; sent. length |

Source decoding sequence models (SDSM). Example: P(next-word=jdd | prev-translated-words=AlEahil Almlk mHmd AlsAds)

| Model | References | Model type | Reordering step classification | Features |
|---|---|---|---|---|
| reordered source n-gram | Feng et al. 2010 | gener. | — | source words (9-gram context) |
| source word-after-word | Bisazza & Federico 2013; Goto et al. 2013 | discr. | — | source words, POS; source context's words and POS |

Operation sequence models (OSM). Example: P(next-operation=generate[jdd,renewed] | prev-operations=generate[AlsAds,VI] jumpBack[1])

| Model | References | Model type | Reordering step classification | Features |
|---|---|---|---|---|
| translation/reordering operation n-gram | Durrani et al. 2011; Durrani et al. 2013; Durrani et al. 2014 | gener. | insertGap, jumpBack, jumpForward | source/target words, POS or word clusters; prev. n−1 operations |
Phrase orientation models (POM) (Tillmann 2004; Koehn et al. 2005; Nagata et al. 2006; Zens and Ney 2006; Li et al. 2014), simply known as lexicalized reordering models, predict whether the next translated source span should be immediately to the right (monotone), immediately to the left (swap), or anywhere else (discontinuous) relative to the last translated one.7 For example, in Figure 2, the phrase pair [Almlk mHmd AlsAds]-[King Mohamed VI] has monotone orientation whereas [jdd]-[renewed] has discontinuous left orientation with respect to the previously translated phrase. Because of their simple reordering step classification, POM can be conditioned on very fine-grained information, such as the whole phrase pair, without suffering too much from data sparseness. However, because POM ignore the distance between consecutively translated phrases, they cannot properly handle long-range reordering phenomena and are typically used with a low distortion limit.
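In its basic form, the orientation of a newly translated phrase can be determined from the source spans of the current and previously translated phrases, as in the following sketch (the spans are illustrative stand-ins for the phrases of Figure 2):

```python
def orientation(prev_span, curr_span):
    """Orientation of the current phrase relative to the previously
    translated one, given their source-side spans (start, end)."""
    prev_start, prev_end = prev_span
    curr_start, curr_end = curr_span
    if curr_start == prev_end + 1:
        return "monotone"   # immediately to the right of the previous phrase
    if curr_end == prev_start - 1:
        return "swap"       # immediately to the left of the previous phrase
    return "discontinuous-left" if curr_end < prev_start else "discontinuous-right"

print(orientation((1, 2), (3, 5)))   # monotone
print(orientation((3, 5), (0, 0)))   # discontinuous-left
```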
Jump models (JM) (Al-Onaizan and Papineni 2006; Green, Galley, and Manning 2010) predict the direction and length of the jump that is performed between consecutively translated words or phrases, with the goal of better handling long-range reordering. Because of data sparseness, JM work best when trained in a discriminative fashion using a variety of binary features (such as the last translated word, its POS tag, and relative position in the sentence) and when length bins are used instead of the exact jump length (Green, Galley, and Manning 2010). A drawback of JM is that they typically over-penalize long jumps, because long jumps are observed more rarely than short ones.
Source decoding sequence models (SDSM) address this issue by directly modeling the reordered sequence of input words, as opposed to the reordering operations that generated it. This in turn can be done in several ways, such as: training n-gram models on target-like reordered source sentences and using them to score the sequence of input words visited by the decoder (Feng, Mauser, and Ney 2010); tagging the whole input sentence with symbols denoting how each word should be reordered with respect to its left and right context, then rewarding the decoding paths that most agree with the tag sequence (Feng, Peter, and Ney 2013); and finally, predicting which input position is likely to be translated right after a given input position by means of a maximum entropy model using word and context features (Bisazza and Federico 2013a; Goto et al. 2013).
Operation sequence models (OSM) (Durrani, Schmid, and Fraser 2011) are n-gram models that include lexical translation operations and reordering operations (insertGap, jumpBack, or jumpForward ) in a single generative story, thereby combining elements from the previous three model families. An operation sequence example is provided in the lower part of Table 1. OSM are closely related to n-gram based SMT models (see next section) but have been successfully applied as feature functions to PSMT (Durrani et al. 2013). To overcome data sparseness, OSM can be successfully applied to POS-tags and unsupervised word clusters (Durrani et al. 2014).
SDSM and OSM have proven particularly effective for language pairs where high distortion limits are required to capture long-range reordering phenomena (Durrani, Schmid, and Fraser 2011; Bisazza and Federico 2013b; Goto et al. 2013). Nevertheless, POM remain the most widely used type of phrase-based reordering model and are considered a necessary component of PSMT baselines in any language pair. In particular, two variants of POM deserve further attention because of their notable effect on translation quality: hierarchical POM and sparse phrase orientation features.
Hierarchical phrase orientation models, or simply hierarchical reordering models (HRM) (Galley and Manning 2008), improve the way in which the orientation of a new phrase pair is determined: Already translated adjacent blocks are merged together to form longer phrases around the current one. For instance, in Figure 3, HRM merges phrases 1 and 2 into a large phrase pair [AlEAhl … AlsAds]-[The … VI] and consequently assigns a swap, rather than discontinuous, orientation to [jdd]-[renewed]. As a result, orientation assignments become more consistent across hypotheses with different phrase segmentations.
Rather than training a reordering model by relative frequency or maximum entropy and using its score as one dense feature function, Cherry (2013) introduces sparse phrase orientation features that are directly added to the model score during decoding (cf. Equation (1)) and optimized jointly with all other SMT feature weights. Effective sparse reordering features can be obtained by simply coupling a phrase pair's orientation with the first or last word (or word class) of its source and target side (Cherry 2013), or even with the whole phrase pair identity (Auli, Galley, and Gao 2014).
2.2 n-gram Based SMT
n-gram based SMT (Casacuberta and Vidal 2004; Mariño et al. 2006) is a string-based alternative to PSMT. In this framework, smoothed n-gram models are learned over sequences of minimal translation units (called tuples), which, like phrase pairs, are pairs of word sequences extracted from word-aligned parallel sentences. Tuples, however, are typically shorter than phrase pairs and are extracted from a unique, monotonic segmentation of the sentence pair. Thus, the problem of spurious phrase segmentation is avoided but non-local reordering becomes an issue. For instance, in Figure 2, a monotonic phrase segmentation could be achieved only by treating the large block [jdd … AlsAds]-[The … renewed] as a single tuple. Reordering is then addressed by “tuple unfolding” (Crego, Mariño, and de Gispert 2005): that is, during training the source words of each translation unit are rearranged in a target-like order so that more, shorter tuples can be extracted. At test time, input sentences have to be pre-ordered for translation. To this end, Crego and Mariño (2006) propose to precompute a number of likely permutations of the input using POS-based rewrite rules learned during tuple unfolding. The reorderings thus obtained are used to extend the search graph of a monotonic decoder.8 Reordering is often considered a shortcoming of n-gram–based SMT, as reordering decisions are largely decoupled from decoding and mostly based on source-side information.
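The core of tuple unfolding, namely rearranging the source words of a sentence pair into target-like order on the basis of the word alignment, can be sketched as follows; the tokens and alignment points are illustrative and loosely based on the running example:

```python
def unfold_source(source_tokens, alignment):
    """Rearrange source words into target-like order: each source position
    is keyed by the smallest target position it aligns to, and unaligned
    words inherit the key of the preceding source word. A stable sort
    keeps ties in their original source order."""
    keys, last_key = [], 0
    for i in range(len(source_tokens)):
        targets = [t for (s, t) in alignment if s == i]
        last_key = min(targets) if targets else last_key
        keys.append(last_key)
    order = sorted(range(len(source_tokens)), key=lambda i: keys[i])
    return [source_tokens[i] for i in order]

src = ["jdd", "AlEAhl", "Almgrby", "dEm", "-h"]   # verb-initial source
ali = [(0, 3), (1, 2), (2, 1), (3, 5), (4, 4)]    # (source_idx, target_idx)
print(unfold_source(src, ali))   # ['Almgrby', 'AlEAhl', 'jdd', '-h', 'dEm']
```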
2.3 Tree-Based SMT
The SMT frameworks discussed so far learn direct correspondences between source and target words or phrases, treating reordering as a sequential process. This flat representation is fairly successful for some language pairs, although in others, reordering is more naturally described as a hierarchical process where small, locally reordered blocks become the elements of recursively larger reordered blocks. Concretely, in our running example (Figure 2), a hierarchical or tree-based approach would make it possible to first translate and reorder small blocks such as [AlEahl Almgrby] and [Almlk mHmd AlsAds], then merge them to compose a larger block that gets reordered as a whole with respect to the verb jdd, and so forth. The degree of generalization at each level would then depend on how blocks are represented (e.g., by their lexical content, by a tag denoting the block's syntactic category, or by a generic symbol).
Tree-based approaches are largely inspired by syntactic parsing, but not all in the same way: Some model translation as the transformation of trees produced by monolingual parsers trained on syntactic treebanks (Section 2.3.1), whereas others extract a bilingual translation grammar directly from word-aligned parallel text without using any syntactic information (Section 2.3.2). Non-syntactic bilingual translation grammars may still be enriched with syntactic information—for instance, in the form of soft constraints (Section 2.3.3).
All tree-based frameworks crucially differ from PSMT and other string-based frameworks with respect to reordering: Whereas PSMT considers all input permutations that do not violate general reordering constraints and then scores them with separate reordering models, tree-based systems model reordering jointly with translation and, during decoding, only (or mostly) explore input permutations that are licensed by the learned translation model.
Most modern tree-based approaches fall under the general formulation of SMT, which scores translation hypotheses by a linear combination of feature functions (see Equation (1)), with a translation model (or grammar) and a target language model as core features. Tree-based decoding is usually performed by a chart-parsing algorithm with beam search and integrated target language model. Hence, the target sentence is not produced from left to right as in string-based SMT, but bottom–up according to a tree derivation order.
2.3.1 Syntax-Based SMT
An important motivation for using syntax in SMT is that reordering among natural languages very often involves the permutation of whole syntactic constituents (e.g., Fox 2002). For instance, in our running example (Figure 2), knowing the span of the Arabic subject would be enough to predict the reordering of the verb for translation into English.
Syntax-based SMT encompasses a variety of frameworks that use syntactic annotation either on the source or on the target language, or both. So-called tree-to-string methods (Huang, Knight, and Joshi 2006; Liu, Liu, and Lin 2006) use a given input sentence parse tree to restrict the application of translation/reordering rules to word spans that coincide with syntactic constituents of specific categories. For instance, the swap of Alr}ys Alfrnsy may only be dictated by a rule applying to noun phrases composed of a noun and an adjective. On the other hand, string-to-tree methods (Yamada and Knight 2002; Galley et al. 2004; Marcu et al. 2006; Shen, Xu, and Weischedel 2010) use syntax as a way to restrict translation hypotheses to well-formed target language sentences—ruling out, for instance, a translation that fails to reorder the translated verb renewed with respect to its subject. Using syntax on both source and target sides (tree-to-tree) (Imamura, Okuma, and Sumita 2005; Ding and Palmer 2005; Smith and Eisner 2006; Watanabe, Tsukada, and Isozaki 2006; Zhang et al. 2008) has proven rather difficult in practice due to the complexity of aligning potentially very different tree topologies and to the large size of the resulting translation grammars. Moreover, the need for high-quality parsers for both languages seriously limits the applicability of this approach.
Syntax-based SMT approaches also differ in the formalism they use to represent the trees. Those based on phrase structure (constituency) grammars typically comply with the principle that each translation/reordering rule should match a complete constituent, whereas those based on dependency grammars opt for a more flexible use of structure. For example, in string-to-dependency SMT (Shen, Xu, and Weischedel 2010) rules can correspond to partial constituents but must be either a single rooted tree, with each child being a complete sub-tree, or a sequence of siblings, each being a complete sub-tree. Partial dependency rules are then combined during decoding, which means that not all reordering decisions are governed by the translation model.
An even more flexible use of structure is advocated by the treelet-based SMT framework (Quirk, Menezes, and Cherry 2005), where translation rules can correspond to any connected subgraph of the dependency tree (i.e., treelet). As illustrated by Figure 4, treelet pairs are extracted from pairs of source dependency parse tree and target-side projected trees. Treelets can be seen as phrases that are not limited to sets of adjacent words, but rather to sets of words that are connected by dependency relations, which in turn make it possible to learn non-local reordering patterns. As reordering decisions are only partially governed by the translation model, treelet-based SMT benefits from additional model components specifically dedicated to reordering. For example, in Figure 4, treelet pair (3) determines the swapping of jdd and AlEAhl but does not specify the ordering of dEm, which is also a child of jdd. Hence, during decoding, all possible reorderings of the unmatched children are considered and scored by a separate discriminative model, predicting the position of a child node (or modifier m) relative to its head h, given lexical, POS, and positional features of m and h. Reordering modeling is thus largely decoupled from lexical selection, which makes the model very flexible but results in a very large search space and high risk of search errors. To address this issue, Menezes and Quirk (2007) introduce another mechanism to complement treelet reordering: namely, dependency order templates. An order template is an unlexicalized rule specifying the reordering of a node and all its children based on their POS tags. For instance, in Figure 4, treelet pair (3) may be combined with template (a) to specify the order of the child dEm. For each new test sentence, matching treelet pairs and order templates are combined to construct lexicalized translation rules for that sentence and, finally, decoding is performed with a chart parsing algorithm.
We will now discuss SMT frameworks that model translation as a process of parallel parsing of the source and target language via a synchronous grammar.
2.3.2 Tree-Based SMT Without Syntax
The idea of extracting bilingual translation (i.e., synchronous) grammars directly from word-aligned parallel data originates in early work on ITG by Wu (1996, 1997).
In a more mature approach, hierarchical phrase-based SMT (HSMT) (Chiang 2005), the translation model is a probabilistic synchronous context-free grammar (SCFG) whose rules can correspond to arbitrary (i.e., nonsyntactically motivated) phrases labeled by only two generic non-terminal symbols (X or S). As shown in Figure 5, HSMT translation rules can either include a mix of terminals and non-terminals capturing reordering patterns and discontinuities (rules 1–4), or only terminals (rules 7–10) basically corresponding to phrase pairs in string-based PSMT. Finally, the so-called glue rules (5–6) are always added to the grammar to combine translated blocks in a monotone fashion regardless of their content. As in PSMT, extracted translation rules may not exceed a certain length and rule scores are obtained using maximum likelihood estimation. Crucially, swapping adjacent phrases with no lexical evidence (X → X1X2||X2X1) is not allowed by standard HSMT grammars; therefore reordering can only be triggered by at least partially lexicalized translation rules. This is a major difference with respect to most syntax-based approaches, where reordering can be captured by rules containing only labeled non-terminals (e.g., s → np vp || vp np). This means that, for instance, the reordering pattern learned by our example HSMT grammar (Figure 5, rule 1) may only be used to reorder the specific verb form jdd (renewed) in subsequent test sentences. Thus, HSMT is likely to work better for languages where the syntactic role of phrases is mostly expressed by separate function words (e.g., Chinese) than for languages where this information is largely conveyed by word inflection (e.g., Russian).
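As a minimal sketch of how such rules encode reordering jointly with translation, consider the following; the rule definitions are illustrative and do not reproduce the exact rules of Figure 5:

```python
# A hierarchical rule pairs a source and a target right-hand side; integers
# are linked non-terminal slots, strings are terminal (lexicalized) items.
LEXICAL_RULE = (["jdd", 0, 1], [0, "renewed", 1])   # X -> <jdd X1 X2, X1 renewed X2>
GLUE_RULE = ([0, 1], [0, 1])                        # monotone glue: combine blocks in order

def apply_rule(rule, children):
    """Build the target side of a rule by substituting the already
    translated child strings (one list of target words per slot)."""
    _, target_rhs = rule
    output = []
    for symbol in target_rhs:
        output.extend(children[symbol] if isinstance(symbol, int) else [symbol])
    return output

subject = ["The", "Moroccan", "monarch"]   # translation of the subject span (slot 0)
obj = ["his", "support"]                   # translation of the object span (slot 1)
print(" ".join(apply_rule(LEXICAL_RULE, [subject, obj])))
# The Moroccan monarch renewed his support
print(" ".join(apply_rule(GLUE_RULE, [["already", "translated"], ["blocks"]])))
# already translated blocks
```

Note that the verb reordering is produced only because the rule is lexically anchored on jdd, in line with the observation above that HSMT reordering must be triggered by (at least partially) lexicalized rules.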
Although hierarchical models are inherently capable of dealing with complex and recursive reordering patterns, in practice many translation rules are noisy or based on limited context. To limit search complexity, a constraint is imposed on the maximum number of source words that may be covered by a non-terminal symbol during decoding (span constraint). This parameter is typically set to 10 or 15 words, as wider spans result in prohibitively slow decoding and lower translation quality. For these reasons, a number of extensions to the original HSMT framework have been proposed with the specific goal of better handling complex reordering phenomena.
Shallow-n grammars (de Gispert et al. 2010) can be used to refine the reordering space of HSMT according to the reordering characteristics of a specific language pair. For instance, as shown in Figure 6, an Arabic–English HSMT grammar is extended with an additional non-terminal symbol X0 that can only generate fully lexicalized phrases, thereby disallowing recursive nesting of hierarchical rules (shallow-1 grammar). To account for the movement of large word blocks, other new non-terminals Mk allow for the monotonic generation of k non-terminals X0. While defining a much smaller search space than the original HSMT grammar, the resulting shallow grammar can capture the long-range reordering of our running example even in the likely absence of a rule covering the whole subject span (i.e., in rule 2 in Figure 5).
In related work specifically addressing the issue of long-range reordering, Braune, Gojun, and Fraser (2012) propose relaxing the span constraint only for specific types of hierarchical rules that are more likely to capture long-range reordering patterns in German–English. For instance, rules whose source side starts with at least one terminal followed by one non-terminal and ends with at least one terminal (t+ X t+) can capture the pattern ‘finite-auxiliary-verb X participle’ (e.g., ist X gestiegen/has increased X) with very wide X spans.
Mylonakis and Sima'an (2010) separate the modeling of local reordering (captured by fully lexicalized phrase-pair emission rules) from the modeling of higher-order recursive reordering (captured by ITG-style non-lexicalized binary rules). Instead of a single non-terminal X, three different reordering-based labels are used, according to the reordering pattern in which they participate: X for monotonic rules; XSL and XSR for the first and second symbol, respectively, of swapping rules. Thus reordering decisions are conditioned on the phrase pair's content, rather than its lexical context as in HSMT. More fine-grained non-terminals are introduced by Maillette de Buy Wenniger and Sima'an (2014) to also capture the relation of a phrase pair's reordering with respect to the parent phrase that contains it.
Rather than relabeling non-terminals, other work incorporates reordering-specific models as additional feature functions. He, Meng, and Yu (2010) add to their HSMT grammar the generic phrase swapping rule (X → X1X2||X2X1) and use a maximum-entropy classifier to predict whether two neighboring phrases should be swapped or not during decoding. Rather than conditioning the decision on the whole phrase pair, the classifier uses features extracted from it, such as first and last word (or POS tag) of the source and target side. A similar model was first developed by Xiong, Liu, and Lin (2006) for simpler phrase translation models (i.e., without discontinuities) based on ITG. Li, Liu, and Sun (2013) use recursive autoencoders (Socher et al. 2011) to assign vector representations to the neighboring phrases given as input to the ITG classifier, thereby avoiding manual feature engineering but affecting hypothesis recombination and decoding speed. Nguyen and Vogel (2013) and Huck et al. (2013) successfully integrate the distortion cost feature function and phrase orientation models initially designed for string-based PSMT into a chart-based HSMT decoder.
Finally, Setiawan, Kan, and Li (2007) observe that, in languages like Chinese and English, function words provide important clues on the grammatical relationships among phrases. Consequently, they introduce a SCFG where function words (approximated by high-frequency words) are the only lexicalized non-terminals guiding phrase reordering. Based on the same intuition, Setiawan et al. (2009) augment a HSMT system with a function-word ordering model that predicts, for any pair of translation rules, which one should dominate the other in the hierarchical structure, based on the function words that they contain.9
2.3.3 Tree-Based SMT with Soft Syntactic Constraints
We have discussed SMT frameworks where the translation model is fully based on the syntactic parse tree of the source or target sentence (Section 2.3.1) or where syntax is not used at all (Section 2.3.2). A third line of work bridges between these two by exploiting syntactic information in the form of soft constraints while operating with a synchronous translation grammar extracted from non-parsed parallel data.
Chiang (2005) first experimented with a feature function rewarding translation rules applied to full syntactic constituents (constituent feature). Although this initial attempt did not appear to improve translation quality, Marton and Resnik (2008) further elaborated the idea and proposed a series of finer-grained features distinguishing among constituent types (vp, np, etc.), eventually leading to better performance. Gao, Koehn, and Birch (2011) extract two reordering-related feature functions from source dependency parse trees: (i) The dependency orientation model predicts whether the relative order of a source word and its head should be reversed during translation. This is trained as a maximum-entropy classifier using the words and their dependency relation type as features. (ii) The dependency cohesion penalty fires whenever a word and its head are translated separately (i.e., by different translation rules), thereby measuring derivation well-formedness. Because long-range reordering tends to happen closer to the root and local reordering closer to the leaves, a distinction is made between words occurring at different depths of the dependency tree, leading to a number of sub-features. In this way, the tuning process can decide how important or reliable feature scores coming from different levels of the parse tree are. Huang, Devlin, and Zbib (2013) worked instead with constituency parses and trained a classifier to predict whether the order of any two sibling constituents in the input tree should be reversed or maintained during translation. The classifier is trained by maximum entropy using a number of syntactic features, and used during decoding at the word level: that is, each pair of input words inherits the orientation probabilities of the constituents that cover them, respectively.
Syntactic annotation has also been used to refine non-terminal SCFG labels, potentially leading to better reordering choices. In Zollmann and Venugopal (2006) and Mylonakis and Sima'an (2011), labels indicate whether a phrase corresponds to a syntactic constituent or to part of it, as well as the constituent type, relative to a target or source parse tree, respectively. Moreover, Mylonakis and Sima'an treat the phrase-pair category as a latent variable and let their system learn reordering distributions over multiple labels per span (generic X or source-syntax based like np, vbz+dt, etc.). Li et al. (2012) use source dependency annotation to refine non-terminal symbols with syntactic head information. More specifically, given a hierarchical phrase, its type is obtained by concatenating the POS tags of the exposed heads it contains on the source side, where an exposed head is a word dominated by a word outside the phrase. Like He, Meng, and Yu (2010), Li et al. (2012) also allow adjacent phrases to swap, but instead of introducing a separate orientation model, they rely on rule translation probabilities based on the refined non-terminals to guide reordering.
2.4 Word Reordering as Pre- (or Post-) Processing
Given the complexity of solving word reordering during the decoding process, a productive line of research has focused on decoupling reordering decisions from translation decisions. These approaches aim at arranging words in a target-like order either on the input, before translating, or on the output, after translating. Thus, word reordering is solved as pre- or post-processing (i.e., pre-ordering or post-ordering) in a monolingual fashion and with unconstrained access to the whole sentence context. Figure 7 (Sudoh et al. 2011) illustrates the workflows of pre- and post-ordering approaches as opposed to standard SMT.
2.4.1 Main Pre-ordering Strategies
A large number of pre-ordering strategies have been proposed. As a first classification, we divide them into deterministic, non-deterministic, and hybrid. Deterministic pre-ordering aims at finding a single optimal permutation of the input sentence, which is then translated monotonically or with a low distortion limit (Nießen and Ney 2001; Xia and McCord 2004; Collins, Koehn, and Kucerova 2005; Popović and Ney 2006; Costa-jussà and Fonollosa 2006; Wang, Collins, and Koehn 2007; Habash 2007; Li et al. 2007; Tromble and Eisner 2009; Xu et al. 2009; Genzel 2010; Isozaki et al. 2010b; Yeniterzi and Oflazer 2010; Khalilov and Fonollosa 2011; Khalilov and Sima'an 2011; Visweswariah et al. 2011; Gojun and Fraser 2012; Yang et al. 2012; Lerner and Petrov 2013; Jehl et al. 2014).10 Non-deterministic pre-ordering encodes multiple alternative reorderings into a word lattice and lets a monotonic (usually n-gram–based) decoder choose the best path according to its models (Zens, Och, and Ney 2002; Kanthak et al. 2005; Crego and Mariño 2006; Zhang, Zens, and Ney 2007; Rottmann and Vogel 2007; Crego and Habash 2008; Elming and Habash 2009; Niehues and Kolss 2009). A hybrid approach is adopted by Bisazza and Federico (2010) and Andreas, Habash, and Rambow (2011): Rules are used to generate multiple likely pre-orderings, but only for specific language phenomena that are responsible for difficult (long-range) reordering patterns. The sparse reordering lattices produced by these techniques are then translated by a decoder performing additional phrase-based reordering. In follow-up work, Bisazza and Federico (2012) introduce another way to encode multiple pre-orderings of the input: Instead of generating a word lattice, pre-computed permutations are represented by a modified distortion matrix so that lower distortion costs or “shortcuts” are permitted between selected pairs of input positions.
Pre-ordering methods can also be classified by the kind of pre-ordering rules that they apply: that is, manually written based on linguistic knowledge, or automatically learned from data. We now discuss each of them in detail.
2.4.2 Linguistic Knowledge–Based Pre-ordering
In these approaches, manually written rules determine the transformation of input syntax trees (Collins, Koehn, and Kucerova 2005; Wang, Collins, and Koehn 2007; Xu et al. 2009; Isozaki et al. 2010b; Yeniterzi and Oflazer 2010; Gojun and Fraser 2012; Andreas, Habash, and Rambow 2011) or the permutation of shallow syntactic chunks in a sentence (Hardmeier, Bisazza, and Federico 2010; Durgar El-Kahlout and Oflazer 2010; Bisazza, Pighin, and Federico 2012). In an early example of syntax-based pre-ordering, Collins, Koehn, and Kucerova (2005) propose a set of six rules aimed at arranging German sentences in English-like order. The rules address the position of verbs, verb particles, and negation particles, and they are applied to constituency parse trees. Following a similar approach, Gojun and Fraser (2012) develop a set of rules for the opposite translation direction (English-to-German). Xu et al. (2009) instead propose a simple set of dependency-based rules to pre-order English for translation into subject-object-verb (SOV) languages, which is shown to be effective for Korean, Japanese, Hindi, Urdu, and Turkish. Isozaki et al. (2010b) obtain even better results in an English-to-Japanese task using only one pre-ordering rule (i.e., head finalization) with a parser annotating syntactic heads.
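As an illustration of this family of rules, the following sketch applies a single head-finalization-style transformation to a dependency tree, placing every head after its dependents; it is a deliberately crude approximation that ignores the exceptions (punctuation, coordination, etc.) handled by actual rule sets such as that of Isozaki et al. (2010b):

```python
def head_finalize(tokens, heads):
    """Reorder a dependency-parsed sentence into head-final (SOV-like)
    order: every head is placed after all of its dependents, recursively.
    heads[i] is the index of token i's head, or -1 for the root."""
    children = {i: [] for i in range(len(tokens))}
    root = 0
    for i, h in enumerate(heads):
        if h == -1:
            root = i
        else:
            children[h].append(i)

    def order(node):
        out = []
        for child in children[node]:   # dependents keep their relative order
            out.extend(order(child))
        out.append(node)               # the head comes after all its dependents
        return out

    return [tokens[i] for i in order(root)]

toks = ["the", "monarch", "renewed", "his", "support"]
heads = [1, 2, -1, 4, 2]   # "renewed" is the root; "the"->"monarch", "his"->"support"
print(head_finalize(toks, heads))   # ['the', 'monarch', 'his', 'support', 'renewed']
```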
2.4.3 Data-Driven Pre-ordering
This kind of model is learned from sets of pairs (f, f′) where f is a source sentence and f′ is its reference permutation (pre-ordering) inferred from a reference translation e via a word-level alignment.11 These approaches typically require some form of linguistic annotation of the source language, such as syntactic parse trees (Xia and McCord 2004; Habash 2007; Li et al. 2007; Elming and Habash 2009; Genzel 2010; Khalilov and Fonollosa 2011; Khalilov and Sima'an 2011; Yang et al. 2012; Lerner and Petrov 2013; Jehl et al. 2014), shallow syntax chunks (Zhang, Zens, and Ney 2007; Crego and Habash 2008), or POS labels (Crego and Marino 2006; Rottmann and Vogel 2007; Niehues and Kolss 2009; Tromble and Eisner 2009; Visweswariah et al. 2011).
Among the first examples of data-driven tree-based pre-ordering, Xia and McCord (2004) propose a method to automatically learn reordering patterns from a dependency-parsed French–English bitext, using a number of heuristics. While source-side parses are required by their method, target-side parses are optionally used to provide additional constraints during rule extraction. Habash (2007) extracts pre-ordering rules from an Arabic–English parallel corpus dependency-parsed on the source side. In both these works, pre-ordering rules are applied in a deterministic way to preprocess both training and test data. Following a discriminative modeling approach, Li et al. (2007) train a maximum-entropy classifier to pre-order each node with at most three children in the source constituency parse, using a rich set of lexical and syntactic features. Lerner and Petrov (2013) extend this work to pre-order nodes with more children (up to seven on either side of the head) using a cascade of classifiers: first, decide the order of each child relative to the head, then decide the order of left children and that of the right children. As training separate classifiers for each number of children is prone to sparsity issues, Jehl et al. (2014) build a single logistic regression model to predict whether any two sibling nodes should be swapped or not. Then, for each node in the tree, they search for the best permutation of all its children given the pairwise scores produced by the model, using a depth-first procedure. Yang et al. (2012) treat the permutation of each node's children as a ranking problem and model it with ranking support vector machines. As an alternative to deterministic pre-ordering, they also propose using the predicted source permutation to generate soft constraints for the SMT decoder: that is, a penalty that fires whenever the decoder violates the predicted pre-ordering. A tighter integration between source pre-ordering and source-to-target translation is proposed by Dyer and Resnik (2010). In their approach, optimal source pre-orderings (f′) are treated as a latent variable in an end-to-end translation model and the parameters of the tree permutation model are learned directly from parallel data. At test time, alternative permutations of the input tree are encoded as a source reordering forest, which is then translated by a finite-state phrase-based translation model.
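The pairwise formulation can be made concrete with a small sketch: given a model that scores, for any two sibling nodes, the probability that one should precede the other in target-like order, the best permutation of a node's children can be searched for. The toy model and exhaustive search below are illustrative; Jehl et al. (2014) use a depth-first search rather than full enumeration.

```python
from itertools import permutations

def best_child_order(children, precede_prob):
    """Score every permutation of a node's children by the product of the
    pairwise precedence probabilities it satisfies, and return the best.
    precede_prob(a, b) is the model's probability that a should precede b
    in target-like order."""
    def score(order):
        p = 1.0
        for i, a in enumerate(order):
            for b in order[i + 1:]:
                p *= precede_prob(a, b)
        return p
    return max(permutations(children), key=score)

# Toy pairwise model that prefers the verb to follow its sibling arguments,
# as when pre-ordering an SVO language towards SOV order:
def toy_precede_prob(a, b):
    if b == "VERB":
        return 0.9
    if a == "VERB":
        return 0.1
    return 0.5

print(best_child_order(["VERB", "SUBJ", "OBJ"], toy_precede_prob))
# ('SUBJ', 'OBJ', 'VERB')
```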
Examples of pre-ordering based on shallow syntax include Zhang, Zens, and Ney (2007) and Crego and Habash (2008). In these approaches, automatically extracted chunk pre-ordering rules are used to generate a word reordering lattice of the input sentence, which is then translated by a monotonic phrase or n-gram–based decoder.
In Costa-jussà and Fonollosa (2006), pre-ordering is learned by training a monolingual n-gram based SMT system at the level of word clusters. In Tromble and Eisner (2009), pre-ordering is cast as a permutation problem and solved by a model that estimates the probability of reversing the relative order of any two input words based on their distance as well as lexicalized and POS-based features. In related work, Visweswariah et al. (2011) obtain smaller models and better results by learning the cost of a given input word appearing right after another, as opposed to anywhere after it (cf. source word-after-word reordering models described in Section 2.1).
2.4.4 On the Limitations of Syntax-based Pre-ordering
Syntax is often regarded as the most effective way to inform reordering in translation. However, empirical work has shown that the success of syntax-based pre-ordering methods can be severely limited by (i) the reachability of reference permutations when parse trees are used to constrain the pre-ordering model, and (ii) the quality of the parser used to learn and apply a pre-ordering model.
With regard to the constraints imposed by syntactic trees (i), Khalilov and Sima'an (2012) conducted oracle pre-ordering experiments across various language pairs. Their results consistently showed that final translation quality was by far the highest when no syntactic constraint was imposed on pre-ordering (oracle string). By contrast, allowing only permutations of siblings in the source parse tree (oracle tree) gave the smallest improvement. Only some of this loss could be recovered by applying specific modifications to the tree before extracting the optimal permutation (oracle modified tree).
With regard to parser accuracy (ii), Green, Sathi, and Manning (2009) analyzed two state-of-the-art parsers (Bikel 2004; Klein and Manning 2003) and reported F-measures of only 55% to 56% at the sub-task of detecting Arabic NP subjects in verb-initial clauses. Similar results were observed by Carpuat, Marton, and Habash (2010) using a dependency parser (Nivre, Hall, and Nilsson 2006). The same study also showed that the correct pre-ordering for Arabic–English translation could not be safely predicted even from gold standard parses, partly because of syntactic transformations occurring during translation. From a manual analysis of their English–German system, Gojun and Fraser (2012) reported that about 10% of the English clauses were wrongly pre-ordered, mostly from source sentence parsing errors. Howlett and Dras (2011) analyzed a reimplementation of the German pre-ordering method of Collins, Koehn, and Kucerova (2005) and found that results could be affected—or even cancelled out—by many factors including choice of training data, quality of the parser, as well as order of the target language model and type of reordering model used during decoding.
Rather than relying on supervised parsers trained on gold treebanks, specific parsers can be induced directly from non-annotated parallel text. In DeNero and Uszkoreit (2011), source sentence reorderings are first inferred from the word alignment with the target translation. Then, a binary parsing model is trained to maximize the likelihood of source trees that can generate such reorderings. Finally, a pre-ordering model is trained to permute each node in the tree. Evaluated on the English–Japanese language pair, this method almost equals the performance of a pre-ordering method based on a supervised parser. Neubig, Watanabe, and Mori (2012) follow a similar approach but build a single ITG-style pre-ordering model treating the parse tree as a latent variable. In the target self-training method of Katz-Brown et al. (2011), a baseline treebank-trained parser is used to produce n-best parses of a parallel corpus's source side. Then, the parses resulting in the most accurate pre-ordering after application of a dependency-based pre-ordering rule set (Xu et al. 2009) are added to the treebank data and used to re-train the baseline parser.
2.4.5 Post-ordering
A somewhat smaller line of research has instead treated reordering as post-processing. In Bangalore and Riccardi (2000) and Sudoh et al. (2011), target words are reordered after a monotonic translation process. Other work has focused on rescoring a set of n-best translation candidates produced by a regular PSMT decoder—for instance, by means of POS-based reordering templates (Chen, Cettolo, and Federico 2006) or word-class specific distortion models (Gupta, Cettolo, and Federico 2007). Chang and Toutanova (2007) use a dependency tree reordering model to generate n alternative orders for each 1-best sentence produced by the SMT system. Each set of n sentence reorderings is then reranked using a discriminative model trained on word bigram features and standard word reordering features (i.e., distance or orientation between consecutively translated input words).
Focusing on Japanese-to-English translation, Sudoh et al. (2011, 2013) proposed to “translate” foreign-order English into correct-order English using a monolingual phrase-based (Sudoh et al. 2011) or syntax-based (Sudoh et al. 2013) SMT system trained for this specific subtask.12 The underlying motivation is that, while English-to-Japanese is well handled by pre-ordering with the aforementioned head-finalization rule (Isozaki et al. 2010b), it is much harder to predict the English-like order of Japanese constituents for Japanese-to-English translation. Post-ordering addresses this issue by generating head-final English (HFE) sentences that are used to create a HFE-to-English parallel corpus. Goto, Utiyama, and Sumita (2012, 2013) solve post-ordering by parsing the HFE sentences into binary trees annotated with both syntactic labels and ITG-style monotone/swap labels. Hayashi et al. (2013) improve upon this work with a shift-reduce parser that efficiently integrates non-local features like n-grams of the post-ordered string.
Also related to post-ordering is the work on right-to-left or reverse decoding by Watanabe and Sumita (2002), Finch and Sumita (2009), and Freitag et al. (2013). Here, the target sentence is built up from the last word to the first, thereby altering language model context and reordering search space. Finch and Sumita obtain best results on a wide range of language pairs by combining the outputs of standard and reverse decoding systems.
3. Evaluating Word Reordering in Statistical Machine Translation
Because there are innumerable ways to correctly render a source sentence's meaning in the target language, automatically evaluating translation quality is a complex problem. Generally, SMT systems are judged by the extent to which their outputs resemble a set of reference translations produced by different human translators. Despite relying on a very rough approximation of language variability, this approach provides SMT researchers with fast automatic metrics that can guide, at least in part, their steps towards improvement. Besides, fast evaluation metrics are used to automatically tune SMT feature weights on a development corpus—for instance, by means of minimum error rate training procedures (Och 2003). The design of MT evaluation metrics correlating with human judgments is an active research area. Here we briefly survey two widely used general-purpose metrics, BLEU and METEOR, and then describe in more detail a number of reordering-specific metrics.
3.1 General-Purpose Metrics
BLEU (Papineni et al. 2001) is a lexical match–based score that represents the de facto standard for SMT evaluation. Here, proximity between candidate and reference translations is measured in terms of overlapping word n-grams, with n typically ranging from 1 to 4. For each order n, a modified precision score (see Papineni et al. [2001] for details) is computed over the whole test set; the scores are then combined by a geometric mean and multiplied by a brevity penalty that accounts for length mismatches between reference and candidate translations. Al-Onaizan and Papineni (2006) use BLEU to measure word order similarity between two languages, by computing the BLEU score between the original target sentence e and a source-like permutation of e. Using n-grams, though, is a limited solution to the problem of word ordering evaluation: first, because only exact surface matches are counted, without any consideration of morphology or synonymy; second, because the absolute positioning of words in the sentence is not captured, but only their proximity within a small context.
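To make the computation concrete, the following is a minimal sketch of corpus-level BLEU as described above, assuming a single reference per segment and pre-tokenized input; it omits the multi-reference clipping and the smoothing options found in full implementations.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All n-grams of a token list, as a Counter."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus-level BLEU with a single reference per sentence.

    Clipped n-gram counts are pooled over the whole test set, turned into
    modified precisions, combined by a geometric mean, and finally scaled
    by the brevity penalty.
    """
    clipped = [0] * max_n   # clipped n-gram matches per order
    total = [0] * max_n     # candidate n-gram counts per order
    hyp_len = ref_len = 0

    for hyp, ref in zip(hypotheses, references):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            hyp_counts, ref_counts = ngrams(hyp, n), ngrams(ref, n)
            total[n - 1] += sum(hyp_counts.values())
            # "modified" precision: each candidate n-gram is credited at
            # most as many times as it occurs in the reference (clipping)
            clipped[n - 1] += sum(min(c, ref_counts[g])
                                  for g, c in hyp_counts.items())

    if min(clipped) == 0 or min(total) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(c / t)
                            for c, t in zip(clipped, total)) / max_n)
    # brevity penalty: penalize candidates shorter than the references
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return bp * geo_mean

# a hypothesis identical to its reference scores 1.0
print(corpus_bleu([["the", "cat", "sat", "on", "the", "mat"]],
                  [["the", "cat", "sat", "on", "the", "mat"]]))
```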
The former issue is addressed to some extent by METEOR (Banerjee and Lavie 2005), which relies on language-specific stemmers and synonymy modules to go beyond surface-level similarity. As for word order, METEOR treats it separately with a fragmentation penalty proportional to the smallest number of chunks that the hypothesis must be divided into to align with the reference translation. This quantity can be interpreted as the number of times a human reader would have to “jump” between words to recover the correct translation order. However, no distinction is made between short- and long-range reordering errors.
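The fragmentation penalty can be illustrated with a small sketch that counts chunks over a given hypothesis–reference word alignment (here simply a list of matched position pairs); the matching itself, including stemming and synonymy, is assumed to be done beforehand, and the default weights follow those reported by Banerjee and Lavie (2005).

```python
def count_chunks(alignment):
    """Number of chunks in a hypothesis-reference word alignment.

    `alignment` is a list of (hyp_pos, ref_pos) pairs for the matched
    words. A chunk is a maximal run of matches that is contiguous and
    monotone on both sides; the fewer the chunks, the better the order.
    """
    chunks, prev = 0, None
    for h, r in sorted(alignment):              # scan in hypothesis order
        # start a new chunk unless this match extends the previous one
        if prev is None or (h, r) != (prev[0] + 1, prev[1] + 1):
            chunks += 1
        prev = (h, r)
    return chunks

def fragmentation_penalty(alignment, gamma=0.5, beta=3.0):
    """METEOR-style penalty, growing with the chunk-to-match ratio."""
    matches = len(alignment)
    if matches == 0:
        return 1.0
    return gamma * (count_chunks(alignment) / matches) ** beta

# a monotone alignment forms one chunk; a fully reversed one, many
print(fragmentation_penalty([(0, 0), (1, 1), (2, 2), (3, 3)]))  # 0.0078125
print(fragmentation_penalty([(0, 3), (1, 2), (2, 1), (3, 0)]))  # 0.5
```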
The weakness of BLEU and METEOR with respect to word order was demonstrated by Birch, Osborne, and Blunsom (2010) with a significant example that we report in Table 2. For simplicity, the example assumes that the reference order is monotonic and that hypotheses and reference translations contain exactly the same words. According to both metrics, hypothesis (a) is worse than (b), although in (a) only two adjacent words are swapped whereas in (b) the two halves of the sentence are swapped.
3.2 Reordering-Specific Metrics
To overcome the aforementioned limitations, Birch, Osborne, and Blunsom (2010) propose directly measuring the similarity between the reorderings needed to reach the reference translations from the source sentence and those applied by the decoder to produce the candidate translation. In practice, this is done by first converting word alignments to permutations, using simple heuristics to handle null and multiple alignments, and then computing a permutation distance between the resulting permutations. Among the various metrics proposed in the paper, the square root of the Kendall's tau (henceforth, KRS) was shown to be reliable and highly correlated with human judgments.
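A minimal sketch of this computation follows, assuming the two permutations over source positions have already been extracted from the word alignments; the normalization used here (fraction of concordant position pairs) is one common convention, and the final square root follows the recipe described above.

```python
import itertools
import math

def kendall_similarity(perm_sys, perm_ref):
    """Fraction of source-position pairs ordered the same way in the
    system permutation and in the reference permutation
    (1.0 = identical order, 0.0 = completely reversed)."""
    assert sorted(perm_sys) == sorted(perm_ref)
    rank_sys = {p: i for i, p in enumerate(perm_sys)}
    rank_ref = {p: i for i, p in enumerate(perm_ref)}
    pairs = list(itertools.combinations(perm_ref, 2))
    concordant = sum((rank_sys[a] < rank_sys[b]) == (rank_ref[a] < rank_ref[b])
                     for a, b in pairs)
    return concordant / len(pairs)

def kendall_reordering_score(perm_sys, perm_ref):
    """Square root of the Kendall's tau similarity, as advocated by
    Birch, Osborne, and Blunsom (2010)."""
    return math.sqrt(kendall_similarity(perm_sys, perm_ref))

# swapping two adjacent words costs far less than swapping two halves
print(kendall_reordering_score([1, 0, 2, 3, 4, 5], list(range(6))))  # ~0.97
print(kendall_reordering_score([3, 4, 5, 0, 1, 2], list(range(6))))  # ~0.63
```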
In a different approach called RIBES, Isozaki et al. (2010a) propose directly measuring the reordering occurring between the words of the hypothesis and those of the reference translation, thereby eliminating the need to word-align input and output sentences. A limitation of this approach is that only identical words contribute to the score. As a solution, the permutation distance is multiplied by a word precision score that penalizes hypotheses containing few reference words. As a side effect, however, the resulting metric assigns different scores to hypotheses that differ only in their lexical choice and not in their word reordering.
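The sketch below mirrors this idea: a rank correlation between the positions of matched words, scaled by a unigram precision term. For simplicity, only words occurring exactly once in both sentences are matched; the published metric uses additional context-based heuristics for ambiguous words, and its exponents are configurable (alpha = 0.25 is one commonly used value).

```python
import itertools

def ribes_like(hypothesis, reference, alpha=0.25):
    """RIBES-style score between two tokenized sentences.

    Matched words yield a list of reference positions in hypothesis
    order; the normalized Kendall's tau of that list is multiplied by a
    unigram precision term that penalizes hypotheses with few reference
    words.
    """
    ref_positions = [reference.index(w)
                     for w in hypothesis
                     if hypothesis.count(w) == 1 and reference.count(w) == 1]
    if len(ref_positions) < 2:
        return 0.0
    pairs = list(itertools.combinations(ref_positions, 2))
    nkt = sum(a < b for a, b in pairs) / len(pairs)   # normalized Kendall's tau
    precision = len(ref_positions) / len(hypothesis)
    return nkt * precision ** alpha

hyp = "john saw mary yesterday".split()
ref = "yesterday john saw mary".split()
print(ribes_like(hyp, ref))   # 0.5: all words match but one is displaced
```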
Talbot et al. (2011) introduce yet another reordering-specific metric, the fuzzy reordering score (FRS), which, like the KRS, is independent of lexical choice and measures the similarity between a sentence's reference reordering and the reordering produced by an SMT system (or by a pre-ordering technique). However, whereas Birch, Osborne, and Blunsom (2010) used Kendall's tau between the two sentence permutations, Talbot et al. count the smallest number of chunks that the hypothesis permutation must be divided into to align with the reference permutation. This corresponds precisely to the fragmentation penalty of METEOR, except that the alignment is performed between permutations rather than between translations. Like METEOR, FRS makes no distinction between short- and long-range reordering errors (cf. Table 2).
Stanojević and Sima'an (2014b) argue for a hierarchical treatment of reordering evaluation, where word sequences can be grouped recursively into larger blocks. To this end, they factorize the output-reference reordering into a Permutation Tree (Zhang and Gildea 2007), whose nodes represent atomic permutations. Given this factorization, the counts of monotone (1 2) versus other permutation nodes—(2 1), (3 1 4 2), and so on—are used as features in a linear model of translation quality (BEER) trained to correlate with the human ranking of a set of MT system outputs. With reference to Table 2, the permutation trees of both hypotheses (a) and (b) would contain only one swapping node, leading to the same reordering score. Stanojević and Sima'an (2014a) extend this work with a stand-alone reordering metric that considers all possible tree factorizations of a permutation (a permutation forest) and gives recursively less importance to lower nodes in the tree (i.e., those covering smaller spans). Hierarchical permutation metrics are shown to correlate better with human judgments than string-based permutation metrics like the Kendall's tau distance.
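As an illustration of the node-counting idea, the sketch below factorizes a permutation with a simple eager shift-reduce pass and tallies monotone versus non-monotone nodes. This is a simplified construction in the spirit of Zhang and Gildea (2007), not the canonical algorithm or the actual BEER feature extractor.

```python
def permutation_tree_node_counts(perm):
    """Count monotone (1 2 ... k) versus other nodes in a factorization
    of `perm` into a permutation tree.

    Stack items are (lo, hi) value ranges, kept in left-to-right string
    order. After pushing each element, we repeatedly reduce the smallest
    stack suffix whose values form a contiguous range; the reduced node
    is monotone if its children appear in increasing value order.
    """
    stack, monotone, other = [], 0, 0
    for value in perm:
        stack.append((value, value))
        reduced = True
        while reduced:
            reduced = False
            for k in range(2, len(stack) + 1):
                tail = stack[-k:]
                lo = min(t[0] for t in tail)
                hi = max(t[1] for t in tail)
                if hi - lo + 1 == sum(t[1] - t[0] + 1 for t in tail):
                    if all(tail[i][1] + 1 == tail[i + 1][0] for i in range(k - 1)):
                        monotone += 1          # children in increasing order
                    else:
                        other += 1             # (2 1), (3 1 4 2), ...
                    del stack[-k:]
                    stack.append((lo, hi))
                    reduced = True
                    break
    return monotone, other

# a local swap and a global swap each contribute exactly one non-monotone
# node, which is why a purely node-counting feature cannot tell them apart
print(permutation_tree_node_counts([1, 0, 2, 3, 4, 5]))  # (4, 1)
print(permutation_tree_node_counts([3, 4, 5, 0, 1, 2]))  # (4, 1)
```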
4. Reordering Phenomena in Natural Languages
Understanding the complexity of reordering in a given language pair is key to selecting the right SMT models and to improving them. To date, word reordering phenomena in natural languages have mainly been analyzed from a quantitative perspective (Birch, Osborne, and Koehn 2008; Birch, Blunsom, and Osborne 2009). While measuring the amount of reordering is certainly important, understanding which kinds of reordering occur in a given language pair is also essential. To this end, we present a qualitative analysis of word reordering based on linguistic knowledge. More specifically, we draw on a large body of syntactic information collected by linguists from more than 1500 languages, and systematized in the World Atlas of Language Structures (WALS) (Dryer and Haspelmath 2011).13
Following the seminal work of language typologist Matthew S. Dryer, we describe the word order profile of a language by the canonical orders of its constituent sets (word order features). The resulting language pair classification is primarily based on the order of subject, object and verb, and further refined according to the order of several other element pairs, such as noun-adjective, verb-negation, and so forth. We then compare the word order features of several languages that were studied in the SMT field, and show that empirical results generally confirm the existing theoretical knowledge.
4.1 A Qualitative Analysis
The amount of word reordering found in a language pair is known to be a good predictor of SMT performance. Birch, Osborne, and Koehn (2008) considered three variables—reordering quantity, morphological complexity, and historical relatedness—and found the first to have the highest correlation with the BLEU scores of a standard PSMT system on a sample of 110 European language pairs. Birch, Blunsom, and Osborne (2009) further analyzed the distribution of different reordering widths in Arabic–English and Chinese–English, and the ability of two SMT approaches to model them. They found that the PSMT approach is more suitable for language pairs where most reordering is local (Arabic–English), while the hierarchical approach is stronger when medium-range reorderings are dominant (Chinese–English). Still, both PSMT and HSMT failed to capture most of the long-range reorderings found in the reference corpora.
These findings are indeed relevant to our work, but we believe there is also much to learn from theoretical linguistic knowledge. Moreover, a quantitative analysis can suffer from noise in the data, typically originating from automatic word alignments. Birch, Blunsom, and Osborne (2009) used manual word alignment in their study, but this kind of resource is available only for very few language pairs. Noise can also be due to what we can call optional reordering: Human translators often choose to restructure the sentence according to genre conventions or to their personal style, even when this is not required by the target language grammar. Here is an example:
As also noted by Fox (2002), this kind of reordering is not strictly necessary to produce accurate and fluent translations, but its occurrence in parallel corpora affects the automatic reordering measures.
4.2 Word Order Profiles
The word order profile of a language is determined by the canonical order of its constituent sets, or word order features. In general, the basic or canonical order of a constituent set can be established by criteria of frequency (the most common), distribution (the one with the least restricted usage), or pragmatics (the neutral one) (Dryer 2007). Although some languages are said to have free (or flexible) order, it is often possible to detect one that is dominant and neutral. Consider, for instance, English, an SVO language where other orders are used, but only to achieve specific emphasis or topicalization effects:
(1) a. I saw the cat.
    b. The cat, I saw.

(2) a. the tail of the cat
    b. the cat's tail
Based on Dryer (2007) and on the availability of data points in the WALS, we have established a set of 13 core features to determine the word order profile of a language. For the purpose of describing word order differences between language pairs, we have divided the features into two broad categories: clause-level and phrase-level.14 An English example for each feature is provided in Table 3.
Table 3. Word order profiles of the seven sample languages (English, German: Indo-European, Germanic; French: Indo-European, Romance; Arabic: Afro-Asiatic, Semitic; Turkish: Altaic, Turkic; Japanese: Japanese; Chinese: Sino-Tibetan).

| Feature | English | German | French | Arabic | Turkish | Japanese | Chinese |
|---|---|---|---|---|---|---|---|
| Subject, Object, Verb | S-V-O | S-V-O / S-O-V | S-V-O | V-S-O / S-V-O* | S-O-V | S-O-V | S-V-O |
| Oblique phrase | V-O-X | mixed | V-O-X | V-O-X | X-O-V | X-O-V | X-V-O |
| Noun, Relative clause | N-Rel | N-Rel | N-Rel | N-Rel* | Rel-N | Rel-N | Rel-N |
| Subordinator, Clause | Sub-C | Sub-C | Sub-C | Sub-C | C-Sub / Sub-C | C-Sub | mixed** |
| Polar question particle | none | none | initial | initial | final | final | final |
| Content question phrase | initial | initial | initial | initial* | other | other | other |
| Negation, Verb | Neg-V | Neg-V / V-Neg | Neg-V-Neg / V-Neg | Neg-V | V-Neg | V-Neg | Neg-V |
| Adposition, Noun | Adp-N | Adp-N | Adp-N | Adp-N | N-Adp | N-Adp | N-Adp / Adp-N |
| Noun, Genitive | N-Gen / Gen-N | N-Gen | N-Gen | N-Gen | Gen-N | Gen-N | Gen-N |
| Noun, Adjective | A-N | A-N | N-A | N-A | A-N | A-N | A-N |
| Noun, Demonstrative | Dem-N | Dem-N | Dem-N | Dem-N | Dem-N | Dem-N | Dem-N |
| Noun, Numeral | Num-N | Num-N | Num-N | Num-N | Num-N | Num-N | Num-N |
| Adjective, Degree word | Deg-A | Deg-A | Deg-A | A-Deg | Deg-A | Deg-A | Deg-A |
4.2.1 Clause-Level Order Features.
- Subject, Object, Verb [WALS feature 81A]: The first and most important feature is the “ordering of subject, object, and verb in a transitive clause, more specifically declarative clauses in which both the subject and object involve a noun (and not just a pronoun)” (Dryer 2011). For instance, English and French are SVO languages, whereas Turkish is SOV. The distribution of main word order types in a large sample of world languages is given in Table 4. This feature is often used alone to denote the word order profile of a language, because it can be a good predictor of several other features.
- Oblique or Adpositional Phrase [WALS feature 84A]: This feature refers to the position of a phrase functioning as an adverbial modifier of the verb, relative to the position of the object and verb. For instance, English is VOX because it places oblique phrases after the verb and object.
- Noun and Relative Clause [WALS feature 90A]: Order of the relative clause with respect to the noun it modifies.
- Adverbial Subordinator and Subordinate Clause [WALS feature 94A]: Subordinators are used to link adverbial subordinate clauses to the main clause. They can take the form of verbal suffixes or separate words, such as the English subordinating conjunctions when and because.
- Polar Question Particle [WALS feature 92A]: In many languages, polar (yes/no) questions are signaled by specific particles. This feature denotes their position in the sentence (not defined for English).
- Content Question Phrase [WALS feature 93A]: Content questions are characterized by the presence of an interrogative word or phrase (e.g., who, which one). In some languages, like English, these are always placed at the beginning of the sentence. In some others, like Turkish, they take the position of the constituent they replace: For instance, the word ‘ne/what’ replacing the object naturally occurs between subject and verb.
- Negation and Verb [WALS feature 143A]: Order of the negative word or morpheme15 with respect to the main verb. Note that more than one word or morpheme may be necessary to express negation (e.g., ‘ne … pas’ in French).
4.2.2 Phrase-Level Order Features.
- Noun and Adpositions [WALS feature 85A]: Whether a language uses mainly prepositions or postpositions.
- Noun and Genitive [WALS feature 86A]: Order of genitive or possessor noun phrase with respect to the head noun.
- Noun and Adjective [WALS feature 87A]: Order of adjectives with respect to the noun they modify.
- Noun and Demonstrative [WALS feature 88A]: Order of demonstrative words (e.g., this, that) or affixes with respect to the noun they modify.
- Noun and Numeral [WALS feature 89A]: Order of cardinal number words with respect to the noun they modify.
- Adjective and Degree Word [WALS feature 91A]: Order of degree words (e.g., very, more) with respect to the adjective they modify.
4.2.3 Language Sample
For our study, we have chosen seven widely spoken languages. These are English, German, French, Arabic (Modern Standard), Turkish, Japanese, and Chinese (Mandarin). Mainly based on the WALS, we have summarized the word order feature values for all these languages in Table 3. Whenever possible, features were assigned one (or two) values corresponding to the dominant order(s) in that language. When no particular order was given as dominant we marked it as “mixed.”
The main word orders of German and Arabic deserve special mention. In German, the positioning of subject, object, and verb is syntactically determined: Main clauses without an auxiliary verb are SVO, while subordinate clauses and clauses containing an auxiliary are SOV. A further complication, not marked in Table 3, is that the finite verb of a German main clause must occupy the second position; when an auxiliary is present, it fills this slot while the main verb moves to the end of the clause, resulting in the pattern SAuxOV, with the object intervening between auxiliary and main verb. As regards Arabic, whereas the WALS classifies Modern Standard Arabic as VSO, the corpora typically used in SMT show a very mixed distribution of VSO and SVO clauses.16 Carpuat, Marton, and Habash (2012) examined the Arabic–English Treebank and found that, when the subject is expressed, it follows the verb in 70% of the cases, but precedes it in 30%. Similarly, in the Pennsylvania Arabic Treebank, they found an order distribution of 67% VS and 33% SV. Besides frequency, it can be noted that the SVO sentences attested in these corpora are in general pragmatically neutral. We conjecture that this variability in Modern Standard Arabic may be due to the effect of spoken language varieties such as Egyptian, Gulf, Kuwaiti, Iraqi (all listed as SVO by the WALS), and Syrian (listed as VSO/SVO). For these reasons, we classify Arabic as a mixed VSO/SVO language.
It is worth noting that our seven-language sample covers the main word order types of the large majority of the world's languages: namely, SOV, SVO, and VSO (see Table 4).
4.3 Word Order Differences
Linguistically motivated word order profiles can be very helpful in anticipating the kind of word reordering problems that an SMT system will have to face. Clearly, these will also vary with the text genre (written news, speeches, etc.) and with the translation's style and degree of literality. However, we can reasonably expect the syntactic properties of the two languages to determine the general reordering characteristics of the pair.
We will now analyze the reordering characteristics of seven language pairs: English paired with each of the other six languages presented in Table 3, as well as the French–Arabic pair. To this end, we propose the following analysis procedure. As a first indication of reordering complexity, we look at the main word order feature (subject, object, verb). A difference at this level typically results in poor SMT performance. Then, we count the total number of discordant features. To simplify, if a particular element does not exist in a language (e.g., polar question particles in English), we count zero difference for that feature, and when one of the languages has a mixed order we count a half difference. We insist, however, on the qualitative nature of our analysis: Numbers are only meaningful in combination with the list of specific discordant features, as these have a different impact on word reordering. In particular, we find it essential for SMT to distinguish between clause-level and phrase-level differences (CDiff and PDiff), because the former account for most of the longer-range word movements and the latter for the shorter ones. Thus, a language pair with only phrase-level discordant features is likely to be suitable for a PSMT approach, where reordering is managed through local distortion or inside translation units. By contrast, the presence of many clause-level differences typically calls for a tree-based solution, either at preprocessing or at decoding time. As we will see, some pairs lie on the borderline, with only one or a few clause-level differences. Finally, it should be noted that, even among features of the same group, some have more impact on SMT than others, due to their frequency or to the average length of their constituents. For instance, the order of noun and genitive is more important than that of adjective and degree word.
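As an illustration, the counting convention just described can be encoded as follows. Feature values are copied from Table 3 for English and German only, “/” alternatives are represented as sets, and a missing feature is encoded as None; this is one plausible encoding of the procedure, not the exact computation used to produce the figures below.

```python
# Clause-level (C) and phrase-level (P) word order features, with values
# taken from Table 3 for English and German; None marks a feature that
# does not exist in the language (e.g., polar question particles).
CLAUSE_FEATURES = ["svo", "oblique", "rel-clause", "subordinator",
                   "polar-q", "content-q", "negation"]
PHRASE_FEATURES = ["adposition", "genitive", "adjective",
                   "demonstrative", "numeral", "degree"]

ENGLISH = {"svo": {"S-V-O"}, "oblique": {"V-O-X"}, "rel-clause": {"N-Rel"},
           "subordinator": {"Sub-C"}, "polar-q": None, "content-q": {"initial"},
           "negation": {"Neg-V"}, "adposition": {"Adp-N"},
           "genitive": {"N-Gen", "Gen-N"}, "adjective": {"A-N"},
           "demonstrative": {"Dem-N"}, "numeral": {"Num-N"}, "degree": {"Deg-A"}}

GERMAN = {"svo": {"S-V-O", "S-O-V"}, "oblique": {"mixed"}, "rel-clause": {"N-Rel"},
          "subordinator": {"Sub-C"}, "polar-q": None, "content-q": {"initial"},
          "negation": {"Neg-V", "V-Neg"}, "adposition": {"Adp-N"},
          "genitive": {"N-Gen"}, "adjective": {"A-N"},
          "demonstrative": {"Dem-N"}, "numeral": {"Num-N"}, "degree": {"Deg-A"}}

def feature_difference(a, b):
    """0 if the dominant orders agree, 0.5 if one language is mixed or the
    value sets only partially overlap, 1 if fully discordant; a feature
    missing in either language counts as 0, as described in the text."""
    if a is None or b is None or a == b:
        return 0.0
    if a & b or "mixed" in a or "mixed" in b:
        return 0.5
    return 1.0

def order_profile_difference(lang1, lang2):
    cdiff = sum(feature_difference(lang1[f], lang2[f]) for f in CLAUSE_FEATURES)
    pdiff = sum(feature_difference(lang1[f], lang2[f]) for f in PHRASE_FEATURES)
    return cdiff, pdiff

print(order_profile_difference(ENGLISH, GERMAN))   # (1.5, 0.5), as reported below
```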
English and German [ Main order: different; CDiff: 1.5; PDiff: 0.5 ] The main word order of German is SVO or SOV, according to the syntactic context (cf. Section 4.2). German also differs from English with respect to the position of oblique phrases and that of the negation: Both are fixed in English but mixed in German. At the phrase level, German predominantly places the genitive after the noun, while English displays both orders.
Thus, despite belonging to the same family branch (Indo-European/Germanic), this pair displays complex reordering patterns. Indeed, German–English reordering has been widely studied in SMT and is still an open topic. At the Workshop on Statistical Machine Translation 2014 (Bojar et al. 2014), a syntax-based string-to-tree SMT approach (Williams et al. 2014) won in both language directions (official results excluding online systems). At the International Workshop on Spoken Language Translation 2014 (Cettolo et al. 2014), the best submission was a combination of PSMT with POS- and syntax-based pre-ordering (Slawik et al. 2014), string-to-tree syntax-based SMT, and factored PSMT (Birch et al. 2014).
English and French [ Main order: same; CDiff: 0.5; PDiff: 1.5 ] Most clause-level features have the same values in French as in English, except for the negation, which is typically expressed by two words in French: one preceding and one following the verb.17 At the phrase level, differences are found in the location of genitives and adjectives. Thus, English and French have very similar clause-level orders, but reordering is abundant at the local level.
This is a case where reordering is mostly well handled by string-based PSMT. As a reference, the three top English-to-French WMT14 systems (official results excluding online systems) were all phrase-based. A similar trend was observed in the French-to-English track.
English and Arabic [ Main order: different; CDiff: 0.5; PDiff: 2.5 ] The dominant Arabic order is VSO, followed by SVO (cf. Section 4.2). Apart from this important difference, all other clause-level features agree between Arabic and English. At the phrase level, differences are found in genitives, adjectives, and degree words.
As a result, reordering is overwhelmingly local, but a few crucial long-range reorderings also occur regularly. Thus, this pair is challenging for PSMT but, at the same time, not well suited for a tree-based approach. As shown by Zollmann et al. (2008) and Birch, Blunsom, and Osborne (2009), PSMT performs similarly to or better than HSMT for the Arabic-to-English language pair. However, HSMT was shown to better cope with the reordering of VSO sentences (Bisazza 2013). Pre-ordering of Arabic VSO sentences for translation into English has proved to be a particularly difficult task (Green, Sathi, and Manning 2009; Carpuat, Marton, and Habash 2010) and has inspired work on hybrid pre-ordering, where multiple verb pre-orderings are fed to a PSMT decoder (Bisazza and Federico 2010; Andreas, Habash, and Rambow 2011); see also Section 2.4.
English and Turkish [ Main order: different; CDiff: 5.5; PDiff: 1.5 ] Turkish is a good example of a head-final language, except for the fact that it can use both clause-final and clause-initial subordinators.18 As a result, almost all clause-level features are discordant in this pair. At the phrase level, Turkish mainly differs from English in its use of postpositions instead of prepositions. Among our language pairs, this is one of the most difficult to reorder for an SMT system. The complex nature of its reordering phenomena suggests a good fit for tree-based SMT approaches, and indeed, HSMT was shown to significantly outperform PSMT between Turkish and English in both language directions (Ruiz et al. 2012; Yılmaz et al. 2013). However, state-of-the-art SMT quality in this language pair is still very low, mostly because of the agglutinative nature of Turkish, which makes it difficult to disentangle word reordering issues from rich morphology issues. Attempting to address both issues in an English-to-Turkish factored PSMT system, Yeniterzi and Oflazer (2010) pre-process the parsed English side with a number of syntax-to-morphology mapping rules and constituent pre-ordering rules, dealing with local and global reordering phenomena, respectively. Only the former, though, resulted in better translation quality.
English and Japanese [ Main order: different; CDiff: 6; PDiff: 1.5 ] Japanese is the prototypical example of a head-final language. In this pair all clause-level features are discordant, whereas at the phrase level, Japanese differs from English in its use of postpositions and its strictly head-final genitive construction. This pair, like the previous one, is extremely challenging for PSMT because of the hierarchical nature of its reordering phenomena and the high frequency of long-range word movements. Indeed, translation between English and Japanese has spurred a remarkable amount of work on pre-ordering, post-ordering, and decoding-time reordering. In 2013 the PatentMT evaluation campaign of the NTCIR conference (Goto et al. 2013a) saw rule-based and hybrid systems largely outperform the purely statistical ones in Japanese-to-English. The highest-ranked SMT submission was actually a combination of three SMT systems, including a baseline PSMT method, a rule-based pre-ordering method, and a post-ordering method based on string-to-tree syntax-based SMT (Sudoh et al. 2013). Interestingly, the trend was different in the opposite translation direction, English-to-Japanese, where all rule-based MT systems were significantly outperformed by a PSMT system that pre-ordered the English input with a few manual head-finalization rules based on dependency parse trees (Sudoh et al. 2013).
English and Chinese [ Main order: same; CDiff: 3.5; PDiff: 1 ] Despite belonging to the same main order type, these two languages differ in the positioning of oblique phrases, relative clauses, interrogative phrases, and subordinating words.19 Moreover, word order variations are quite common in Chinese to mark the topic of a sentence (i.e., what is being talked about). Comparing the two languages at the phrase level, we find partial disagreement in the use of genitives and adpositions (Chinese has both prepositions and postpositions). Thus, this pair too is characterized by very complex reordering, hardly manageable by a PSMT system. This is confirmed by a number of empirical results showing that tree-based approaches (particularly HSMT) consistently outperform PSMT in Chinese-to-English evaluations (Zollmann et al. 2008; Birch, Blunsom, and Osborne 2009). It is worth noting that translation between Chinese and English has been the main motivation and test bed for the development of HSMT.
French and Arabic [ Main order: different; CDiff: 1.5; PDiff: 1 ] At the clause level, this pair differs in main word order (SVO versus VSO or SVO) like the English–Arabic pair, but also in the order of negation and verb. On the other hand, phrase-level order is notably more similar, with only one discordant feature of minor importance (adjective and degree word).
Less research has been published on this language pair. Nevertheless, Hasan and Ney (2008) and Schwenk and Senellart (2009) chose a PSMT approach to experiment with an Arabic-to-French task.
Figure 8 illustrates the reordering characteristics of three language pairs by means of sentence examples that were automatically word-aligned with GIZA++ (Och and Ney 2003) (intersection of direct and inverse alignments). In the first row, we see two English–German sentence pairs; in both cases, most of the points lie close to the diagonal, representing an overall monotonic translation, whereas a few isolated points denote the very long-range reordering of verbs. Similarly, in the two English–Arabic sentence pairs, we mostly observe local reorderings, with the exception of a few isolated points corresponding to the Arabic clause-initial verbs. Finally, the two Turkish–English examples display global reordering, due to the high number of clause-level order differences.
Where possible, it is interesting to relate our analysis to previously published measures of reordering based on parallel data. To our knowledge, the most comprehensive results of this kind are reported by Birch (2011), who formulates reordering as a binary process occurring between two blocks that are adjacent in the source (cf. ITG constraints in Section 2.1). Here, the general amount of reordering in a language pair is estimated by the RQuantity, defined as the sum of the spans of all the reordered blocks on the target side, normalized by the length of the target sentence and averaged over a corpus. Based on the Europarl corpus (Koehn 2002) and automatic word alignments, Birch (2011) reports average RQuantity values of 0.586/0.608 in English-to-German/German-to-English, versus only 0.402/0.395 in English-to-French/French-to-English. The manually aligned GALE corpus (LDC2006E93) is instead used to measure the distribution of reordering widths, defined as the sum of the swapped blocks' target spans. Widths are binned into short (2–4 words), medium (5–8), and long (>8). In Chinese-to-English there are about 0.8/0.9/0.9 short/medium/long reordered blocks per sentence, whereas in Arabic-to-English there are 1.1/0.4/0.2 short/medium/long reordered blocks per sentence. These figures align nicely with our classification of phrase- and clause-level differences, which we have related to shorter- and longer-range reordering, respectively: Chinese-to-English (PDiff: 1, CDiff: 3.5) displays much more reordering overall, whereas Arabic-to-English (PDiff: 2.5, CDiff: 0.5) has more short reorderings but far fewer medium and long ones.
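Assuming the harder part (extracting the reordered blocks from word alignments) is done upstream, the two quantities discussed here reduce to a few lines. This is only a sketch of the definitions, with the bin boundaries taken from the text above.

```python
def r_quantity(sentences):
    """Average RQuantity over a corpus (Birch 2011): per sentence, the
    target-side spans of the reordered blocks are summed and normalized
    by the target length. Each sentence is a (target_length, block_spans)
    pair; block extraction from word alignments is assumed upstream.
    """
    return sum(sum(spans) / length for length, spans in sentences) / len(sentences)

def width_histogram(block_spans):
    """Bin reordering widths into short (2-4), medium (5-8), long (>8)."""
    bins = {"short": 0, "medium": 0, "long": 0}
    for span in block_spans:
        if span <= 4:
            bins["short"] += 1
        elif span <= 8:
            bins["medium"] += 1
        else:
            bins["long"] += 1
    return bins

# a 20-word sentence with one local swap (span 3) and one clause-level
# movement (span 11)
print(r_quantity([(20, [3, 11])]))   # 0.7
print(width_histogram([3, 11]))      # {'short': 1, 'medium': 0, 'long': 1}
```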
The advantage of using our proposed analysis is that it can be easily extended to other language pairs thanks to the wide coverage of WALS, whereas data-driven analyses depend on the availability of high-quality word-aligned parallel corpora.
5. Discussion and Conclusions
We have provided a comprehensive overview of how the word reordering problem is modeled within different string-based and tree-based SMT frameworks, and as a stand-alone task. To summarize, string-based SMT considers all permutations of the source sentence and relies on separate reordering models to score them. On the other hand, tree-based SMT tightly couples reordering to translation and, during decoding, only or mostly considers word permutations that are licensed by the learned translation model. In practice, both approaches apply general heuristic constraints on the maximum reordering width to avoid explosion of the search space.
The main weakness of a string-based approach like phrase-based SMT (PSMT) with regard to reordering lies in its coarse definition of the reordering search space. In this framework, relaxing the distortion limit means dramatically increasing the size of the search space, making the reordering model's task extremely complex and intensifying the risk of both search and model errors. As a result, PSMT is generally good at handling local reordering but largely fails to capture long-range reordering phenomena.
As for tree-based SMT, a distinction must be made between methods that extract hierarchical structure directly from parallel data and methods that rely on syntactic annotation provided by pre-trained monolingual parsers. A prominent example of the former is hierarchical phrase-based SMT (HSMT), which models reordering via partially lexicalized translation rules. Although this results in a more principled definition of the reordering search space, HSMT lacks the ability to generalize the learned reordering patterns from specific lexical clues to whole word or phrase categories.
Finally, reordering may be constrained by syntactic information in the source or target language, or both. When syntax is used in the source language, reordering is performed by transforming a given parse tree of the input sentence. When syntax is used in the target language, reordering is allowed only if it results in a grammatically valid target tree fragment. Syntactic information is adopted both by syntax-based SMT, where the tree is reordered and translated simultaneously, and by syntactic pre-ordering (or post-ordering) methods, where the tree is transformed before (or after) translation. The success of these approaches largely depends on the degree of isomorphism of the modeled language pair, as well as on the parser's performance, which can vary substantially across languages.
After describing how word reordering is modeled in SMT, we have questioned why different language pairs appear to need different reordering modeling solutions. To answer this question, we have outlined the word order profiles of seven widely spoken languages, based on a large body of linguistic knowledge. Then we have examined their pairwise differences in detail. Finally, we have used these differences to interpret the empirical findings of previous work that evaluated various SMT reordering techniques in those language pairs.
We conclude from our analysis that a few linguistic facts can be very useful to predict the reordering characteristics of a language pair and to select the SMT approach that best suits them. In particular, string-based PSMT is preferable for language pairs with only constituent-level differences, like French–English, as these mostly imply short- or medium-range reordering patterns that can be captured by local distortion. On the other hand, language pairs with many clause-level order differences (e.g., Japanese–English, Turkish–English, Chinese–English) are best handled by tree-based SMT or by syntax-based pre-/post-ordering approaches that can handle complex, hierarchical reordering patterns. While this may seem obvious, we notice that, in the literature, the choice of an optimal SMT framework for a new translation task is often driven by costly empirical trials rather than by linguistic knowledge. Finally, the pairs with mostly constituent-level differences and only one or a few clause-level differences (e.g., German–English and Arabic–English) do not fit well into either category. In sentences without global reordering, HSMT can underperform PSMT, likely because of the much larger search space explored. At the same time, applying PSMT to such pairs with heuristic reordering constraints can lead to systematic errors in the positioning of important elements of the sentence, such as verbs. Not surprisingly, these language pairs have been the object of a fair amount of work aimed at refining the reordering space of both PSMT and HSMT. Our word order analysis can easily be extended to other language pairs, using the methodology presented in Section 4.
In conclusion, finding a definitive solution to the problem of word reordering implies answering the fundamental research questions of SMT: Is structure needed to translate? If so, what kind of structure and how should it be used? A growing part of the research community has converged on a positive answer to the former question, but the latter remains open to date. While the field keeps evolving around these questions, SMT has already reached the stage of applied language technology. We hope this survey will provide practical guidelines to the system developers of today and, at the same time, good scientific references to the researchers elaborating the solutions of tomorrow.
Acknowledgments
We would like to thank Alexandra Birch, Marta R. Costa-jussà, Nadir Durrani, Chris Dyer, Adrià de Gispert, Isao Goto, Spence Green, Zhongqiang Huang, Maxim Khalilov, Graham Neubig, Khalil Sima'an, Miloš Stanojević, Katsuhito Sudoh, Christoph Tillmann, Taro Watanabe, and Richard Zens, as well as the anonymous reviewers, for providing valuable feedback on an earlier version of this survey.
Notes
Nevertheless, learning and understanding a new language has been shown to be more difficult when the new language is structurally distant from one's native language (Corder 1979).
Peer-reviewed conferences, workshops, and journal papers listed by the Machine Translation Archive: http://www.mt-archive.info/srch/subjects.htm.
Automatic measures of translation quality are discussed in Section 3.
For a comparative study of the IBM and ITG constraints, we refer the reader to Zens and Ney (2003).
Empirical evidence against this was presented by Wellington, Waxmonsky, and Melamed (2006).
The reordering models mentioned herein are explained in detail in the next section.
Some phrase orientation models further distinguish between discontinuous left and discontinuous right.
More pre-ordering techniques will be discussed in Section 2.4.
Two other models utilizing function words as the anchors of global reordering decisions are proposed in Setiawan et al. (2013) and Setiawan, Zhou, and Xiang (2013). Although integrated in a syntax-based system (Shen, Xu, and Weischedel 2010), these models are in principle applicable to other SMT frameworks such as HSMT.
Li et al. (2007) experiment with a small number of n-best pre-orderings given as alternative inputs to the SMT system.
Note the similarity to the pre-ordering approach of Costa-jussà and Fonollosa (2006), except that here the monolingual SMT process is applied to the target language after a monotonic translation phase.
In this section, phrase is used in its traditional syntactic sense—i.e., a group of words forming a constituent—as opposed to the notion of data-driven phrase adopted by phrase-based SMT.
Unlike the WALS, we do not distinguish between negative words and affixes for this feature.
VOS order is also admitted in Arabic, but only in specific contexts (e.g., when the object is expressed by a pronoun).
Pre-verbal negation can be omitted in colloquial French.
In Turkish, non-finite subordinate clauses are typically placed before the main clause and linked to it by a clause-final subordinator (e.g., rağmen/although), whereas finite subordinate clauses can be placed after the main clause and introduced by a clause-initial subordinator (e.g., ama/but). The former is dominant in written language.
Subordinating words in Chinese can occur at the beginning of the subordinate clause, at its end, or even inside it (Li 2008).
References
Author notes
Informatics Institute, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands. E-mail: [email protected].
Fondazione Bruno Kessler, Via Sommarive 18, 38123 Povo, Trento, Italy. E-mail: [email protected].