Abstract
Linguistic typology aims to capture structural and semantic variation across the world’s languages. A large-scale typology could provide excellent guidance for multilingual Natural Language Processing (NLP), particularly for languages that suffer from the lack of human labeled resources. We present an extensive literature survey on the use of typological information in the development of NLP techniques. Our survey demonstrates that to date, the use of information in existing typological databases has resulted in consistent but modest improvements in system performance. We show that this is due to both intrinsic limitations of databases (in terms of coverage and feature granularity) and under-utilization of the typological features included in them. We advocate for a new approach that adapts the broad and discrete nature of typological categories to the contextual and continuous nature of machine learning algorithms used in contemporary NLP. In particular, we suggest that such an approach could be facilitated by recent developments in data-driven induction of typological knowledge.
1. Introduction
The world’s languages may share universal features at a deep, abstract level, but the structures found in real-world, surface-level texts can vary significantly. This cross-lingual variation has challenged the development of robust, multilingually applicable Natural Language Processing (NLP) technology, and as a consequence, existing NLP is still largely limited to a handful of resource-rich languages. The architecture design, training, and hyper-parameter tuning of most current algorithms are far from being language-agnostic, and often inadvertently incorporate language-specific biases (Bender 2009, 2011). In addition, most state-of-the-art machine learning models rely on supervision from (large amounts of) labeled data—a requirement that cannot be met for the majority of the world’s languages (Snyder 2010).
Over time, approaches have been developed to address the data bottleneck in multilingual NLP. These include unsupervised models that do not rely on the availability of manually annotated resources (Snyder and Barzilay 2008; Vulić, De Smet, and Moens 2011, inter alia) and techniques that transfer data or models from resource-rich to resource-poor languages (Padó and Lapata 2005; Das and Petrov 2011; Täckström, McDonald, and Uszkoreit 2012, inter alia). Some multilingual applications, such as Neural Machine Translation and Information Retrieval, have been facilitated by joint models that learn from several languages (Ammar et al. 2016; Johnson et al. 2017, inter alia) or via multilingual distributed representations of words and sentences (Mikolov, Le, and Sutskever 2013, inter alia). Such techniques can lead to significant improvements in performance and parameter efficiency over monolingual baselines (Pappas and Popescu-Belis 2017).
Another, highly promising source of information for modeling cross-lingual variation can be found in the field of Linguistic Typology. This discipline aims to systematically compare and document the world’s languages based on the empirical observation of their variation with respect to cross-lingual benchmarks (Comrie 1989; Croft 2003). Research efforts in this field have resulted in large typological databases—for example, most prominently the World Atlas of Language Structures (WALS) (Dryer and Haspelmath 2013). Such databases can serve as a source of guidance for feature choice, algorithm design, and data selection or generation in multilingual NLP.
Previous surveys on this topic have covered earlier research integrating typological knowledge into NLP (O’Horan et al. 2016; Bender 2016). However, there is still no consensus on the general effectiveness of this approach. For instance, Sproat (2016) has argued that data-driven machine learning should not need to commit to any assumptions about categorical and manually defined language types as defined in typological databases.
In this article, we provide an extensive survey of typologically informed NLP methods to date, including the more recent neural approaches not previously surveyed in this area. We consider the impact of typological (including both structural and semantic) information on system performance and discuss the optimal sources for such information. Traditionally, typological information has been obtained from hand-crafted databases and, therefore, it tends to be coarse-grained and incomplete. Recent research has focused on inferring typological information automatically from multilingual data (Asgari and Schütze 2017, inter alia), with the specific purpose of obtaining a more complete and finer-grained set of feature values. We survey these techniques and discuss ways to integrate their predictions into the current NLP algorithms. To the best of our knowledge, this has not yet been covered in the existing literature.
In short, the key questions our paper addresses can be summarized as follows: (i) Which NLP tasks and applications can benefit from typology? (ii) What are the advantages and limitations of currently available typological databases? Can data-driven inference of typological features offer an alternative source of information? (iii) Which methods have been proposed to incorporate typological information in NLP systems, and how should such information be encoded? (iv) To what extent does the performance of typology-savvy methods surpass typology-agnostic baselines? How does typology compare with other criteria of language classification, such as genealogy? (v) How can typology be harnessed for data selection, rule-based systems, and model interpretation?
We start this survey with a brief overview of Linguistic Typology (§ 2) and multilingual NLP (§ 3). After these introductory sections we proceed to examine the development of typological information for NLP, including that in hand-crafted typological databases and that derived through automatic inference from linguistic data (§ 4). In the same section, we also describe typological features commonly selected for application in NLP. In § 5 we discuss ways in which typological information has been integrated into NLP algorithms, identifying the main trends and comparing the performance of a range of methods. Finally, in § 6 we discuss the current limitations in the use of typology in NLP and propose novel research directions inspired by our findings.
2. Overview of Linguistic Typology
There is no consensus on the precise number of languages in the world. For example, Glottolog provides an estimate of 7,748 (Hammarström et al. 2016), whereas Ethnologue (Lewis, Simons, and Fennig 2016) refers to 7,097.1 This is because defining what constitutes a ’language’ is in part arbitrary. Mutual intelligibility, which is used as the main criterion for grouping different language variants under the same label, is gradient in nature. Moreover, social and political factors play a role in the definition of a language.
Linguistic Typology is the discipline that studies the variation among the world’s languages through their systematic comparison (Comrie 1989; Croft 2003). The comparison is challenging because linguistic categories cannot be predefined (Haspelmath 2007). Rather, cross-linguistically significant categories emerge inductively from the comparison of known languages, and are progressively refined with the discovery of new languages. Crucially, the comparison needs to be based on functional criteria rather than formal criteria. Typologists distinguish between constructions, which are abstract and universal functions, and strategies, the types of expression each language adopts to encode a specific construction (Croft et al. 2017). For instance, the passive voice is considered a strategy that emphasizes the semantic role of the patient: some languages lack this strategy and use other strategies to express the construction. Awtuw (Sepik family), for example, simply allows the subject to be omitted.
The classification of the strategies in each language is grounded in typological documentation (Bickel 2007, page 248). Documentation is empirical in nature and involves collecting texts or speech excerpts, and assessing the features of a language based on their analysis. The resulting information is stored in large databases (see § 4.1) of attribute–value pairs (henceforth referred to as typological features), where usually each attribute corresponds to a construction and each value to the most widespread strategy in a specific language.
Analysis of cross-lingual patterns reveals that variation is bounded and far from random (Greenberg 1966b). Indeed, typological features can be interdependent: The presence of one feature may imply another (in one direction or both). Such an interdependence is called a restricted universal, as opposed to unrestricted universals, which specify properties shared unconditionally by all languages. Typological universals (restricted or not) are rarely absolute (i.e., exceptionless); rather, they are tendencies (Corbett 2010), hence they are called “statistical.” For example, consider this restricted universal: If a language (such as Hmong Njua, Hmong–Mien family) has prepositions, then genitive-like modifiers follow their head. If, instead, a language (such as Slavey, Na–Dené family) has postpositions, the order of heads and genitive-like modifiers is swapped. However, there are known exceptions: Norwegian (Indo–European) has prepositions, but genitives precede their syntactic heads.2 Moreover, some typological features are rare whereas others are highly frequent. Interestingly, this also means that some languages are intuitively more plausible than others. Implications and frequencies of features are important, as they unravel the deeper explanatory factors underlying the patterns of cross-linguistic variation (Dryer 1998).
Cross-lingual variation can be found at all levels of linguistic structure. The seminal works on Linguistic Typology were concerned with morphosyntax, mainly morphological systems (Sapir 2014 [1921], page 128) and word order (Greenberg 1966b). This level of analysis deals with the form of meaningful elements (morphemes and words) and their combination, hence it is called structural typology. As an example, consider the alignment of the nominal case system (Dixon 1994): Some languages like Nenets (Uralic) use the same case for subjects of both transitive and intransitive verbs, and a different one for objects (nominative–accusative alignment). Other languages like Lezgian (Northeast Caucasian) group together intransitive subjects and objects, and treat transitive subjects differently (ergative–absolutive alignment).
On the other hand, semantic typology studies languages at the semantic and pragmatic levels. This area was pioneered by anthropologists interested in kinship (d’Andrade 1995) and colors (Berlin and Kay 1969), and was expanded by studies on lexical classes (Dixon 1977). The main focus of semantic typology has been to categorize languages in terms of concepts (Evans 2011) in the lexicon, in particular with respect to the 1) granularity, 2) division (boundary location), and 3) membership criteria (grouping and dissection). For instance, consider the event expressed by to open (something). It lacks a precise equivalent in languages such as Korean, where similar verbs overlap in meaning only in part (Bowerman and Choi 2001). For instance, ppaeda means ‘to remove an object from tight fit’ (used, e.g., for drawers) and pyeolchida means ‘to spread out a flat thing’ (used, e.g., for hands). Moreover, in most expressions, the English verb encodes the resulting state of the event, whereas an equivalent verb in another language such as Spanish (abrir) rather expresses the manner of the event (Talmy 1991). Although variation in the categories is pervasive due to their partly arbitrary nature, it is constrained cross-lingually via shared cognitive constraints (Majid et al. 2007).
Similarities between languages do not always arise from language-internal dynamics but also from external factors. In particular, similarities can be inherited from a common ancestor (genealogical bias) or borrowed through contact with a neighbor (areal bias) (Bakker 2010). Owing to genealogical inheritance, there are features that are widespread within a family but extremely rare elsewhere (e.g., the presence of click phonemes in the Khoisan languages). As an example of geographic percolation, most languages in the Balkan area (Albanian, Bulgarian, Macedonian, Romanian, Torlakian) have developed, even without a common ancestor, a definite article placed after its noun, simply because of their close proximity.
Research in linguistic typology has sought to disentangle such factors and to integrate them into a single framework aimed at answering the question “what’s where why?” (Nichols 1992). Language can be viewed as a hybrid biological and cultural system. The two components co-evolved in a twin track, developing partly independently and partly via mutual interaction (Durham 1991). The causes of cross-lingual variation can therefore be studied from two complementary perspectives—from the perspective of functional theories or event-based theories (Bickel 2015). The former theories involve cognitive and communicative principles (internal factors) and account for the origin of variation, whereas the latter ones emphasize the imitation of patterns found in other languages (external factors) and account for the propagation (or extinction) of typological features (Croft 1995, 2000).
Examples of functional principles include factors associated with language use, such as the frequency or processing complexity of a pattern (Cristofaro and Ramat 1999). Patterns that are easy or widespread become integrated into the grammar (Haspelmath 1999, inter alia). Moreover, functional principles allow speakers to draw similar inferences from similar contexts, leading to locally motivated pathways of diachronic change through the process known as grammaticalization (Greenberg 1966a, 1978; Bybee 1988). For instance, in the world’s languages (including English) the future tense marker almost always originates from verbs expressing direction, duty, will, or attempt, because they imply a future situation.
The diachronic and gradual origin of the changes in language patterns and the statistical nature of the universals explain why languages do not behave monolithically. Each language can adopt several strategies for a given construction and partly inconsistent semantic categories. In other words, typological patterns tend to be gradient. For instance, the semantics of grammatical and lexical categories can be represented on continuous multi-dimensional maps (Croft and Poole 2008). Bybee and McClelland (2005) have noted how this gradience resembles the patterns learned by connectionist networks (and statistical machine learning algorithms in general). In particular, they argue that such architectures are sensitive to both local (contextual) information and general patterns, as well as to their frequency of use, similarly to natural languages.
Typological documentation is limited by the fact that the evidence available for each language is highly unbalanced and many languages are not even recorded in a written form.3 Nevertheless, large typological databases such as WALS (Dryer and Haspelmath 2013) have an impressive coverage (syntactic features for up to 1,519 languages). Where such information can be usefully integrated into machine learning, it provides an alternative form of guidance to the manual construction of resources, which are largely lacking for low-resource languages. We discuss the existing typological databases and the integration of their features into NLP models in sections 4 and 5.
3. Overview of Multilingual NLP
The scarcity of data and resources in many languages represents a major challenge for multilingual NLP. Many state-of-the-art methods rely on supervised learning, hence their performance depends on the availability of manually crafted data sets annotated with linguistic information (e.g., treebanks, parallel corpora) and/or lexical databases (e.g., terminology databases, dictionaries). Although similar resources are available for key tasks in a few well-researched languages, the majority of the world’s languages lack them almost entirely. This gap cannot be easily bridged: The creation of linguistic resources is a time-consuming process and requires skilled labor. Furthermore, the immense range of possible tasks and languages makes the aim of a complete coverage unrealistic.
One solution to this problem explored by the research community abandons the use of annotated resources altogether and instead focuses on unsupervised learning. This class of methods infers probabilistic models of the observations given some latent variables. In other words, it unravels the hidden structures within unlabeled text data. Although these methods have been used extensively for multilingual applications (Snyder and Barzilay 2008; Vulić, De Smet, and Moens 2011; Titov and Klementiev 2012, inter alia), their performance tends to lag behind the more linguistically informed supervised learning approaches (Täckström, McDonald, and Nivre 2013). Moreover, they have been rarely combined with typological knowledge. For these reasons, we do not review them in this section.
Other promising ways to overcome data scarcity include transferring models or data from resource-rich to resource-poor languages (§ 3.1) or learning joint models from annotated examples in multiple languages (§ 3.2) in order to leverage language interdependencies. Early approaches of this kind have relied on universal, high-level delexicalized features, such as part of speech (PoS) tags and dependency relations. More recently, however, the incompatibility of (language-specific) lexica has been countered by mapping equivalent words into the same multilingual semantic space through representation learning (§ 3.3). This has enriched language transfer and multilingual joint modeling with lexicalized features. In this section, we provide an overview of these methods, as they constitute the backbone of the typology-savvy algorithms surveyed in § 5.
3.1 Language Transfer
Linguistic information can be transferred from resource-rich languages to resource-poor languages; these are commonly referred to as source languages and target languages, respectively. Language transfer is challenging, as it requires us to match word sequences with different lexica and word orders, or syntactic trees with different (anisomorphic) structures (Ponti et al. 2018a). As a consequence, the information obtained from the source languages typically needs to be adapted, by tailoring it to the properties of the target languages. The methods developed for language transfer include annotation projection, (de)lexicalized model transfer, and translation (Agić et al. 2014). We illustrate them here using dependency parsing as an example.
Annotation projection was introduced in the seminal work of Yarowsky, Ngai, and Wicentowski (2001) and Hwa et al. (2005). In its original formulation, as illustrated in Figure 1(a), a source text is parsed and word-aligned with a target parallel raw text. Its annotation (e.g., PoS tags and dependency trees) is then projected directly between corresponding words and used to train a supervised model on the target language. Later refinements of this process, known as soft projection, complement word alignment with constraints based on distributional similarity (Das and Petrov 2011) or constituent membership (Padó and Lapata 2009). Moreover, source model expectations on labels (Wang and Manning 2014; Agić et al. 2016) or sets of most likely labels (Khapra et al. 2011; Wisniewski et al. 2014) can be projected instead of single categorical labels. These can constrain unsupervised models by reducing the divergence between the expectations on target labels and on source labels, or support “ambiguous learning” on the target language, respectively.
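As a rough illustration of direct annotation projection, the sketch below copies PoS tags from a source sentence to a target sentence along 1-to-1 word alignments. The tag set, alignment format, and function name are hypothetical simplifications; real projection must also handle unaligned and many-to-one alignments, which are glossed over here.

```python
# A minimal sketch of direct annotation projection, assuming hypothetical
# inputs: PoS tags on the source side and 1-to-1 word alignments.

def project_pos_tags(source_tags, alignment, target_length, unk="X"):
    """Copy each source tag to its aligned target position."""
    target_tags = [unk] * target_length          # unaligned target tokens stay unknown
    for src_i, tgt_i in alignment:
        target_tags[tgt_i] = source_tags[src_i]  # direct projection along the alignment
    return target_tags

# Toy example: a source sentence "I eat apples" aligned to a verb-final target.
src_tags = ["PRON", "VERB", "NOUN"]
alignment = [(0, 0), (1, 2), (2, 1)]
print(project_pos_tags(src_tags, alignment, 3))  # ['PRON', 'NOUN', 'VERB']
```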
Model transfer instead involves training a model (e.g., a parser) on a source language and applying it on a target language (Zeman and Resnik 2008), as shown in Figure 1(b). Due to their incompatible vocabularies, models are typically delexicalized prior to transfer and take language-independent (Nivre et al. 2016) or harmonized (Zhang et al. 2012) features as input. In order to bridge the vocabulary gap, model transfer was later augmented with multilingual Brown word clusters (Täckström, McDonald, and Uszkoreit 2012) or multilingual distributed word representations (see § 3.3).
Machine translation offers an alternative to lexicalization in the absence of annotated parallel data. As shown in Figure 1(c), a source sentence is translated into a target language with a machine translation system (Banea et al. 2008) or through a bilingual lexicon (Durrett, Pauls, and Klein 2012). Its annotation is then projected and used to train a target-side supervised model. Translated documents can also be used to generate multilingual sentence representations, which facilitate language transfer (Zhou, Wan, and Xiao 2016).
Some of these methods are hampered by their resource requirements. In fact, annotation projection and translation need parallel texts to align words and train translation systems, respectively (Agić, Hovy, and Søgaard 2015). Moreover, comparisons of state-of-the-art algorithms revealed that model transfer is competitive with machine translation in terms of performance (Conneau et al. 2018). Partly owing to these reasons, typological knowledge has been mostly harnessed in connection with model transfer, as we discuss in § 5.2. Moreover, typological features can guide the selection of the best source language to match to a target language for language transfer (Agić et al. 2016, inter alia), which benefits all the above-mentioned methods (see § 5.3).
3.2 Multilingual Joint Supervised Learning
NLP models can be learned jointly from the data in multiple languages. In addition to facilitating intrinsically multilingual applications, such as Neural Machine Translation and Information Extraction, this approach often surpasses language-specific monolingual models, as it can leverage more (although noisier) data (Ammar et al. 2016, inter alia). This is particularly true in scenarios where either the target or all languages are resource-lean (Khapra et al. 2011) or in code-switching scenarios (Adel, Vu, and Schultz 2013). In fact, multilingual joint learning improves over pure model transfer even in scenarios with limited amounts of labeled data in the target language(s) (Fang and Cohn 2017).4
A key strategy for multilingual joint learning is parameter sharing (Johnson et al. 2017). More specifically, in state-of-the-art neural architectures, input and hidden representations can be either private (language-specific) or shared across languages. Shared representations are the result of tying the parameters of a network component across languages, such as word embeddings (Guo et al. 2016), character embeddings (Yang, Salakhutdinov, and Cohen 2016), hidden layers (Duong et al. 2015b), or the attention mechanism (Pappas and Popescu-Belis 2017). Figure 2 shows an example where all the components of a PoS tagger are shared between two languages (Bambara on the left and Warlpiri on the right). Parameter sharing, however, does not necessarily imply parameter identity: It can be enforced by minimizing the distance between parameters (Duong et al. 2015a) or between latent representations of parallel sentences (Niehues et al. 2011; Zhou et al. 2015) in separate language-specific models.
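To make parameter sharing concrete, the sketch below (a toy illustration, not a reimplementation of any surveyed system) shows a PoS tagger in which the word embeddings and the recurrent encoder are tied across languages while the output layers remain language-specific; in the fully shared setting of Figure 2, the output layer would be tied as well. All dimensions and language codes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class SharedTagger(nn.Module):
    """Toy multilingual tagger: shared embeddings and encoder, private output layers."""
    def __init__(self, vocab_size, n_tags, languages, emb_dim=64, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)               # shared parameters
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True)   # shared parameters
        self.out = nn.ModuleDict({lang: nn.Linear(hid_dim, n_tags)   # private (per-language)
                                  for lang in languages})

    def forward(self, token_ids, lang):
        hidden, _ = self.encoder(self.embed(token_ids))
        return self.out[lang](hidden)                                # tag scores per token

model = SharedTagger(vocab_size=1000, n_tags=17, languages=["bam", "wbp"])
scores = model(torch.randint(0, 1000, (2, 5)), lang="bam")           # 2 sentences, 5 tokens
print(scores.shape)                                                  # torch.Size([2, 5, 17])
```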
Another common strategy in multilingual joint modeling is providing information about the properties of the language of the current text in the form of input language vectors (Guo et al. 2016). The intuition is that this helps tailor the joint model toward specific languages. These vectors can be learned end-to-end in neural language modeling tasks (Tsvetkov et al. 2016; Östling and Tiedemann 2017) or neural machine translation tasks (Ha, Niehues, and Waibel 2016; Johnson et al. 2017). Ammar et al. (2016) instead used language vectors as a prior for language identity or typological features.
In § 5.2, we discuss ways in which typological knowledge is used to balance private and shared neural network components and provide informative input language vectors. In § 6.3, we argue that language vectors do not need to be limited to features extracted from typological databases, but should also include automatically induced typological information (Malaviya, Neubig, and Littell 2017, see § 4.3).
3.3 Multilingual Representation Learning
The multilingual algorithms reviewed in § 3.1 and § 3.2 are facilitated by dense real-valued vector representations of words, known as multilingual word embeddings. These can be learned from corpora and provide pivotal lexical features to several downstream NLP applications. In multilingual word embeddings, similar words (regardless of the actual language) obtain similar representations. Various methods to generate multilingual word embeddings have been developed. We follow the classification proposed by Ruder (2018), and we refer the reader to Upadhyay et al. (2016) for an empirical comparison.
Monolingual mapping generates independent monolingual representations and subsequently learns a linear map between a source language and a target language based on a bilingual lexicon (Mikolov, Le, and Sutskever 2013) or in an unsupervised fashion through adversarial networks (Conneau et al. 2017). Alternatively, both spaces can be cast into a new, lower-dimensional space through canonical correlation analysis based on dictionaries (Ammar et al. 2016) or word alignments (Guo et al. 2015).
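The monolingual mapping approach can be summarized with a small numerical sketch: given embedding pairs from a seed bilingual lexicon, a linear map from the source space into the target space is estimated and then used for nearest-neighbor translation. The closed-form least-squares solution below is a simplification (Mikolov, Le, and Sutskever (2013) used gradient descent, and later work constrains the map to be orthogonal), and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_pairs = 50, 200
X = rng.normal(size=(n_pairs, dim))                         # source embeddings (seed lexicon)
true_map = rng.normal(size=(dim, dim))
Y = X @ true_map + 0.01 * rng.normal(size=(n_pairs, dim))   # corresponding target embeddings

W, *_ = np.linalg.lstsq(X, Y, rcond=None)                   # minimize ||XW - Y||^2

def translate(source_vec, target_matrix):
    """Map a source vector into the target space and return the nearest target index."""
    mapped = source_vec @ W
    sims = target_matrix @ mapped / (
        np.linalg.norm(target_matrix, axis=1) * np.linalg.norm(mapped) + 1e-9)
    return int(np.argmax(sims))

print(translate(X[0], Y))                                   # recovers index 0 on this toy data
```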
Pseudo-cross-lingual approaches merge words with contexts of other languages and generate representations based on this mixed corpus. Substitutions are based on Wiktionary (Xiao and Guo 2014) or machine translation (Gouws and Søgaard 2015; Duong et al. 2016). Moreover, the mixed corpus can be produced by randomly shuffling words between aligned documents in two languages (Vulić and Moens 2015).
Cross-lingual training approaches jointly learn embeddings from parallel corpora and enforce cross-lingual constraints. This involves minimizing the distance of the hidden sentence representations of the two languages (Hermann and Blunsom 2014) or decoding one from the other (Lauly, Boulanger, and Larochelle 2013), possibly adding a correlation term to the loss (Chandar et al. 2014).
Joint optimization typically involves learning distinct monolingual embeddings, while enforcing cross-lingual constraints. These can be based on alignment-based translations (Klementiev, Titov, and Bhattarai 2012), cross-lingual word contexts (Luong, Pham, and Manning 2015), the average representations of parallel sentences (Gouws, Bengio, and Corrado 2015), or images (Rotman, Vulić, and Reichart 2018).
In this section, we have briefly outlined the most widely used methods in multilingual NLP. Although they offer a solution to data scarcity, cross-lingual variation remains a challenge for transferring knowledge across languages or learning from several languages simultaneously. Typological information offers promising ways to address this problem. In particular, we have noted that it can support model transfer, parameter sharing, and input biasing through language vectors. In the next two sections, we elaborate on these solutions. In particular, we review the development of typological information and the specific features that are selected for various NLP tasks (§ 4). Afterward, we discuss ways in which these features are integrated in NLP algorithms, for which applications they have been harnessed, and whether they truly benefit system performance (§ 5).
4. Selection and Development of Typological Information
In this section we first present major publicly available typological databases and then discuss how typological information relevant to NLP models is selected, pre-processed, and encoded. Finally, we highlight some limitations of database documentation with respect to coverage and feature granularity, and discuss how missing and finer-grained features can be obtained automatically.
4.1 Hand-Crafted Documentation in Typological Databases
Typological databases are created manually by linguists. They contain taxonomies of typological features, their possible values, as well as the documentation of feature values for the world’s languages. Major typological databases, listed in Table 1, typically organize linguistic information in terms of universal features and language-specific values. For example, Figure 3 presents language-specific values for the feature number of grammatical genders for nouns on a world map. Note that each language is color-coded according to its value. Further examples for each database can be found in the rightmost column of Table 1.
| Name | Levels | Coverage | Feature Example |
|---|---|---|---|
| World Atlas of Language Structures (WALS) | Phonology, Morphosyntax, Lexical semantics | 2,676 languages; 192 attributes; 17% values covered | Order of Object and Verb. Amele: OV (713); Gbaya Kara: VO (705) |
| Atlas of Pidgin and Creole Language Structures (APiCS) | Phonology, Morphosyntax | 76 languages; 335 attributes | Tense–aspect systems. Ternate Chabacano: purely aspectual (10); Afrikaans: purely temporal (1) |
| URIEL Typological Compendium | Phonology, Morphosyntax, Lexical semantics | 8,070 languages; 284 attributes; ∼439,000 values | Case is prefix. Berber (Middle Atlas): yes (38); Hawaiian: no (993) |
| Syntactic Structures of the World’s Languages (SSWL) | Morphosyntax | 262 languages; 148 attributes; 45% values covered | Standard negation is suffix. Amharic: yes (21); Laal: no (170) |
| AUTOTYP | Morphosyntax | 825 languages; ∼1,000 attributes | Presence of clusivity. !Kung (Ju): false; Ik (Kuliak): true |
| Valency Patterns Leipzig (ValPaL) | Predicate–argument structures | 36 languages; 80 attributes; 1,156 values | to laugh. Mandinka: 1 > V; Sliammon: V.sbj[1] 1 |
| Lyon–Albuquerque Phonological Systems Database (LAPSyD) | Phonology | 422 languages; ∼70 attributes | ɗ and ʈ. Sindhi: yes (1); Chuvash: no (421) |
| PHOIBLE Online | Phonology | 2,155 languages; 2,160 attributes | m. Vietnamese: yes (2053); Pirahã: no (102) |
| StressTyp2 | Phonology | 699 languages; 927 attributes | Stress on first syllable. Koromfé: yes (183); Cubeo: no (516) |
| World Loanword Database (WOLD) | Lexical semantics | 41 languages; 24 attributes; ∼2,000 values | horse. Quechua: kaballu, borrowed (24); Sakha: sɨlgɨ, no evidence (18) |
| Intercontinental Dictionary Series (IDS) | Lexical semantics | 329 languages; 1,310 attributes | world. Russian: mir; Tocharian A: ārkiśosṣi |
| Automated Similarity Judgment Program (ASJP) | Lexical semantics | 7,221 languages; 40 attributes | I. Ainu Maoka: co7okay; Japanese: watashi |
Some databases store information pertaining to multiple levels of linguistic description. These include WALS (Dryer and Haspelmath 2013) and the Atlas of Pidgin and Creole Language Structures (APiCS) (Michaelis et al. 2013). Among all presently available databases, WALS has been the most widely used in NLP. In this resource, whose typological features are numbered from 1 to 144, features 1–19 deal with phonology, 20–29 with morphology, 30–57 with nominal categories, 58–64 with nominal syntax, 65–80 with verbal categories, 81–97 and 143–144 with word order, 98–121 with simple clauses, 122–128 with complex sentences, 129–138 with the lexicon, and 139–142 with other properties.
Other databases only cover features related to a specific level of linguistic description. For example, both Syntactic Structures of the World’s Languages (SSWL) (Collins and Kayne 2009) and AUTOTYP (Bickel et al. 2017) focus on syntax. SSWL features are manually crafted, whereas AUTOTYP features are derived automatically from primary linguistic data using scripts. The Valency Patterns Leipzig database (ValPaL) (Hartmann, Haspelmath, and Taylor 2013) provides verbs as attributes and predicate–argument structures as their values (including both valency and morphosyntactic constraints). For example, in both Mandinka and Sliammon, the verb to laugh has a valency of 1; in other words, it requires only one mandatory argument, the subject. In Mandinka the subject precedes the verb, but there is no agreement requirement; in Sliammon, on the other hand, the word order does not matter, but the verb is required to morphologically agree with the subject.
For phonology, the Phonetics Information Base and Lexicon (PHOIBLE) (Moran, McCloy, and Wright 2014) collates information on segments (binary phonetic features). In the Lyon–Albuquerque Phonological Systems Database (LAPSyD) (Maddieson et al. 2013), attributes are articulatory traits, syllabic structures, or tonal systems. Finally, StressTyp2 (Goedemans, Heinz, and der Hulst 2014) deals with stress and accent patterns. For instance, in Koromfé each word’s first syllable has to be stressed, but not in Cubeo.
Other databases document various aspects of semantics. The World Loanword Database (WOLD) (Haspelmath and Tadmor 2009) documents loanwords by identifying the donor languages and the source words. The Automated Similarity Judgment Program (ASJP) (Wichmann, Holman, and Brown 2016) and the Intercontinental Dictionary Series (IDS) (Key and Comrie 2015) indicate how a meaning is lexicalized across languages: For example, the concept of world is expressed as mir in Russian, and as ārkiśosṣi in Tocharian A.
Although typological databases store abundant information on many languages, they suffer from shortcomings that limit their usefulness. Perhaps the most significant shortcoming of such resources is their limited coverage. In fact, feature values are missing for most languages in most databases. Other shortcomings are related to feature granularity. In particular, most databases fail to account for feature value variation within each language: They report only the majority value rather than the full range of possible values and their corresponding frequencies. For example, the dominant order of adjective and noun in Italian is noun before adjective; however, the opposite order is also attested. The latter information is often missing from typological databases.
Further challenges are posed by restricted feature applicability and feature hierarchies. Firstly, some features apply, by definition, only to subsets of languages that share another feature value. For instance, WALS feature 113A documents “Symmetric and Asymmetric Standard Negation,” whereas WALS feature 114A documents “Subtypes of Asymmetric Standard Negation.” Although a special NA value is assigned to symmetric-negation languages in the latter, there are cases where languages without the prerequisite feature are simply omitted from the sample. Secondly, features can be partially redundant and subsume other features. For instance, WALS feature 81A “Order of Subject, Object and Verb” encodes the same information as WALS features 82A “Order of Subject and Verb” and 83A “Order of Object and Verb,” with the addition of the relative order of subject and object.
4.2 Feature Selection from Typological Databases
The databases presented above can serve as a rich source of typological information for NLP. In this section, we survey the feature sets that have been extracted from these databases in typologically informed NLP studies. In § 5.4, we review in which ways and to what degree of success these features have been integrated in machine learning algorithms.
Most NLP studies informed by typology incorporated only a subset of word order features from WALS (Dryer and Haspelmath 2013). Most of these studies focused on the task of syntactic dependency parsing, where word order provides crucial guidance (Naseem, Barzilay, and Globerson 2012), using the feature subsets shown in Figure 4. As depicted in the figure, these studies utilized quite similar word order features. The feature set first established by Naseem, Barzilay, and Globerson (2012) served as inspiration for subsequent works. The main differences among these sets result from the practice of discarding features that are not discriminative, that is, features whose values are identical for all the languages in the sample.
Another group of studies used more comprehensive feature sets. The feature set of Daiber, Stanojević, and Sima’an (2016) included not only WALS word order features but also nominal categories (e.g., “Conjunctions and Universal Quantifiers”) and nominal syntax (e.g., “Possessive Classification”). Berzak, Reichart, and Katz (2015) considered all features from WALS associated with morphosyntax and pruned out the redundant ones, resulting in a total of 119 features. Søgaard and Wulff (2012) utilized all the features in WALS with the exception of phonological features. Tsvetkov et al. (2016) selected 190 binarized phonological features from URIEL (Littell, Mortensen, and Levin 2016). These features encoded the presence of single segments, classes of segments, minimal contrasts in a language inventory, and the number of segments in a class. For instance, they record whether a language allows two sounds to differ only in voicing, such as /t/ and /d/.
Finally, a small number of experiments adopted the entire feature inventory of typological databases, without any sort of pre-selection. In particular, Agić (2017) and Ammar et al. (2016) extracted all the features in WALS, whereas Deri and Knight (2016) extracted all the features in URIEL. Schone and Jurafsky (2001) did not resort to basic typological features, but rather to “several hundred [implicational universals] applicable to syntax” drawn from the Universals Archive (Plank and Filimonova 1996).
Typological attributes extracted from typological databases are typically represented as feature vectors in which each dimension encodes a feature value. This feature representation is often binarized (Georgi, Xia, and Lewis 2010): For each possible value v of each database attribute a, a new feature is created, with value 1 if it corresponds to the actual value of a for a specific language and 0 otherwise. Note that this increases the number of features by a factor of |v_a|, the number of possible values of attribute a. Although binarization helps in harmonizing different features and different databases, it obscures the distinctions between different types of typological variables.
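The binarization step can be illustrated with a short sketch: every attested (attribute, value) pair becomes its own 0/1 dimension, and each language is encoded by setting the dimensions that match its documented values. The two attributes and their values below are real WALS categories, but the mapping code and the resulting vectors are purely illustrative.

```python
# An illustrative sketch of one-hot binarization of typological features.
languages = {
    "Italian": {"81A Order of Subject, Object and Verb": "SVO",
                "85A Order of Adposition and Noun": "Prepositions"},
    "Turkish": {"81A Order of Subject, Object and Verb": "SOV",
                "85A Order of Adposition and Noun": "Postpositions"},
}

# Every attested (attribute, value) pair defines one binary dimension.
feature_space = sorted({(attr, val)
                        for feats in languages.values()
                        for attr, val in feats.items()})

def binarize(feats):
    return [1 if feats.get(attr) == val else 0 for attr, val in feature_space]

for lang, feats in languages.items():
    print(lang, binarize(feats))
# Italian [0, 1, 0, 1]   (dimension order follows the sorted feature_space)
# Turkish [1, 0, 1, 0]
```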
To what extent do the limitations of typological databases mentioned in § 4.1 affect the feature sets surveyed in this section? The coverage is generally broad for the languages used in these experiments, as they tend to be well-documented. For instance, on average, 79.8% of the feature values are populated for the 14 languages appearing in Berzak, Reichart, and Katz (2015), as opposed to 17% for all the languages in WALS.
It is hard to assess at a large scale how informative a set of typological features is. However, such features can be meaningfully compared with genealogical information. Ideally, these two properties should not be completely equivalent (otherwise they would be redundant),5 but at the same time they should partly overlap (cognate languages inherit typological properties from common ancestors). In Figure 5, we show two feature sets appearing in Ammar et al. (2016), each depicted as a heatmap. Each row represents a language in the data; each cell is colored according to the feature value, ranging from 0 to 1. In particular, the feature set of Figure 5(a) is the subset of word order features listed in Figure 4, and Figure 5(b) is a large set of WALS features, where values are averaged by language genus to fill in missing values.
In order to compare the similarities of the typological feature vectors among languages, we clustered languages hierarchically based on such vectors.6 Intuitively, the more this hierarchy resembles their actual family tree, the more redundant the typological information is. This is the case for Figure 5(b), where the lowest-level clusters correspond exactly to a genus or family (top–down: Romance, Slavic, Germanic, Celtic, Uralic). Still, the language vectors belonging to the same cluster display some micro-variations in individual features. On the other hand, Figure 5(a) shows clusters differing from language genealogy: for instance, English and Czech are merged, although they belong to different genera (Germanic and Slavic). However, this feature set fails to account for fine-grained differences among related languages: For instance, French, Spanish, and Italian receive the same encoding.7
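The clustering underlying this comparison can be reproduced with a few lines of standard tooling, sketched below on invented binary typological vectors: languages are clustered agglomeratively from pairwise Hamming distances, and the leaf order of the resulting tree can then be inspected against the family tree. The vectors, language sample, and linkage choice are illustrative assumptions only.

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

langs = ["Italian", "Spanish", "French", "German", "English", "Czech"]
vectors = np.array([          # toy binary typological vectors, one row per language
    [1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 1],
    [0, 1, 1, 0, 1],
    [0, 1, 0, 1, 1],
])

distances = pdist(vectors, metric="hamming")   # pairwise feature disagreement
tree = linkage(distances, method="average")    # agglomerative (average-linkage) clustering
leaves = dendrogram(tree, labels=langs, no_plot=True)["ivl"]
print(leaves)                                  # leaf order of the induced hierarchy
```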
To sum up, this section’s survey on typological feature sets reveals that most experiments have taken into account a small number of databases and features therein. However, several studies did utilize a larger set of coherent features or full databases. Although well-documented languages do not suffer much from coverage issues, we showed how difficult it is to select typological features that are non-redundant with genealogy, fully discriminative, and informative. The next section addresses these problems, proposing automatic prediction as a solution.
4.3 Automatic Prediction of Typological Features
The partial coverage and coarse granularity of existing typological resources sparked a line of research on automatic acquisition of typological information. Missing feature values can be predicted based on: i) heuristics based on pre-existing morphosyntactic annotation (e.g., in treebanks) or annotation transferred from aligned texts (§ 4.3.1); ii) unsupervised propagation from other values in a database based on clustering or language similarity metrics (§ 4.3.2); iii) supervised learning with Bayesian models or artificial neural networks (§ 4.3.3); or iv) heuristics based on co-occurrence metrics, typically applied to multi-parallel texts (§ 4.3.4). These strategies are summarized in Table 2.
| Strategy | Author | Details | Requirements | Languages | Features |
|---|---|---|---|---|---|
| Morphosyntactic Annotation | Liu (2010) | Treebank count | Treebank | 20 | word order |
| | Lewis and Xia (2008) | IGT projection | IGT, source chunker | 97 | word and morpheme order, determiners |
| | Bender et al. (2013) | IGT projection | IGT, source chunker | 31 | word order and case alignment |
| | Östling (2015) | Treebank projection | Parallel text, source tagger and parser | 986 | word order |
| | Zhang et al. (2016) | PoS projection | Source tagger, seed dictionary | 6 | word order |
| Unsupervised Propagation | Teh, Daumé III, and Roy (2007) | Hierarchical typological cluster | WALS | 2,150 | whole |
| | Georgi, Xia, and Lewis (2010) | Majority value from k-means typological cluster | WALS | whole | whole |
| | Coke, King, and Radev (2016) | Majority value from genus | Genealogy and WALS | 325 | word order and passive |
| | Littell, Mortensen, and Levin (2016) | Family-, area-, and typology-based nearest neighbors | Genealogy and WALS | whole | whole |
| | Berzak, Reichart, and Katz (2014) | English as a Second Language–based nearest neighbors | ESL texts | 14 | whole |
| | Malaviya, Neubig, and Littell (2017) | Task-based language vector | NMT data set | 1,017 | whole |
| | Bjerva and Augenstein (2018) | Task-based language vector | PoS tag data set | 27,824 | phonology, morphology, syntax |
| Supervised Learning | Takamura, Nagata, and Kawasaki (2016) | Logistic regression | WALS | whole | whole |
| | Murawaki (2017) | Bayesian + feature and language interactions | Genealogy and WALS | 2,607 | whole |
| | Wang and Eisner (2017) | Feed-forward neural network | WALS, tagger, synthetic treebanks | 37 | word order |
| | Cotterell and Eisner (2017) | Determinantal point process with neural features | WALS | 200 | vowel inventory |
| | Daumé III and Campbell (2007) | Implicational universals | Genealogy and WALS | whole | whole |
| | Lu (2013) | Automatic discovery | Genealogy and WALS | 1,646 | word order |
| Cross-lingual Distribution | Wälchli and Cysouw (2012) | Sentence edit distance | Multi-parallel texts, pivot | 100 | motion verbs |
| | Asgari and Schütze (2017) | Pivot alignment | Multi-parallel texts, pivot | 1,163 | tense markers |
| | Roy et al. (2014) | Correlations in counts and entropy | None | 23 | adposition word order |
With the exception of Naseem, Barzilay, and Globerson (2012), who treated typological information as a latent variable, automatically acquired typological features have not been integrated into algorithms for NLP applications to date. However, they have several advantages over manually crafted features. Unsupervised propagation and supervised learning fill in missing values in databases, thereby extending their coverage. Moreover, heuristics based on morphosyntactic annotation and co-occurrence metrics extract additional information that is not recorded in typological databases. Further, they can account for the distribution of feature values within single languages, rather than just the majority value. Finally, they do not make use of discrete cross-lingual categories to compare languages; rather, language properties are reflected in continuous representations, which is in line with their gradient nature (see § 2).
4.3.1 Heuristics Based on Morphosyntactic Annotation.
Morphosyntactic feature values can be extracted via heuristics from morphologically and syntactically annotated texts. For example, word order features can be calculated by counting the average direction of dependency relations or constituency hierarchies (Liu 2010). Consider the tree of a sentence in Welsh from Bender et al. (2013) in Figure 6. The relative order of verb–subject and verb–object can be deduced from the position of the relevant nodes VBD, NNS, and NNO (highlighted).
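The counting heuristic can be sketched as follows: each object relation in a dependency-annotated corpus is classified as object-before-verb or verb-before-object according to token positions, and the majority direction yields the word order feature. The triple format and relation label loosely follow Universal Dependencies conventions, and the toy treebank below is invented.

```python
from collections import Counter

def object_verb_order(sentences, obj_rel="obj"):
    """Return the majority object-verb order over (index, head_index, relation) triples."""
    counts = Counter()
    for sent in sentences:
        for idx, head, rel in sent:
            if rel == obj_rel:
                counts["OV" if idx < head else "VO"] += 1
    return counts.most_common(1)[0][0], counts

toy_treebank = [
    [(1, 2, "nsubj"), (2, 0, "root"), (3, 2, "obj")],   # subject verb object
    [(1, 3, "nsubj"), (2, 3, "obj"), (3, 0, "root")],   # subject object verb
    [(1, 2, "nsubj"), (2, 0, "root"), (3, 2, "obj")],   # subject verb object
]
print(object_verb_order(toy_treebank))   # ('VO', Counter({'VO': 2, 'OV': 1}))
```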
Morphosyntactic annotation is often unavailable for resource-lean languages. In such cases, it can be projected from a source language to a target language through language transfer. For instance, Östling (2015) projects source morphosyntactic annotation directly to several languages through a multilingual word alignment. After the alignment and projection, word order features are calculated by the average direction of dependency relations. Similarly, Zhang et al. (2016) transfer PoS annotation with a model transfer technique relying on multilingual embeddings, created through monolingual mapping (see § 3.3). After the projection, they predict feature values with a multiclass support vector machine using PoS tag n-gram features.
Finally, typological information can be extracted from Interlinear Glossed Texts (IGT). Such collections of example sentences are collated by linguists and contain grammatical glosses with morphological information. These can guide alignment between the example sentence and its English translation. Lewis and Xia (2008) and Bender et al. (2013) project chunking information from English and train context free grammars on target languages. After collapsing identical rules, they arrange them by frequency and infer word order features.
4.3.2 Unsupervised Propagation.
Another line of research seeks to increase the coverage of typological databases by borrowing missing values from the known values in other languages. One approach is clustering languages according to some criterion and propagating the majority value within each cluster. Hierarchical clusters can be created either according to typological features (e.g., Teh, Daumé III, and Roy 2007) or based on language genus (Coke, King, and Radev 2016). Through extensive evaluation, Georgi, Xia, and Lewis (2010) demonstrate that typology-based clustering outperforms genealogical clustering for unsupervised propagation of typological features. Among the clustering techniques examined, k-means appears to be the most reliable, compared with k-medoids, the Unweighted Pair Group Method with Arithmetic mean, repeated bisection, and hierarchical methods with partitional clusters.
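The sketch below illustrates this kind of propagation in miniature: languages are clustered with k-means on their (invented) typological vectors, and a missing value for one language is filled in with the majority value among the documented languages in its cluster. The data, the sentinel value for missing entries, and the cluster count are all assumptions made for the example.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

known = np.array([              # toy typological vectors for four languages A-D
    [1, 0, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 1],
    [0, 1, 0, 0],
])
target_feature = np.array([0, 0, 1, -1])   # -1 marks the missing value (language D)

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(known)

def propagate(lang_idx):
    """Impute the missing value with the majority value of the language's cluster."""
    peers = [i for i in range(len(known))
             if clusters[i] == clusters[lang_idx] and target_feature[i] != -1]
    return Counter(target_feature[i] for i in peers).most_common(1)[0][0]

print(propagate(3))   # language D inherits the majority value within its cluster
```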
Language similarity measures can also rely on a distributed representation of each language. These language vectors are trained end-to-end as part of neural models for downstream tasks such as many-to-one Neural Machine Translation (NMT). In particular, language vectors can be obtained from artificial trainable tokens concatenated to every input sentence, similar to Johnson et al. (2017), or from the aggregated values of the hidden states of a neural encoder. Using these language representations, typological feature values are propagated using k nearest neighbors (Bjerva and Augenstein 2018) or predicted with logistic regression (Malaviya, Neubig, and Littell 2017).
Language vectors can be conceived as data-driven, continuous typological representations of a language, and as such provide an alternative to manually crafted typological representations. Similar to the analysis carried out in § 4.2, we can investigate how much language vectors align with genealogical information. Figure 7 compares continuous representations based on artificial tokens (Figure 7(a)) and encoder hidden states (Figure 7(b)) with vectors of discrete WALS features from URIEL (Figure 7(c)). All the representations are reduced to two dimensions with t-Distributed Stochastic Neighbor Embedding (t-SNE), and color-coded based on their language family.
As the plots demonstrate, the information encoded in WALS vectors is akin to genealogical information, partly because of biases introduced by family-based propagation of missing values (Littell, Mortensen, and Levin 2016) (see § 4.3.2). On the other hand, artificial tokens and encoder hidden states cannot be reduced to genealogical clusters. Yet, their ability to predict missing values is not inferior to WALS features (as detailed in § 4.3.5). This implies that discrete and continuous representations capture different aspects of cross-lingual variation, while both being informative. For this reason, they are possibly complementary and could be combined in the future.
4.3.3 Supervised Learning.
As an alternative to unsupervised propagation, one can learn an explicit model for predicting feature values through supervised classification. For instance, Takamura, Nagata, and Kawasaki (2016) use logistic regression with WALS features and evaluate this model in a cross-validation setting where one language is held out in each fold. Wang and Eisner (2017) provide supervision to a feed-forward neural network with windows of PoS tags from natural and synthetic corpora.
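A minimal version of this supervised setting, in the spirit of the cross-validation protocol of Takamura, Nagata, and Kawasaki (2016), is sketched below: in each fold one language is held out, a logistic regression classifier is trained on the remaining languages' feature vectors, and the held-out language's value for a target feature is predicted. The feature matrix and the binary target feature are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

features = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 0],    # toy typological features
                     [0, 1, 0], [1, 0, 0], [0, 1, 1]])    # for six languages
target = np.array([1, 1, 0, 0, 1, 0])                     # e.g., "has prepositions"

correct = 0
for held_out in range(len(target)):                       # leave-one-language-out
    train = [i for i in range(len(target)) if i != held_out]
    clf = LogisticRegression().fit(features[train], target[train])
    correct += int(clf.predict(features[[held_out]])[0] == target[held_out])

print(f"leave-one-language-out accuracy: {correct / len(target):.2f}")
```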
Supervised learning of typology can also be guided by non-typological information (see § 2). Within the Bayesian framework, Murawaki (2017) exploits not only typological but also genealogical and areal dependencies among languages to represent each language as a binary latent parameter vector through a series of autologistic models. Cotterell and Eisner (2017, 2018) develop a point-process generative model of vowel inventories (represented as either IPA symbols or acoustic formants) based on some universal cognitive principles: dispersion (phonemes are as spread out as possible in the acoustic space) and focalization (some positions in the acoustic space are preferred due to the similarity of the main formants).
An alternative approach to supervised prediction of typology is based on learning implicational universals of the kind pioneered by Greenberg (1963) with probabilistic models from existing typological databases. Using such universals, features can be deduced by modus ponens. For instance, once it has been established that the presence of “High consonant/vowel ratio” and “No front-rounded vowels” implies “No tones,” the latter feature value can be deduced whenever the premises are known. Daumé III and Campbell (2007) propose a Bayesian model for learning typological universals that predicts implications between features based on the intuition that their likelihood does not equal their prior probability, but rather is constrained by other features. Lu (2013) casts this problem as knowledge discovery, where language features are encoded in a directed acyclic graph. The strength of implicational universals is represented as weights associated with the edges of this graph.
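The deduction step itself is simple forward chaining, as the toy sketch below shows: given a set of (learned or hand-specified) implicational universals and the documented feature values of a language, new values are added whenever all premises of a universal hold. The rule mirrors the tones example in the text; the representation of features as (attribute, value) pairs is an assumption of the sketch.

```python
# Toy forward chaining (modus ponens) over implicational universals.
universals = [
    # (premises, conclusion): if all premises hold for a language, so does the conclusion.
    ({("consonant_vowel_ratio", "high"), ("front_rounded_vowels", "no")},
     ("tones", "no")),
]

known = {("consonant_vowel_ratio", "high"), ("front_rounded_vowels", "no")}

changed = True
while changed:                       # apply rules until no new facts are derived
    changed = False
    for premises, conclusion in universals:
        if premises <= known and conclusion not in known:
            known.add(conclusion)
            changed = True

print(("tones", "no") in known)      # True: deduced rather than documented
```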
4.3.4 Heuristics Based on Cross-Lingual Distributional Information.
Typological features can also emerge in a data-driven fashion, based on distributional information from multi-parallel texts. Wälchli and Cysouw (2012) create a matrix where each row is a parallel sentence, each column is a language, and cell values are lemmas of motion verbs occurring in those sentences. This matrix can be transformed to a (Hamming) distance matrix between sentence pairs, and reduced to lower dimensionality via multidimensional scaling. This provides a continuous map of lexical semantics that is language-specific, but motivated by categories that emerge across languages. For instance, Figure 8 shows the first two dimensions of the multidimensional scaled similarity matrix in Mapudungun, where the first dimension can be interpreted as reflecting the direction of motion.
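A compressed version of this pipeline is sketched below: each parallel sentence is represented by the verb lemma it contains in each language, pairwise Hamming distances between sentences are computed, and multidimensional scaling projects the sentences onto a continuous two-dimensional map. The lemma matrix is invented, and sklearn's metric MDS stands in for the multidimensional scaling used by Wälchli and Cysouw (2012).

```python
import numpy as np
from sklearn.manifold import MDS

lemmas = np.array([                     # rows: parallel sentences; columns: languages
    ["come", "venir", "venire"],
    ["come", "venir", "arrivare"],
    ["go",   "ir",    "andare"],
    ["go",   "salir", "uscire"],
])

n = len(lemmas)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dist[i, j] = np.mean(lemmas[i] != lemmas[j])   # Hamming distance between sentences

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(coords.round(2))                  # 2-D coordinates of each sentence on the semantic map
```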
Asgari and Schütze (2017) devise a procedure to obtain markers of grammatical features across languages. Initially, based on linguistic expertise, they manually select a language containing an unambiguous and overt marker for a specific typological feature; this marker is called the head pivot. For instance, ti in Seychellois Creole (French Creole) is the head pivot for past-tense marking. The pivot is then connected to equivalent markers in other languages through alignment-based χ2 tests and n-gram counts over a multi-parallel corpus.
Finally, typological features can be derived from raw texts in a completely unsupervised fashion, without multi-parallel texts. Roy et al. (2014) use heuristics to predict the order of adpositions and nouns. Adpositions are identified as the most frequent words. Afterward, the position of the noun is established according to whether selectional restrictions appear in the right or the left context of the adposition, as measured by count-based and entropy-based metrics.
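A minimal sketch of this kind of heuristic is given below, under our own simplifying assumptions: a toy corpus, the three most frequent words as candidate adpositions, and the entropy of neighboring words as a proxy for selectional restrictions (on such a small corpus, candidates other than true adpositions are inevitably noise):

```python
"""Minimal sketch of an adposition-noun order heuristic in the spirit of
Roy et al. (2014); corpus, candidate set size, and entropy proxy are assumptions."""
from collections import Counter
from math import log2

corpus = [
    "the cat sat on the mat".split(),
    "a bird flew over the house".split(),
    "the dog slept on the chair".split(),
]

def entropy(counter):
    total = sum(counter.values())
    return -sum((c / total) * log2(c / total) for c in counter.values())

unigrams = Counter(w for sent in corpus for w in sent)
candidates = [w for w, _ in unigrams.most_common(3)]  # most frequent words ~ candidate adpositions

for adp in candidates:
    left, right = Counter(), Counter()
    for sent in corpus:
        for i, w in enumerate(sent):
            if w == adp:
                if i > 0:
                    left[sent[i - 1]] += 1
                if i < len(sent) - 1:
                    right[sent[i + 1]] += 1
    if left and right:
        # The less entropic (more restricted) side is taken to host the selected noun.
        order = "prepositional" if entropy(right) < entropy(left) else "postpositional"
        print(adp, order)
```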
4.3.5 Comparison of the Strategies.
Establishing which of the above-mentioned strategies is optimal in terms of prediction accuracy is not straightforward. In Figure 9, we collect the scores reported by several of the surveyed papers, provided that they concern specific features or the whole WALS data set (as opposed to subsets) and are numerical (as opposed to graphical plots). However, these results are not strictly comparable, because language samples and/or the split of data partitions may differ. The lack of standardization in this respect allows us to draw conclusions only about the difficulty of predicting each feature relative to a specific strategy: For instance, the correct value of passive voice is harder to predict than word order, as claimed by Bender et al. (2013) and seen in Figure 9.
However, some papers carry out comparisons of the different strategies within the same experimental setting. According to Coke, King, and Radev (2016), propagation from the genus majority value outperforms logistic regression among word-order typological features. On the other hand, Georgi, Xia, and Lewis (2010) argue that typology-based clusters are to be preferred in general. This apparent contradiction stems from the nature of the target features: Genealogy excels in word order features because of their diachronic stability. As they tend to be preserved over time, they are often shared by all members of a family. In turn, majority value propagation is surpassed by supervised classification when evaluated on the entire WALS feature set (Takamura, Nagata, and Kawasaki 2016).
In general, there appears to be no “one-size-fits-all” algorithm. For instance, Coke, King, and Radev (2016) outperform Wang and Eisner (2017) for object–verb order (83A) but are inferior to it for adposition–noun order (85A). In fact, each strategy is suited to different features and requires different resources. Based on Figure 9, the extraction of information from morphosyntactic annotation is well suited for word order features, whereas distributional heuristics from multi-parallel texts are more informative about lexicalization patterns. Unsupervised propagation and supervised learning, on the other hand, are general-purpose strategies. Moreover, the first two presuppose annotated and/or parallel texts, whereas the latter two need pre-existing database documentation. A strategy may thus be preferred according to which resources are available for a specific language.
Many strategies share a common weakness, however: they incorrectly assume that the languages in a sample are independent and identically distributed (Lu 2013; Cotterell and Eisner 2017). This is not the case, due to the interactions of family, area, and implicational universals. The solutions adopted to mitigate this weakness vary: Wang and Eisner (2017) balance the data distribution with synthetic examples, whereas Takamura, Nagata, and Kawasaki (2016) model family and area interactions explicitly. However, according to Murawaki (2017), these interactions have different degrees of impact on typological features. In particular, inter-feature dependencies are more influential than inter-language dependencies, and horizontal diffusibility (borrowing from neighbors) is more prominent than vertical stability (inheriting from ancestors).
Finally, a potential direction for future investigation emerges from this section’s survey. In addition to completing missing values, automatic prediction can also account for the variation internal to each language. Some strategies go even further, and “open the way for a typology where generalizations can be made without there being any need to reduce the attested diversity of categorization patterns to discrete types” (Wälchli and Cysouw 2012). In fact, language vectors (Malaviya, Neubig, and Littell 2017; Bjerva and Augenstein 2018) and distributional information from multi-parallel texts (Asgari and Schütze 2017) are promising insofar as they capture latent properties of languages in a bottom–up fashion, preserving their gradient nature. This offers an alternative to hand-crafted database features: In § 6.3 we make a case for integrating continuous, data-driven typological representations into NLP algorithms.
5. Uses of Typological Information in NLP Models
The typological features developed as described in § 4 play an important role in NLP algorithms, where they are used in three main ways. First, they can be manually converted into rules for expert systems (§ 5.1); second, they can be integrated into algorithms as constraints that inject prior knowledge or tie together specific parameters across languages (§ 5.2); and, finally, they can guide data selection and synthesis (§ 5.3). All of these approaches are summarized in Table 3 and described in detail in the following sections, with a particular focus on the second one.
Table 3

| Approach | Author | Details | Languages / Families | Task |
| --- | --- | --- | --- | --- |
| Rules | Bender (2016) | Grammar generation | 12 / 8 | semantic parsing |
| Feature engineering | Naseem, Barzilay, and Globerson (2012) | Generative | 17 / 10 | syntactic parsing |
| | Täckström, McDonald, and Nivre (2013) | Discriminative graph-based | 16 / 7 | syntactic parsing |
| | Zhang and Barzilay (2015) | Discriminative tensor-based | 10 / 4 | syntactic parsing |
| | Daiber, Stanojević, and Sima’an (2016) | One-to-many MLP | 22 / 5 | reordering for machine translation |
| | Ammar et al. (2016) | Multi-lingual transition-based | 7 / 1 | syntactic parsing |
| | Tsvetkov et al. (2016) | Phone-based polyglot language model | 9 / 4 | identification of lexical borrowings and speech synthesis |
| | Schone and Jurafsky (2001) | Design of Bayesian network | 1 / 1 | word cluster labeling |
| Data manipulation | Deri and Knight (2016) | Typology-based selection | 227 / — | grapheme to phoneme |
| | Agić (2017) | PoS divergence metric | 26 / 5 | syntactic parsing |
| | Søgaard and Wulff (2012) | Typology-based weighting | 12 / 1 | syntactic parsing |
| | Wang and Eisner (2017) | Word-order-based tree synthesis | 17 / 7 | syntactic parsing |
| | Ponti et al. (2018a) | Construction-based tree preprocessing | 6 / 3 | machine translation, sentence similarity |
5.1 Rule-Based Systems
An interesting example of a rule-based system in our context is the Grammar Matrix kit, presented by Bender (2016), where rule-based grammars can be generated from typological features. These grammars are designed within the framework of Minimal Recursion Semantics (Copestake et al. 2005) and can parse a natural language input string into a semantic logical form.
The Grammar Matrix consists of a universal core grammar and language-specific libraries for phenomena where typological variation is attested. For instance, the module for coordination typology expects the specification of the kind, pattern, and position of a grammatical marking, as well as the phrase types it covers; the Ono language (Trans–New Guinea), for example, marks coordination in noun phrases with the lexical, monosyndetic, pre-nominal marker so. A collection of pre-defined grammars is available through the Language CoLLAGE initiative (Bender 2014).
5.2 Feature Engineering and Constraints
The most common usage of typological features in NLP is in feature engineering and constraint design for machine learning algorithms. Two popular approaches we consider here are language transfer with selective sharing, where the parameters of languages with similar typological features are tied together (§ 5.2.1), and joint multilingual learning, where typological information is used in order to bias models to reflect the properties of specific languages (see § 5.2.2).
5.2.1 Selective Sharing.
This framework was introduced by Naseem, Barzilay, and Globerson (2012) and was subsequently adopted by Täckström, McDonald, and Nivre (2013) and Zhang and Barzilay (2015). It aims at parsing sentences in a language transfer setting (see § 3.1) where there are multiple source languages and a single unobserved target language. It assumes that head–modifier relations between PoS pairs are universal, but the order of parts of speech within a sentence is language-specific. For instance, adjectives always modify nouns, but in Igbo (Niger–Congo) they linearly precede nouns, and in Nihali (isolate) they follow nouns. Leveraging this intuition, selective sharing models learn dependency relations from all source languages, while ordering is learned from typologically related languages only.
Täckström, McDonald, and Nivre (2013) proposed a discriminative version of the model, in order to address what they regard as limitations of the original generative variant. In particular, they dispose of its strong independence assumptions (e.g., between the choice and the ordering of modifiers) and of invalid feature combinations: For instance, the WALS feature “Order of Subject, Object and Verb” (81A) should be taken into account only when the head under consideration is a verb and the dependent is a noun, but in the generative model this feature was fed to g() regardless of the head–dependent pair. The method of Täckström, McDonald, and Nivre is a delexicalized, first-order, graph-based parser based on a carefully selected feature set. From the set proposed by McDonald, Crammer, and Pereira (2005), they keep only (universal) features describing selectional preferences and dependency length. Moreover, they introduce (language-specific) features for the directionality of dependents, based on combinations of the PoS tags of heads and modifiers with the corresponding WALS values.
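The following sketch illustrates, with hypothetical WALS value encodings and feature templates of our own design (not the authors' actual feature set), how such language-specific directionality features might be combined with universal selectional features for a single arc of a delexicalized graph-based parser:

```python
"""Minimal sketch of WALS-conditioned arc features; templates and values are hypothetical."""

# Hypothetical mapping from a (head PoS, dependent PoS) pair to the WALS feature
# assumed to govern its ordering.
RELEVANT_WALS = {
    ("VERB", "NOUN"): "81A",  # Order of Subject, Object and Verb
    ("NOUN", "ADP"):  "85A",  # Order of Adposition and Noun Phrase
}

def direction_features(head_pos, dep_pos, dep_is_left, wals_values):
    """Return arc features for a delexicalized, first-order, graph-based parser."""
    direction = "L" if dep_is_left else "R"
    feats = [f"univ:{head_pos}->{dep_pos}"]          # universal selectional feature
    wals_id = RELEVANT_WALS.get((head_pos, dep_pos))
    if wals_id is not None and wals_id in wals_values:
        # Language-specific feature, fired only for PoS pairs the WALS feature licenses.
        feats.append(f"wals:{wals_id}={wals_values[wals_id]}|{head_pos}->{dep_pos}|{direction}")
    return feats

# Example: a language with SVO order (toy WALS values) and a right-attached nominal object.
print(direction_features("VERB", "NOUN", dep_is_left=False, wals_values={"81A": "SVO"}))
```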
The different approaches to selective sharing presented here explicitly deal with cases where the target language’s typological features do not match those of any source language, which may lead learning astray. Naseem, Barzilay, and Globerson (2012) propose a variant of their algorithm in which the typological features are not observed (in WALS) but treated as latent variables, and model parameters are learned in an unsupervised fashion with the Expectation Maximization algorithm (Dempster, Laird, and Rubin 1977). Täckström, McDonald, and Nivre (2013) tackle the same problem from the side of ambiguous learning: The discriminative model on the target language is trained on sets of automatically predicted, ambiguous labels. Finally, Zhang and Barzilay (2015) employ semi-supervised techniques, where only a handful of annotated examples from the target language are available.
5.2.2 Multi-lingual Biasing.
Some papers leverage typological features to gear the shared parameters of a joint multilingual model toward the properties of a specific language. Daiber, Stanojević, and Sima’an (2016) develop a reordering algorithm that estimates the permutation probabilities of aligned word pairs in multi-lingual parallel texts. The best sequence of permutations is inferred via k-best graph search in a finite state automaton, producing a lattice. This algorithm, which receives lexical, morphological, and syntactic features of the source word pairs and typological features of the target language as input, was shown to benefit a downstream machine translation task.
Moreover, typological features (in the form of implicational universals) can guide the design of Bayesian networks. Schone and Jurafsky (2001) assign part-of-speech labels to word clusters acquired in an unsupervised fashion. The underlying network is acyclic and directed, and is converted to a join-tree network to handle multiple parents (Jensen 1996). For instance, the sub-graph for the ordering of numerals and nouns is also intertwined with properties of adjectives and adpositions. The final objective maximizes the probability of a tag $T_i$ and a feature set $\Phi_i$ given the implicational universals $U$, as $\arg\max_{T} P(T_i, \Phi_i \mid U)$.
5.3 Data Selection, Synthesis, and Preprocessing
Another way in which typological features are used in NLP is to guide data selection. This procedure is crucial for (1) language transfer methods, as it guides the choice of the most suitable source languages and examples; and (2) multilingual joint models, in order to weigh the contribution of each language and example. The selection is typically carried out through general language similarity metrics. For instance, Deri and Knight (2016) base their selection on the URIEL language typology database, considering information about genealogical, geographic, syntactic, and phonetic properties. This facilitates language transfer of grapheme-to-phoneme models, by guiding the choice of source languages and aligning phoneme inventories.
Metrics for source selection can also be extracted in a data-driven fashion, without explicit reference to structured taxonomies. For instance, Rosa and Zabokrtsky (2015) estimate the Kullback–Leibler divergence between PoS trigram distributions for delexicalized parser transfer. In order to approximate the divergence in syntactic structures between languages, Ponti et al. (2018a) utilize the Jaccard distance between morphological feature sets and the tree edit distance of delexicalized dependency parses of translationally equivalent sentences.
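As an illustration of such data-driven metrics, the sketch below computes a smoothed Kullback–Leibler divergence between PoS trigram distributions in the spirit of Rosa and Zabokrtsky (2015); the corpora and the smoothing constant are toy assumptions:

```python
"""Minimal sketch of a PoS-trigram divergence metric for source selection."""
from collections import Counter
from math import log

def trigram_dist(tag_sequences):
    """Relative frequencies of PoS trigrams in a corpus of tag sequences."""
    counts = Counter(tuple(seq[i:i + 3]) for seq in tag_sequences for i in range(len(seq) - 2))
    total = sum(counts.values())
    return {tri: c / total for tri, c in counts.items()}

def kl_divergence(p, q, eps=1e-6):
    # Floor unseen trigrams at eps so the divergence stays finite (toy smoothing).
    return sum(pv * log(pv / q.get(tri, eps)) for tri, pv in p.items())

target = [["DET", "NOUN", "VERB", "DET", "NOUN"]]
source = [["DET", "NOUN", "VERB", "ADP", "DET", "NOUN"]]
print(kl_divergence(trigram_dist(target), trigram_dist(source)))
```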
A priori and bottom–up approaches can also be combined. For delexicalized parser transfer, Agić (2017) relies on a weighted sum of distances based on (1) the PoS divergence defined by Rosa and Zabokrtsky (2015); (2) character-based identity prediction of the target language; and (3) the Hamming distance from the typological vector of the target language. These signals have different weaknesses: Language identity (and consequently typology) fails to abstract away from language scripts, whereas the accuracy of PoS-based metrics deteriorates quickly when data are scarce.
Source language selection is a special case of source language weighting where the weights are one-hot vectors. However, weights can also be gradient and consist of real numbers. Søgaard and Wulff (2012) adapt delexicalized parsers by weighting every training instance with the inverse of the Hamming distance between the typological (or genealogical) features of the source and target languages. An equivalent bottom–up approach is developed by Søgaard (2011), who weights source language sentences based on the perplexity of their coarse PoS tag sequences under a sequential model trained on the target language.
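A minimal sketch of typology-based instance weighting in the spirit of Søgaard and Wulff (2012) is given below; the binary feature vectors and the smoothing term that avoids division by zero are our own assumptions:

```python
"""Minimal sketch of inverse-Hamming instance weighting (toy typological vectors)."""

def hamming(a, b):
    """Number of positions in which two binary typological vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def instance_weight(source_vec, target_vec, eps=1.0):
    # eps avoids division by zero when source and target vectors coincide (assumption).
    return 1.0 / (hamming(source_vec, target_vec) + eps)

typology = {
    "deu":    [1, 0, 1, 1],
    "fra":    [0, 0, 1, 0],
    "target": [1, 0, 1, 0],
}
for lang in ("deu", "fra"):
    print(lang, instance_weight(typology[lang], typology["target"]))
```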
Alternatively, the lack of target annotated data can be alleviated by synthesizing new examples, thus boosting the variety and amount of the source data. For instance, the Galactic Dependency Treebanks stem from real trees whose nodes have been permuted probabilistically, according to the word orders of nouns and verbs in other languages (Wang and Eisner 2016). Synthetic trees improve the performance of model transfer for parsing when the source is chosen in a supervised way (performance on target development data) and in an unsupervised way (coverage of target PoS sequences).
Rather than generating new synthetic data, Ponti et al. (2018a) leverage typological features to pre-process treebanks in order to reduce cross-lingual variation in language transfer tasks. In particular, they adapt source trees to the typology of the target language with respect to several constructions in a rule-based fashion. For instance, relative clauses with an indefinite antecedent drop the relative pronoun in Arabic (Afro–Asiatic), whereas the pronoun is mandatory in Portuguese (Indo–European); hence, the pronoun has to be added when adapting Arabic trees toward Portuguese, or deleted in the other direction. Feeding the pre-processed syntactic trees to lexicalized syntax-based neural models, such as feature-based recurrent encoders (Sennrich and Haddow 2016) or TreeLSTMs (Tai, Socher, and Manning 2015), achieves state-of-the-art results in Neural Machine Translation and cross-lingual sentence similarity classification.
5.4 Comparison
In light of the performance of the described methods, to what extent can typological features benefit downstream NLP tasks and applications? To answer this key question, consider the performance scores of each model reported in Figure 10. Each model has been evaluated in its original paper in one (or more) of three main settings, with otherwise identical architecture and hyper-parameters: with gold database features (Typology), with latently inferred typological features (Data-driven), or with neither (Baseline).
It is evident that typology-enriched models consistently outperform baselines across several NLP tasks. Indeed, the scores are higher for metrics that increase (Unlabeled Attachment Score, F1 Score, and BLEU) and lower for metrics that decrease (Word Error Rate, Mean Average Error, and Perplexity) with better predictions. Nevertheless, the improvements tend to be moderate, and only a small number of experiments support them with statistical significance tests. In general, these gains appear to fall short of the potential usefulness of typology: In § 6 we analyze the possible reasons for this.
Some of the experiments we have surveyed investigate the effect of substituting typological features with features related to Genealogy and Language Identity (e.g., a one-hot encoding of languages). Based on the results in Figure 10, it is unclear whether typology should be preferred, as it is sometimes rivaled by other types of features: Typology excels according to Tsvetkov et al. (2016), genealogy according to Søgaard and Wulff (2012) and Täckström, McDonald, and Nivre (2013), and language identity according to Ammar et al. (2016). However, drawing conclusions from the last experiment seems incautious: In § 4.2, we argued that their selection of features (presented in Figure 5) is debatable because of low diversification or noise. Moreover, it should be emphasized that one-hot language encoding is limited to the joint multilingual learning setting: Because it conveys no information about unseen languages, it is of no avail in language transfer.
Finally, let us consider the effectiveness of the methods described in § 5.2 with respect to incorporating typological features in NLP models. In the case of selective sharing, the tensor-based discriminative model (Zhang and Barzilay 2015) outperforms the graph-based discriminative model (Täckström, McDonald, and Nivre 2013), which in turn surpasses the generative model (Naseem, Barzilay, and Globerson 2012). With regard to biasing multilingual models, there is a clear tendency toward letting typological features interact not merely with the input representation, but also with deeper levels of abstraction such as hidden layers.
Overall, this comparison supports the claim that typology can potentially aid in designing the architecture of algorithms, engineering their features, and selecting and pre-processing their data. Nonetheless, this discussion also revealed that many challenges lie ahead for each of these goals to be accomplished fully. We discuss them in the next section.
6. Future Research Avenues
In § 5 we surveyed the current uses of typological information in NLP. In this section we discuss potential future research avenues that may result in a closer and more effective integration of linguistic typology and multilingual NLP. In particular, we discuss: (1) the extension of existing methods to new tasks, possibly exploiting typological resources that have been neglected thus far (§ 6.1); (2) new methods for injecting typological information into NLP models as soft constraints or auxiliary objectives (§ 6.2); and (3) new ways to acquire and represent typological information that reflect the gradient and contextual nature of cross-lingual variation (§ 6.3).
6.1 Extending the Usage to New Tasks and Features
The trends observed in § 5 reveal that typology is integrated into NLP models mostly in the context of morphosyntactic tasks, and particularly syntactic parsing. Some exceptions include other levels of linguistic structure, such as phonology (Tsvetkov et al. 2016; Deri and Knight 2016) and semantics (Bender 2016; Ponti et al. 2018a). As a consequence, the set of selected typological features is mostly limited to a handful of word-order features from a single database, WALS. Nonetheless, the array of tasks that pertain to polyglot NLP is broad, and other typological data sets that have thus far been neglected may be relevant for them.
For example, typological frame semantics might benefit semantic role labeling, as it specifies the valency patterns of predicates across languages, including the number of arguments, their morphological markers, and their order. This information can be cast in the form of priors for unsupervised syntax-based Bayesian models (Titov and Klementiev 2012), guidance for alignments in annotation projection (Padó and Lapata 2009; Van der Plas, Merlo, and Henderson 2011), or regularizers for model transfer in order to tailor the source model to the grammar of the target language (Kozhevnikov and Titov 2013). Cross-lingual information about frame semantics can be extracted, for example, from the Valency Patterns Leipzig database (ValPaL).
Typological information regarding lexical semantics patterns can further assist various NLP tasks by providing information about translationally equivalent words across languages. Such information is provided in databases such as the World Loanword Database (WOLD), the Intercontinental Dictionary Series (IDS), and the Automated Similarity Judgment Program (ASJP). One example task is word sense disambiguation, as senses can be propagated from multilingual word graphs (Silberer and Ponzetto 2010) by bootstrapping from a few pivot pairs (Khapra et al. 2011), by imposing constraints in sentence alignments and harvesting bag-of-words features from these (Lefever, Hoste, and De Cock 2011), or by providing seeds for multilingual Word-Embedding-based lexicalized model transfer (Zennaki, Semmar, and Besacier 2016).
Another task where lexical semantics is crucial is sentiment analysis, for similar reasons: Bilingual lexicons constrain word alignments for annotation projection (Almeida et al. 2015) and provide pivots for shared multilingual representations in model transfer (Fernández, Esuli, and Sebastiani 2015; Ziser and Reichart 2018). Moreover, sentiment analysis can leverage morphosyntactic typological information about constructions that alter polarity, such as negation (Ponti, Vulić, and Korhonen 2017).
Finally, morphological information has been shown to help interpret the intrinsic difficulty of texts for language modeling and neural machine translation, in both supervised (Johnson et al. 2017) and unsupervised (Artetxe et al. 2018) set-ups. In fact, the degree of fusion between roots and inflectional/derivational morphemes affects the type/token ratio of texts, and consequently their rate of infrequent words. Moreover, the ambiguity of the mapping between the form and meaning of morphemes determines the usefulness of injecting character-level information (Gerz et al. 2018a, 2018b). This variation has to be taken into account in both language transfer and multilingual joint learning.
As a final note, we stress that the addition of new features does not concern just future work, but also the existing typology-savvy methods, which can widen their scope. For instance, the parsing experiments grounded on selective sharing (§ 5.2) could also take into consideration WALS features about Nominal Categories, Nominal Syntax, Verbal Categories, Simple Clauses, and Complex Sentences, as well as features from other databases such as SSWL, APiCS, and AUTOTYP. Likewise, models for phonological tasks (Tsvetkov et al. 2016; Deri and Knight 2016) could also extract features from typological databases such as LAPSyD and StressTyp2.
6.2 Injecting Typological Information into Machine Learning Algorithms
In § 5, we discussed the potential of typological information to provide guidance to NLP methods, and surveyed approaches such as network design in Bayesian models (Schone and Jurafsky 2001), selective sharing (Naseem, Barzilay, and Globerson 2012, inter alia), and biasing of multilingual joint models (Ammar et al. 2016, inter alia). However, many other frameworks (including those already mentioned in § 3) have been developed independently in order to allow the integration of expert and domain knowledge into traditional feature-based machine learning algorithms and neural networks. In this section we survey these frameworks and discuss their applicability to the integration of typological information into NLP models.
Encoding cross-language variation and preferences into a machine learning model requires a mechanism that can bias the learning (i.e., training and parameter estimation) and inference (prediction) of the model toward some pre-defined knowledge. In practice, learning algorithms, both linear (e.g., the structured perceptron [Collins 2002], MIRA [Crammer and Singer 2003], and the structured support vector machine [Taskar, Guestrin, and Koller 2004]) and non-linear (deep neural models), iterate between an inference step and a parameter update step with respect to a gold standard. The inference step is the natural place where external knowledge can be encoded through constraints: It biases the prediction of the model to agree with the external knowledge, which, in turn, affects both the training process and the final prediction of the model at test time.
Information about cross-lingual variation, particularly when extracted empirically (see § 4), reflects tendencies rather than strict rules. As a consequence, soft, rather than hard, constraints are a natural vehicle for encoding it. The goal of an inference algorithm is to predict the best output label according to the current state of the model parameters.8 For this purpose, the algorithm searches the space of possible output labels in order to find the best one, so efficiency plays a key role. Introducing soft constraints into an inference algorithm therefore poses an algorithmic challenge: How can the output of the model be biased to agree with the constraints while preserving the efficiency of the search procedure? In this article we do not answer this question directly, but rather survey a number of approaches that succeed in dealing with it.
Because linear models have been prominent in NLP research for a much longer time, it is not surprising that frameworks for the integration of soft constraints into these models are much more developed. The approaches proposed for this purpose include posterior regularization (PR) (Ganchev et al. 2010), generalized expectation (GE) (Mann and McCallum 2008), constraint-driven learning (CODL) (Chang, Ratinov, and Roth 2007), dual decomposition (DD) (Globerson and Jaakkola 2007; Komodakis, Paragios, and Tziritas 2011), and Bayesian modeling (Cohen 2016). These techniques use different types of knowledge encoding—for example, PR uses expectation constraints on the posterior parameter distribution, GE prefers parameter settings where the model’s distribution on unsupervised data matches a predefined target distribution, CODL enriches existing statistical models with Integer Linear Programming constraints, and in Bayesian modeling a prior distribution is defined on the model parameters.
PR has already been used for incorporating universal linguistic knowledge into an unsupervised parsing model (Naseem et al. 2010). In the future, it could be extended to typological knowledge, which is a good fit for soft constraints. As another option, Bayesian modeling sets prior probability distributions according to the relationships encoded in typological features (Schone and Jurafsky 2001). Finally, DD has been applied to multi-task learning, which paves the way for typological knowledge encoding through a multi-task architecture in which one of the tasks is the actual NLP application and the other is the data-driven prediction of typological features. In fact, a modification of this architecture has already been applied to minimally supervised learning and domain adaptation with soft (non-typological) constraints (Reichart and Barzilay 2012; Rush et al. 2012).
The same ideas could be exploited in deep learning algorithms. We have seen in § 3.2 that multilingual joint models combine both shared and language-dependent parameters in order to capture the universal properties and cross-lingual differences, respectively. In order to enforce this division of roles more efficiently, these models could be augmented with the auxiliary task of predicting typological features automatically. This auxiliary objective could update parameters of the language-specific component, or those of the shared component, in an adversarial fashion, similar to what Chen et al. (2018) implemented by predicting language identity.
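The following PyTorch sketch, under our own architectural assumptions (a toy LSTM tagger with hypothetical vocabulary and label sizes), illustrates this multi-task idea: a shared encoder feeds both the main task head (here, sequence tagging) and an auxiliary head that predicts typological feature values, so that typology shapes the shared parameters during joint training:

```python
"""Minimal sketch of an auxiliary typology-prediction objective (all sizes are toy)."""
import torch
import torch.nn as nn

class MultiTaskTagger(nn.Module):
    def __init__(self, vocab=1000, dim=64, n_tags=17, n_typ_values=6):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)  # shared component
        self.tag_head = nn.Linear(dim, n_tags)               # main task (e.g., PoS tagging)
        self.typ_head = nn.Linear(dim, n_typ_values)         # auxiliary typology prediction

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))
        # Token-level logits for the main task; pooled sentence vector for typology.
        return self.tag_head(h), self.typ_head(h.mean(dim=1))

model = MultiTaskTagger()
tokens = torch.randint(0, 1000, (2, 5))                      # toy batch of token ids
tag_logits, typ_logits = model(tokens)
loss = nn.functional.cross_entropy(tag_logits.reshape(-1, 17), torch.randint(0, 17, (10,))) \
     + 0.1 * nn.functional.cross_entropy(typ_logits, torch.randint(0, 6, (2,)))
loss.backward()  # gradients from both objectives flow into the shared encoder
```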
Recently, Hu et al. (2016a, 2016b) and Wang and Poon (2018) proposed frameworks that integrate deep neural models with manually specified or automatically induced constraints. Similar to CODL, the focus in Hu et al. (2016a) and Wang and Poon (2018) is on logical rules, while the ideas in Hu et al. (2016b) are related to PR. These frameworks provide a promising avenue for the integration of typological information and deep models.
A particular non-linear deep learning domain where knowledge integration is already prominent is multilingual representation learning (§ 3.3). In this domain, a number of works (Faruqui et al. 2015; Rothe and Schütze 2015; Mrkšić et al. 2016; Osborne, Narayan, and Cohen 2016) have proposed means through which external knowledge sourced from linguistic resources (such as WordNet, BabelNet, or lists of morphemes) can be encoded in word embeddings. Among the state-of-the-art specialization methods, attract-repel (Mrkšić et al. 2017; Vulić et al. 2017) pushes together or pulls apart vector pairs according to relational constraints, while preserving the relationships between words in the original space; the injected knowledge can also be propagated to unseen words or transferred to other languages (Ponti et al. 2018b). The success of these works suggests that a more extensive integration of external linguistic knowledge in general, and of typological knowledge in particular, is likely to play a key role in the future development of word representations.
6.3 A New Typology: Gradience and Context-Sensitivity
As shown in § 4.2, most of the typology-savvy algorithms thus far exploited features extracted from manually crafted databases. However, this approach is riddled with several shortcomings, which are reflected in the small performance improvements observed in § 5.4. Luckily, these shortcomings may potentially be averted through the use of methods that allow typological information to emerge from the data in a bottom-up fashion, rather than being predetermined. In what follows we advocate for such a data-driven approach, based on several considerations.
First, typological databases provide incomplete documentation of the cross-lingual variation, in terms of features and languages. Raw textual data, which is easily accessible for many languages and is cost-effective, may provide a valid alternative that can facilitate automatic learning of more complete knowledge. Second, database information is approximate, as it is restricted to the majority strategy within a language. However, in theory each language allows for multiple strategies in different contexts and with different frequencies, hence databases risk hindering models from learning less-likely but plausible patterns (Sproat 2016). Inferring typological information from text would enable a system to discover patterns within individual examples, including both the frequent and the infrequent ones. Thirdly, typological features in databases are discrete, utilizing predefined categories devised to make high-level generalizations across languages. However, several categories in natural language are gradient (see for instance the discussion on semantic categorization in § 2), hence they are better captured by continuous features. In addition to being psychologically motivated, this sort of gradient representation is also more compatible with machine learning algorithms and particularly with deep neural models that naturally operate with real-valued multi-dimensional word embeddings and hidden states.
To sum up, the automatic development of typological information and its possible integration into machine learning algorithms have the potential to solve an important bottleneck in polyglot NLP. Current manually curated databases consist of incomplete, approximate, and discrete features that are intended to reflect contextual and gradient information implicitly present in text. These features are fed to continuous, probabilistic, and contextual machine learning models—which do not form a natural fit for the typological features. Instead, we believe that modeling cross-lingual variation directly from textual data can yield typological information that is more suitable for machine learning.
Several techniques surveyed in § 4.3 are suited to serve this purpose. In particular, the extraction from morphosyntactic annotation (Liu 2010, inter alia) and alignments from multi-parallel texts (Asgari and Schütze 2017, inter alia) provide information about typological constructions at the level of individual examples. Moreover, language vectors (Malaviya, Neubig, and Littell 2017; Bjerva and Augenstein 2018) and alignments from multi-parallel texts preserve the gradient nature of typology through continuous representations.
The successful integration of these components would affect the way multilingual feature engineering is performed. As opposed to using binary vectors of typological features, the information about language-internal variation could be encoded as real-valued vectors, where each dimension corresponds to a possible strategy for a given construction and its value to the relative frequency of that strategy within a language.
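For instance, such vectors could be estimated directly from annotated corpora, as in the following sketch (the strategy labels and counts are toy illustrations):

```python
"""Minimal sketch of gradient, frequency-based typological vectors (toy data)."""
from collections import Counter

def strategy_vector(observations, strategies):
    """Relative frequency of each strategy (e.g., adjective-noun orders) in one language."""
    counts = Counter(observations)
    total = sum(counts.values()) or 1
    return [counts[s] / total for s in strategies]

strategies = ["ADJ-NOUN", "NOUN-ADJ"]
# Toy per-sentence observations extracted from annotated corpora of two languages.
print(strategy_vector(["ADJ-NOUN"] * 8 + ["NOUN-ADJ"] * 2, strategies))  # [0.8, 0.2]
print(strategy_vector(["NOUN-ADJ"] * 9 + ["ADJ-NOUN"] * 1, strategies))  # [0.1, 0.9]
```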
As an alternative, selective sharing and multilingual biasing could be performed at the level of individual examples rather than languages as a whole. In particular, model parameters could be transferred among similar examples; and input/hidden representations could be conditioned on contextual typological patterns. Finally, focusing on the various instantiations of a particular type rather than considering languages as indissoluble blocks would enhance data selection, similar to what Søgaard (2011) achieved using PoS n-grams for similarity measurement. The selection of similar sentences rather than similar languages as source data in language transfer is likely to yield large improvements, as demonstrated by Agić (2017) for parsing in an oracle setting.
Finally, the bottom–up development of typological features may also address radically resource-less languages that lack even raw textual data in a digital format. For this group, which still constitutes a large portion of the world’s languages, reference grammars written by field linguists are often available and constitute the ultimate source for typological databases. These grammars could be queried automatically, and fine-grained typological information could be harvested through information extraction techniques.
7. Conclusions
In this article, we surveyed a wide range of approaches integrating typological information, derived from the empirical and systematic comparison of the world’s languages, into NLP algorithms. The most fundamental problem for the advancement of this line of research is bridging the gap between the interpretable, language-wide, and discrete features of linguistic typology found in database documentation, and the opaque, contextual, and probabilistic models of NLP. We addressed this problem by exploring a series of questions: (i) For which tasks and applications is typology useful? (ii) What are the advantages and limitations of currently available typological databases? Can data-driven inference of typological features offer an alternative source of information? (iii) Which methods allow us to inject typological information from external resources, and how should such information be encoded? (iv) By which margin do typology-savvy methods surpass typology-agnostic baselines? How does typology compare to other criteria of language classification, such as genealogy? (v) In addition to augmenting machine learning algorithms, which other purposes does typology serve for NLP? We summarize our key findings here:
- 1.
Typological information is currently used predominantly for morphosyntactic tasks, in particular dependency parsing. As a consequence, these approaches typically select a limited subset of features from a single data set (WALS) and focus on a single aspect of variation (typically word order). However, typological databases also cover other important features, related to predicate–argument structure (ValPaL), phonology (LAPSyD, PHOIBLE, StressTyp2), and lexical semantics (IDS, ASJP), which are currently largely neglected by the multilingual NLP community. In fact, these features have the potential to benefit many tasks addressed by language transfer or joint multilingual learning techniques, such as semantic role labeling, word sense disambiguation, or sentiment analysis.
- 2.
Typological databases tend to be incomplete, containing missing values for individual languages or features. This hinders the integration of the information in such databases into NLP models; and therefore, several techniques have been developed to predict missing values automatically. They include heuristics derived from morphosyntactic annotation; propagation from other languages based on hierarchical clusters or similarity metrics; supervised models; and distributional methods applied to multi-parallel texts. However, none of these techniques surpasses the others across the board in prediction accuracy, as each excels in different feature types. A challenge left for future work is creating ensembles of techniques to offset their individual disadvantages.
- 3.
The most widespread approach to exploit typological features in NLP algorithms is “selective sharing” for language transfer. Its intuition is that a model should learn universal properties from all examples, but language-specific information only from examples with similar typological properties. Another successful approach is gearing multilingual joint models toward specific languages by concatenating typological features in input, or conditioning hidden layers and global sequence representations on them. New approaches could be inspired by traditional techniques for encoding external knowledge into machine learning algorithms through soft constraints on the inference step, semi-supervised prototype-driven methods, specialization of semantic spaces, or auxiliary objectives in a multi-task learning setting.
- 4.
The integration of typological features into NLP models yields consistent (even if often moderate) improvements over baselines lacking such features. Moreover, guidance from typology should be preferred to features related to genealogy or other language properties. Models enriched with the latter features occasionally perform equally well due to their correlation with typological features, but fall short when it comes to modeling diversified language samples or fine-grained differences among languages.
- 5.
In addition to feature engineering, typological information has served several other purposes. Firstly, it allows experts to define rule-based models, or to assign priors and independence assumptions in Bayesian graphical models. Secondly, it facilitates data selection and weighting, at the level of both languages and individual examples. Annotated data can also be synthesized or preprocessed according to typological criteria, in order to increase their coverage of phenomena or availability for further languages. Thirdly, typology enables researchers to interpret and reasonably foresee the difference in performance of algorithms across the sampled languages.
In general, we demonstrated that typology is relevant to a wide range of NLP tasks and provides a quite effective and principled way to carry out language transfer and multilingual joint learning. We hope that the research described in this survey will provide a platform for deeper integration of typological information and NLP techniques, thus furthering the advancement of multilingual NLP.
Acknowledgments
This work is supported by ERC Consolidator grant LEXICAL (no. 648909).
Notes
1. These counts include only languages traditionally spoken by a community as their principal means of communication, and exclude unattested, pidgin, whistled, and sign languages.
2. Exception-less generalizations are known as absolute universals. However, properties that have been proposed as such are often controversial, because they are too vacuous or have been eventually falsified (Evans and Levinson 2009).
3. According to Lewis, Simons, and Fennig (2016), 34.4% of the world’s languages are threatened, not transmitted to younger generations, moribund, nearly extinct, or dormant. Moreover, 34% of the world’s languages are vigorous but have not yet developed a system of writing.
4. This approach is also more cost-effective in terms of parameters (Pappas and Popescu-Belis 2017).
5. This does not apply to isolates, however: by definition, no genealogical information is available for these languages. Hence, typology is the only source of information about their properties.
6. Clustering was performed through the complete linkage method.
7. Notwithstanding that they have different preferences over word orders (Liu 2010).
8. Generally speaking, an inference algorithm can make other predictions, such as computing expectations and marginal probabilities. Since in the context of this article we are mostly focused on the prediction of the best output label, we refer only to this type of inference problem.