Abstract
Active learning (AL) uses a data selection algorithm to select useful training samples to minimize annotation cost. This is now an essential tool for building low-resource syntactic analyzers such as part-of-speech (POS) taggers. Existing AL heuristics are generally designed on the principle of selecting uncertain yet representative training instances, where annotating these instances may reduce a large number of errors. However, in an empirical study across six typologically diverse languages (German, Swedish, Galician, North Sami, Persian, and Ukrainian), we found the surprising result that even in an oracle scenario where we know the true uncertainty of predictions, these current heuristics are far from optimal. Based on this analysis, we pose the problem of AL as selecting instances that maximally reduce the confusion between particular pairs of output tags. Extensive experimentation on the aforementioned languages shows that our proposed AL strategy outperforms other AL strategies by a significant margin. We also present auxiliary results demonstrating the importance of proper calibration of models, which we ensure through cross-view training, and analysis demonstrating how our proposed strategy selects examples that more closely follow the oracle data distribution. The code is publicly released here.1
1 Introduction
Part-of-speech (POS) tagging is a crucial step for language understanding, used both in automatic language understanding applications such as named entity recognition (NER; Ankita and Nazeer, 2018) and question answering (QA; Wang et al., 2018), and in manual language understanding by linguists who are attempting to answer linguistic questions or document less-resourced languages (Anastasopoulos et al., 2018). Much prior work (Huang et al., 2015; Bohnet et al., 2018) on developing high-quality POS taggers uses neural network methods, which rely on the availability of large amounts of labeled data. However, such resources are not readily available for the majority of the world’s 7,000 languages (Hammarström et al., 2018). Furthermore, manually annotating large amounts of text with trained experts is an expensive and time-consuming task, even more so when the linguists/annotators are not native speakers of the language.
Active Learning (AL; Lewis, 1995; Settles, 2009) is a family of methods that aim to train effective models with less human effort and cost by selecting a subset of data that maximizes the end model performance. Although many methods have been proposed for AL in sequence labeling (Settles and Craven, 2008; Marcheggiani and Artières, 2014; Fang and Cohn, 2017), through an empirical study across six typologically diverse languages we show that, within the same task setup, these methods perform inconsistently. Furthermore, even in an oracle scenario, where we have access to the true labels during data selection, existing methods are far from optimal.
We posit that the primary reason for this inconsistent performance is that while existing methods consider uncertainty in predictions, they do not consider the direction of the uncertainty with respect to the output labels. For instance, in Figure 1 we consider the German token “die,” which may be either a pronoun (PRO) or determiner (DET). According to the initial model (iteration 0), “die” was labeled as PRO the majority of the time, but a significant amount of probability mass was also assigned to other output tags (OTHER) for many examples. Based on this, existing AL algorithms that select uncertain tokens will likely select “die” because it is frequent and its predictions are not certain, but they may select an instance of “die” with either a gold label of PRO or DET. Intuitively, because we would like to correct errors where tokens with a true label of DET are mislabeled by the model as PRO, asking the human annotator to tag an instance with a true label of PRO, even if it is uncertain, is not likely to be of much benefit.
Inspired by this observation, we pose the problem of AL for POS tagging as selecting tokens that maximally reduce the confusion between the output tags. For instance, in the example, we would attempt to pick a token-tag pair “die/DET” to reduce potential errors of the model over-predicting PRO despite its belief that DET is also a plausible option. We demonstrate the features of this model in an oracle setting where we know true model confusions (as in Figure 1), and also describe how we can approximate this strategy when we do not know the true confusions.
We evaluate our proposed AL method by running simulation experiments on six typologically diverse languages, namely, German, Swedish, Galician, North Sami, Persian, and Ukrainian, improving upon models seeded with crosslingual transfer from related languages (Cotterell and Heigold, 2017). In addition, we conduct human annotation experiments on Griko, an endangered language that truly lacks significant resources. Our contributions are as follows:
- We empirically demonstrate the shortcomings of existing AL methods under both conventional and “oracle” settings. Based on the subsequent analysis, we propose a new AL method that achieves a +2.92 average per-token accuracy improvement over existing methods under conventional settings, and a +2.08 average per-token accuracy improvement under the oracle setting.
- We conduct extensive analysis measuring how closely the data selected by our proposed AL method matches the oracle data distribution.
- We further demonstrate the importance of model calibration, the accuracy of the model’s probability estimates themselves, and demonstrate that cross-view training (Clark et al., 2018) is an effective way to improve calibration.
- We perform human annotation using the proposed method on an endangered language, Griko, and find our proposed method to perform better than the existing methods. In this process, we collect 300 new token-level annotations that will help further Griko NLP.
2 Background: Active Learning
Generally, AL methods are designed to select data based on two criteria: “informativeness” and “representativeness” (Huang et al., 2010). Informativeness represents the ability of the selected data to reduce the model uncertainty on its predictions, and representativeness measures how well the selected data represent the entire unlabeled data. AL is an iterative process and is typically implemented in a batched fashion for neural models (Sener and Savarese, 2018). In a given iteration, a batch of data is selected using some heuristic on which the end model is trained until convergence. This trained model is then used to select the next batch for annotation, and so forth.
In this work we focus on token-level AL methods, which require annotation of individual tokens in context, rather than full sequence annotation, which is more time-consuming. Given an unlabeled pool of sequences $D = \{x_1, x_2, \cdots, x_n\}$ and a model $\theta$, $P_\theta(y_{i,t} = j \mid x_i)$ denotes the probability that model $\theta$ assigns output tag $j$ to the token $x_{i,t}$ in the input sequence $x_i$, and $\mathcal{Y}$ denotes the set of POS tags. Most popular methods (Settles, 2009; Fang and Cohn, 2017) define “informativeness” using either uncertainty sampling or query-by-committee. We provide a brief review of these existing methods.
- Uncertainty Sampling (uns; Fang and Cohn, 2017) selects the most uncertain word types in the unlabeled corpus $D$ for annotation. First, it calculates the token entropy $H(x_{i,t}; \theta)$ for each token in each unlabeled sequence $x_i \in D$ under model $\theta$, defined as
  $$H(x_{i,t}; \theta) = -\sum_{j \in \mathcal{Y}} P_\theta(y_{i,t} = j \mid x_i) \log P_\theta(y_{i,t} = j \mid x_i).$$
  Next, this entropy is aggregated over all token occurrences across $D$ to get an uncertainty score $S_{\text{UNS}}(z)$ for each word type $z$:
  $$S_{\text{UNS}}(z) = \sum_{x_i \in D} \sum_{t\,:\, x_{i,t} = z} H(x_{i,t}; \theta).$$
- Query-by-Committee (qbc; Settles and Craven, 2008) selects the tokens having the highest disagreement between a committee of models $C = \{\theta_1, \theta_2, \theta_3, \cdots\}$, aggregated over all token occurrences. The token-level disagreement score is defined as
  $$D(x_{i,t}) = 1 - \frac{\max_{y \in \mathcal{Y}} V(y)}{|C|}, \qquad V(y) = \sum_{\theta_c \in C} \mathbb{1}\left[\hat{y}^{c}_{i,t} = y\right],$$
  where $V(y)$ is the number of “votes” received for the token label $y$ and $\hat{y}^{c}_{i,t}$ is the prediction with the highest score according to model $\theta_c$ for the token $x_{i,t}$. These disagreement scores are then aggregated over word types:
  $$S_{\text{QBC}}(z) = \sum_{x_i \in D} \sum_{t\,:\, x_{i,t} = z} D(x_{i,t}).$$
  A schematic implementation of both scores is sketched after this list.
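To make the two acquisition scores concrete, the following is a minimal Python sketch (not the authors’ released code) of the type-level scoring. It assumes `probs[i]` holds the per-token tag probability matrix for sentence `i`, `tokens[i]` the corresponding word types, and `committee_preds[c][i]` the per-token argmax predictions of committee member `c`; these names are illustrative.

```python
from collections import Counter, defaultdict
import numpy as np

def uns_scores(tokens, probs, eps=1e-12):
    """S_UNS(z): token entropy summed over all occurrences of each word type z."""
    scores = defaultdict(float)
    for sent_tokens, sent_probs in zip(tokens, probs):
        ent = -(sent_probs * np.log(sent_probs + eps)).sum(axis=1)  # H(x_{i,t}; theta)
        for z, h in zip(sent_tokens, ent):
            scores[z] += float(h)
    return scores

def qbc_scores(tokens, committee_preds):
    """S_QBC(z): committee disagreement summed over all occurrences of type z."""
    scores = defaultdict(float)
    n_models = len(committee_preds)
    for i, sent_tokens in enumerate(tokens):
        for t, z in enumerate(sent_tokens):
            votes = Counter(preds[i][t] for preds in committee_preds)   # V(y)
            scores[z] += 1.0 - max(votes.values()) / n_models          # D(x_{i,t})
    return scores

# In either case, a batch is formed by taking the b highest-scoring word types:
#   batch = sorted(scores, key=scores.get, reverse=True)[:b]
```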
Failings of Current AL Methods
Although these methods are widely used, in a preliminary empirical study we found that they are less than optimal and fail to bring consistent gains across multiple settings. Ideally, a single strategy that performs best across a diverse language set would be useful for researchers who plan to use AL for new languages: instead of experimenting with different strategies under costly human annotation, knowing a single strategy a priori reduces both time and annotation effort. Specifically, we demonstrate this problem of inconsistency through a set of oracle experiments, where the data selection algorithm has access to the true labels. These experiments are intended to serve as an upper bound for their non-oracle counterparts: if existing methods do not achieve gains even in this case, they will certainly be even less promising when true labels are not available at data selection time, as is the case in standard AL.
Concretely, as an oracle uncertainty sampling method uns-oracle, we select types with the highest negative log likelihood of their true label. As an “oracle” QBC method qbc-oracle, we select types having the largest number of incorrect predictions. We conduct 20 AL iterations for each of these methods across six typologically diverse languages.2
First, we observe that among the oracle methods (Figure 2) no method consistently performs best across all six languages. Second, we find that considering uncertainty alone leads to an unbalanced selection of the resulting tags. To drive this point home, Table 1 shows the output tags selected for the German token “zu” across multiple iterations. uns-oracle selects only the most frequent output tag, failing to select tokens from other output tags. qbc-oracle does select tokens having multiple tags, but the distribution is not in proportion with the true tag distribution. Our hypothesis is that this inconsistent performance occurs because none of the methods consider the confusion between output tags while selecting data. This is especially important for POS tagging because we find that the existing methods tend to select highly syncretic word types. Syncretism is a linguistic phenomenon where distinctions required by syntax are not realized by morphology, meaning a word type can have multiple POS tags based on context.3 This is expected because syncretic word types, owing to their inherent ambiguity, cause high uncertainty, which is the underlying criterion for most AL methods.
| | qbc-oracle | uns-oracle |
|---|---|---|
| iteration-1 | PART=1 | ADP=1 |
| iteration-2 | PART=1, ADP=1 | ADP=2 |
| iteration-3 | ADV=1, PART=1, ADP=1 | ADP=2 |
| iteration-4 | ADV=1, PART=1, ADP=2 | ADP=3 |
3 Confusion-Reducing Active Learning
To address the limitations of the existing methods, we propose a confusion-reducing active learning (cral) strategy, which aims at reducing the confusion between the output tags. In order to combine both “informativeness” and “representativeness”, we follow a two-step algorithm:
- Find the most confusing word types. The goal of this step is to find $b$ word types that would maximally reduce the model confusion within the output tags. For each token $x_{i,t}$ in an unlabeled sequence $x_i \in D$, we first define the confusion as the sum of the probabilities $P_\theta(y_{i,t} = j \mid x_i)$ of all output tags other than the highest-probability output tag $\hat{y}_{i,t} = \arg\max_{j \in \mathcal{Y}} P_\theta(y_{i,t} = j \mid x_i)$:
  $$\text{conf}(x_{i,t}) = \sum_{j \neq \hat{y}_{i,t}} P_\theta(y_{i,t} = j \mid x_i),$$
  then sum this over all instances of type $z$:
  $$S_{\text{CRAL}}(z) = \sum_{x_i \in D} \sum_{t\,:\, x_{i,t} = z} \text{conf}(x_{i,t}).$$
  Selecting the top $b$ types having the highest score $S_{\text{CRAL}}(z)$ gives us the most confusing word types ($X_{\text{INIT}}$). For each token, we also store the output tag with the second-highest probability, which we refer to as the “most confusing output tag” for a particular $x_{i,t}$:
  $$c_{i,t} = \arg\max_{j \neq \hat{y}_{i,t}} P_\theta(y_{i,t} = j \mid x_i).$$
  For each word type $z$, we aggregate the frequency of the most confusing output tag across all token occurrences,
  $$\text{freq}_z(j) = \sum_{x_i \in D} \sum_{t\,:\, x_{i,t} = z} \mathbb{1}\left[c_{i,t} = j\right],$$
  and compute the output tag with the highest frequency as the most confusing output tag for type $z$: $j_z = \arg\max_{j \in \mathcal{Y}} \text{freq}_z(j)$. For each of the top $b$ most confusing word types, we retrieve its most confusing output tag, resulting in type-tag pairs $L_{\text{INIT}} = \{\langle z_1, j_1 \rangle, \cdots, \langle z_b, j_b \rangle\}$. This process is illustrated in steps 7-14 of Algorithm 1.
- Find the most representative token instances. Now that we have the most confusing type-tag pairs $L_{\text{INIT}}$, our final step is selecting the most representative token instances for annotation. For each type-tag tuple $\langle z_k, j_k \rangle \in L_{\text{INIT}}$, we first retrieve contextualized representations for all token occurrences ($x_{i,t} = z_k$) of the word type $z_k$ from the encoder of the POS model; we express this in shorthand as $c_{i,t} := \text{enc}(x_{i,t})$. Because the true labels are unknown, there is no certain way of knowing which tokens have the “most confusing output tag” as the true label. Therefore, each token representation $c_{i,t}$ is weighted by the model confidence in the most confusing tag $j_k$,
  $$w_{i,t} = P_\theta(y_{i,t} = j_k \mid x_i),$$
  and we compute the centroid of this weighted token set, $\mu_k = \frac{\sum w_{i,t}\, c_{i,t}}{\sum w_{i,t}}$. Finally, the token instance closest to the centroid becomes the most representative instance for annotation. Going forward, we also refer to the most representative token instance as the centroid for simplicity.4 This process is repeated for each of the word types $z_k$, resulting in the to-label set $X_{\text{LABEL}}$. This is illustrated in steps 14-19 of Algorithm 1, and both steps are sketched in the code below.
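As a concrete illustration of the two steps, below is a minimal Python sketch of the cral selection, an approximation of Algorithm 1 rather than the released implementation. It assumes `probs[i]` is the per-token tag probability matrix for sentence `i`, `tokens[i]` the word types, and `reprs[i]` the encoder outputs $\text{enc}(x_{i,t})$ for that sentence; these names are illustrative.

```python
from collections import Counter, defaultdict
import numpy as np

def cral_select(tokens, probs, reprs, b):
    conf_score = defaultdict(float)          # S_CRAL(z)
    conf_tag_counts = defaultdict(Counter)   # frequency of the most confusing tag per type
    occurrences = defaultdict(list)          # (sentence, position) locations of each type

    # Step 1: find the b most confusing word types and their most confusing tag.
    for i, (sent_tokens, sent_probs) in enumerate(zip(tokens, probs)):
        order = np.argsort(-sent_probs, axis=1)            # tags sorted by probability
        best, second = order[:, 0], order[:, 1]
        best_prob = sent_probs[np.arange(len(sent_tokens)), best]
        for t, z in enumerate(sent_tokens):
            conf_score[z] += 1.0 - best_prob[t]            # mass on all non-argmax tags
            conf_tag_counts[z][int(second[t])] += 1        # most confusing tag for this token
            occurrences[z].append((i, t))
    top_types = sorted(conf_score, key=conf_score.get, reverse=True)[:b]

    # Step 2: for each <type, most confusing tag>, pick the most representative token.
    to_label = []
    for z in top_types:
        j = conf_tag_counts[z].most_common(1)[0][0]        # most confusing tag for type z
        locs = occurrences[z]
        vecs = np.stack([reprs[i][t] for i, t in locs])
        weights = np.array([probs[i][t, j] for i, t in locs])   # confidence in tag j
        centroid = (weights[:, None] * vecs).sum(0) / (weights.sum() + 1e-12)
        nearest = int(np.argmin(np.linalg.norm(vecs - centroid, axis=1)))
        to_label.append((z, j, locs[nearest]))             # <type, tag, token instance>
    return to_label
```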
4 Model and Training Regimen
4.1 Model Architecture
Our POS tagging model is a hierarchical neural conditional random field (CRF) tagger (Ma and Hovy, 2016; Lample et al., 2016; Yang et al., 2017). Each token $x_t$ from the input sequence $x$ is first passed through a character-level Bi-LSTM, followed by a self-attention layer (Vaswani et al., 2017), followed by another Bi-LSTM, to capture information about the subword structure of the words. Finally, these character-level representations are fed into a token-level Bi-LSTM in order to create contextual representations $h_t = [\overrightarrow{h}_t : \overleftarrow{h}_t]$, where $\overrightarrow{h}_t$ and $\overleftarrow{h}_t$ are the representations from the forward and backward LSTMs, and “:” denotes the concatenation operation. The encoded representations are then used by the CRF decoder to produce the output sequence.
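For concreteness, the following is a compact PyTorch-style sketch of the hierarchical encoder described above, with layer sizes taken from the hyperparameters listed in Section 5. It is an illustration rather than the authors’ exact implementation; the CRF decoder (or the linear softmax head used later in the ablation) would sit on top of the returned representations.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """char Bi-LSTM -> self-attention -> char Bi-LSTM -> token-level Bi-LSTM."""

    def __init__(self, n_chars, char_emb=30, char_hid=25, model_hid=100, tok_hid=200):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_emb)
        self.emb_drop = nn.Dropout(0.3)
        self.char_lstm1 = nn.LSTM(char_emb, char_hid, bidirectional=True, batch_first=True)
        self.attn = nn.MultiheadAttention(2 * char_hid, num_heads=1, batch_first=True)
        self.char_lstm2 = nn.LSTM(2 * char_hid, model_hid, bidirectional=True, batch_first=True)
        self.tok_lstm = nn.LSTM(2 * model_hid, tok_hid, bidirectional=True, batch_first=True)
        self.out_drop = nn.Dropout(0.5)

    def forward(self, char_ids):
        # char_ids: (n_tokens, max_chars) character indices for a single sentence
        c = self.emb_drop(self.char_emb(char_ids))
        c, _ = self.char_lstm1(c)
        c, _ = self.attn(c, c, c)                         # self-attention over characters
        _, (h_n, _) = self.char_lstm2(c)                  # final states of both directions
        word_vecs = torch.cat([h_n[0], h_n[1]], dim=-1)   # one vector per token
        h, _ = self.tok_lstm(self.out_drop(word_vecs).unsqueeze(0))
        return h.squeeze(0)                               # h_t = [forward : backward]

# enc = HierarchicalEncoder(n_chars=100)
# reprs = enc(torch.randint(0, 100, (7, 12)))             # 7 tokens -> (7, 400) representations
```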
4.2 Cross-view Training Regimen
In order to further improve this model, we apply cross-view training (CVT), a semi-supervised learning method (Clark et al., 2018). On unlabeled examples, CVT trains auxiliary prediction modules, which look at restricted “views” of an input sequence, to match the prediction from the full view. By forcing the auxiliary modules to match the full-view module, CVT improves the model’s representation learning. Not only does it help improve downstream performance under low-resource conditions, but it also improves model calibration overall (§5.4). Having a well-calibrated model is quite useful for AL, as a well-calibrated model tends to assign lower probabilities to incorrect predictions, which allows the AL measure to select these incorrect tokens for annotation.
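A hedged sketch of the CVT objective on an unlabeled sentence follows: auxiliary modules that see only restricted views of the token-level states (for example, forward-only or backward-only) are trained to match the full model’s predicted tag distribution, which is treated as a fixed target. The function and argument names are illustrative, not the authors’ implementation.

```python
import torch
import torch.nn.functional as F

def cvt_unsupervised_loss(full_logits, aux_logits_list):
    """full_logits: (T, |Y|) per-token logits from the full view on an unlabeled
    sentence; aux_logits_list: logits from restricted views. The full view acts
    as a teacher and receives no gradient."""
    with torch.no_grad():
        target = F.softmax(full_logits, dim=-1)
    loss = full_logits.new_zeros(())
    for aux_logits in aux_logits_list:
        loss = loss + F.kl_div(F.log_softmax(aux_logits, dim=-1), target,
                               reduction="batchmean")
    return loss
```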
4.3 Cross-Lingual Transfer Learning
Using the architecture described above, for any given target language we first train a POS model on a group of related high-resource languages. We then fine-tune this pre-trained model on the newly acquired annotations for the target language, as obtained from an AL method. The objective of cross-lingual transfer learning is to warm-start the POS model on the target language. Several methods have been proposed in the past, including annotation projection (Zitouni and Florian, 2008) and model transfer using pre-trained models such as m-BERT (Devlin et al., 2019). In this work our primary focus is on designing an active learning method, so we simply pre-train a POS model on a group of related high-resource languages (Cotterell and Heigold, 2017), which is a computationally cheap solution, a crucial requirement for running multiple AL iterations. Furthermore, recent work (Siddhant et al., 2020) has shown the advantage of pre-training on a selected set of related languages over a model pre-trained on all available languages.
Following this, for a given target language we first select a set of typologically related languages. An initial set of transfer languages is obtained using the automated tool provided by Lin et al. (2019), which leverages features such as phylogenetic similarity, typology, lexical overlap, and size of available data to predict a list of optimal transfer languages. This list can then be refined using the experimenter’s intuition. Finally, a POS model is trained on the concatenated corpora of the related languages. Similar to Johnson et al. (2017), a language identification token is added at the beginning and end of each sequence.
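As a small illustration of the language identification tokens added to each transfer-language sentence before pre-training, in the style of Johnson et al. (2017); the exact token format used below is an assumption.

```python
def add_lang_id_tokens(tokens, lang_code):
    """Wrap a tokenized sentence with language identification markers."""
    return [f"<{lang_code}>"] + tokens + [f"</{lang_code}>"]

# add_lang_id_tokens(["Der", "Hund", "bellt"], "de")
# -> ['<de>', 'Der', 'Hund', 'bellt', '</de>']
```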
5 Simulation Experiments
In this section, we describe the simulation experiments used for evaluating our method. Under this setting, we use the provided training data as our unlabeled pool and simulate annotations by using the gold labels for each AL method.
Datasets:
For the simulation experiments, we test on six typologically diverse languages: German, Swedish, North Sami, Persian, Ukrainian, and Galician. We use data from the Universal Dependencies (UD) v2.3 (Nivre et al., 2016; Nivre et al., 2018; Kirov et al., 2018) treebanks with the same train/dev/test split as proposed in McCarthy et al. (2018). For each target language, the set of related languages used for pre-training is listed in Table 2. Because the Persian and Urdu datasets are in the Perso-Arabic script, there is no orthographic overlap between the transfer and target languages. Therefore, for Persian we use uroman,5 a publicly available tool for romanization.
| target language (treebank) | transfer languages (treebank) |
|---|---|
| German (de-gsd) | English (en-ewt) + Dutch (nl-alpino) |
| Swedish (sv-lines) | Norwegian (no-nynorsk) + Danish (da-ddt) |
| North Sami (sme-giella) | Finnish (fi-ftb) |
| Persian (fa-seraji) | Urdu (ur-udtb) + Russian (ru-gsd) |
| Galician (gl-treegal) | Spanish (es-gsd) + Portuguese (pt-gsd) |
| Ukrainian (uk-iu) | Russian (ru-gsd) |
| Griko | Greek (el-gdt) + Italian (it-postwita) |
Baselines:
As described in Section §2, we compare our proposed method (cral) with Uncertainty Sampling (uns) and Query-by-Committee (qbc). We also compare with a random baseline (rand) that selects tokens randomly from the unlabeled data D. For qbc, we use the following committee of models C = {θfwd, θbwd, θfull}, where the θ are the CVT views (§4.2). We do not include θfut and θpst as they are much weaker in comparison to the other views.6 For cral, uns, and rand, we use the full model view.
Model Hyperparameters:
We use a hidden size of 25 for the character Bi-LSTM, 100 for the modeling layer, and 200 for the token-level Bi-LSTM. Character embeddings are 30-dimensional and randomly initialized. We apply a dropout of 0.3 to the character embeddings before inputting them to the Bi-LSTM, and a further dropout of 0.5 to the output vectors of all Bi-LSTMs. The model is trained using the SGD optimizer with a learning rate of 0.015, until convergence as measured on a validation set.
Active Learning Parameters:
For all AL methods, we acquire annotations in batches of 50 and run 20 AL iterations, resulting in a total of 1,000 annotated tokens for each method. We pre-train the model using the above parameters and, after acquiring annotations, fine-tune it with a learning rate proportional to the number of sentences in the labeled data: $lr = 2.5\mathrm{e}{-5} \cdot |X_{\text{LABEL}}|$.
5.1 Results
Figure 3 compares our proposed cral strategy with the existing baselines. The y-axis represents the difference in POS tagging accuracy between two AL methods, averaged across the 20 iterations. Across all six languages, our proposed method cral shows significant performance gains over the other methods. In Figure 4 we plot the individual accuracy values across the 20 iterations for German, and we see that cral performs consistently better across multiple iterations. We also see that the zero-shot model on German (iteration-0) gets a decent warm start because of cross-lingual transfer from Dutch and English.
Furthermore, to check how the performance of the AL methods is affected by the underlying POS tagger architecture, we conduct additional experiments with a different architecture: we replace the CRF layer with a linear layer and use a token-level softmax to predict the tags, keeping the encoder as before. We present the results for four (North Sami, Swedish, German, Galician) of the six languages in Figure 5. Our proposed method cral still always outperforms qbc. Only for North Sami does uns outperform cral, which is similar to the results obtained with the BRNN/CRF architecture, where cral performs on par with uns.
5.2 Analysis
In the previous section, we compared the different AL methods by measuring the average POS accuracy. In this section, we perform intrinsic evaluation to compare the quality of the selected data on two aspects:
How similar are the selected and the true data distributions?
| target language | uns | qbc | cral |
|---|---|---|---|
| German | 74% | 76% | 82% |
| Swedish | 56% | 54% | 62% |
| North Sami | 10% | 12% | 14% |
| Persian | 50% | 46% | 46% |
| Galician | 40% | 42% | 44% |
| Ukrainian | 38% | 48% | 48% |
| target language | cral | uns | qbc |
|---|---|---|---|
| German | 0.0465 | 0.0801 | 0.0849 |
| Swedish | 0.0811 | 0.1196 | 0.1013 |
| North Sami | 0.0270 | 0.0328 | 0.0346 |
| Persian | 0.0384 | 0.0583 | 0.0444 |
| Galician | 0.0722 | 0.0953 | 0.0674 |
| Ukrainian | 0.0770 | 0.1067 | 0.0665 |
How effective is the AL method in reducing confusion across iterations?
Across iterations, as more data is acquired, we expect the incorrect predictions from previous iterations to be rectified in subsequent iterations, ideally without damaging the accuracy of existing predictions. However, as seen in Table 3, the AL methods have a tendency to select syncretic word types, which suggests that across multiple iterations the same word types could get selected, albeit in different contexts. This could lead to more confusion, damaging the existing accuracy if the selected type is not a good representative of its annotated tag. Therefore, we calculate the number of existing correct predictions that become incorrect in the subsequent iteration and present the results in Figure 6 (a sketch of this measure follows below). A lower value suggests that the AL method improves overall accuracy without damaging the accuracy from existing annotations, and is thereby successful in reducing confusion. From Figure 6, the proposed strategy cral is clearly more effective than the others in most cases at reducing confusion across iterations.
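For reference, a minimal sketch of this measure, our reading of the quantity plotted in Figure 6, with hypothetical argument names:

```python
def newly_incorrect(prev_preds, curr_preds, gold):
    """Count tokens predicted correctly at iteration k but incorrectly at k+1.
    All arguments are flat sequences of tags over the same evaluation tokens."""
    return sum(1 for p_prev, p_curr, g in zip(prev_preds, curr_preds, gold)
               if p_prev == g and p_curr != g)
```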
5.3 Oracle Results
In order to check how close to optimal our proposed method cral is, we conduct “oracle” comparisons, where we have access to the gold labels during data selection. The oracle versions of existing methods uns-oracle and qbc-oracle have already been described in Section §2. For our proposed method cral, we construct the oracle version as follows:
cral-oracle:
Select the types having the highest number of incorrect predictions. Within each type, select the output tag that is most often incorrectly predicted; this gives the most confusing output tag for a given word type. From the tokens having the most confusing output tag, select the representative token by taking the centroid of their respective contextualized representations, similar to the procedure described in Section §3.
Figure 7 compares the performance gain of the POS model trained using cral-oracle over uns-oracle and qbc-oracle (Figure 7.a, 7.b). Even under the “oracle” setting, our proposed method performs consistently better across all languages (except Ukrainian), unlike the existing methods, as seen in Figure 2. cral closely matches the performance of its corresponding “oracle” cral-oracle (Figure 7.c), which suggests that the proposed method is close to an optimal AL method. However, we note that cral-oracle is not a “true” upper bound, as for Ukrainian it does not outperform cral. We find that for Ukrainian, up to 250 tokens the oracle method outperforms the non-oracle method, after which it underperforms. We hypothesize that this inconsistency is due to noisy annotations in Ukrainian. On analysis we found that the oracle method predicts numerals as NUM, but in the gold data some of them are annotated as ADJ. We also find several tokens that have punctuation and numbers mixed with letters.7
We also compare the percentage of token-tag overlap between the data selected by cral and its oracle counterpart, cral-oracle. For the first AL iteration, cral has more than 50% overlap with the oracle method for all languages, providing some evidence as to why cral matches the oracle performance.
5.4 Effect of Cross-View Training
As mentioned in Section §4.2, we use CVT not only to improve our model overall but also to obtain a well-calibrated model, which is important for active learning. A model is well-calibrated when its predicted probabilities over the outcomes reflect the true probabilities of those outcomes (Nixon et al., 2019). We use Static Calibration Error (SCE), a metric proposed by Nixon et al. (2019), to measure model calibration. SCE bins the model predictions separately for each output tag probability, computes the calibration error within each bin, and averages across all bins to produce a single score. For each output tag, bins are created by sorting the predictions based on the output class probability: the first 10% are placed in bin 1, the next 10% in bin 2, and so on (we sketch this computation below). We conduct two ablation experiments to measure the effect of CVT. First, we train a joint POS model on the English and Norwegian datasets using all available training data and evaluate on the English test set. Second, we take this pre-trained model, fine-tune it on 200 randomly sampled German examples, and evaluate on the German test data. We train models with and without CVT, denoted by +/- in Table 5. We find that training with CVT results in both higher accuracy and lower calibration error (SCE). This effect of CVT is much more pronounced in the second experiment, which presents a low-resource scenario that is common in an AL framework.
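To make the metric concrete, here is a small sketch of SCE following the description above (our reading of Nixon et al. (2019), with equal-size bins per tag; the exact binning in the paper’s evaluation may differ):

```python
import numpy as np

def static_calibration_error(probs, gold, n_bins=10):
    """probs: (N, |Y|) predicted tag probabilities; gold: (N,) array of true tag indices."""
    n, n_tags = probs.shape
    errors = []
    for j in range(n_tags):
        order = np.argsort(probs[:, j])                  # sort predictions by P(tag j)
        for chunk in np.array_split(order, n_bins):      # equal-size bins
            if len(chunk) == 0:
                continue
            conf = probs[chunk, j].mean()                # mean predicted probability
            acc = (gold[chunk] == j).mean()              # empirical frequency of tag j
            errors.append((len(chunk) / n) * abs(conf - acc))
    return float(np.sum(errors) / n_tags)                # average over output tags
```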
6 Human Annotation Experiment
In this section, we apply our proposed approach to Griko, an endangered language spoken by around 20,000 people in southern Italy, in the Grecìa Salentina area southeast of Lecce. The only available online Griko corpus, referred to as UoI (Lekakou et al., 2013),8 consists of 330 utterances by nine native speakers and carries POS annotations. Additionally, Anastasopoulos et al. (2018) collected, processed, and released 114 stories, of which only the first 10 were annotated by experts and have gold-standard annotations. We conduct human annotation experiments on the remaining unannotated stories in order to compare the different active learning methods.
Setup:
We use Modern Greek and Italian as the two related languages to train our initial POS model.9 To further improve the model, we fine-tune on the UoI corpus, which consists of 360 labeled sentences. We evaluate the AL performance on the 10 gold-labelled stories from Anastasopoulos et al. (2018), of which the first two stories, comprising 143 labeled sentences, are used as the validation set and the remaining 800 labeled sentences form the test set. We use the unannotated stories as our unlabeled pool. We compare cral with uns and qbc, conducting three AL iterations for each method, where each iteration selects roughly 50 tokens for annotation. The annotations are provided by two linguists familiar with modern Greek and somewhat familiar with Griko. To familiarize the linguists with the annotation interface, a practice session was conducted in modern Greek. In the interface, tokens that need to be annotated are highlighted and presented with their surrounding context. The linguist then simply selects the appropriate POS tag for each highlighted token. Because we do not have gold annotations for these experiments, we also obtained annotations from a third linguist who is more familiar with Griko grammar.
Results:
Table 6 presents the results of three iterations for each AL method, with our proposed method cral outperforming the other methods in most cases. We note that several frequent tokens (863/13,740 tokens) in the supposedly gold-standard Griko test data were inconsistently annotated. Specifically, the original annotations did not distinguish between coordinating (CCONJ) and subordinating (SCONJ) conjunctions, unlike the UD schema; as a result, when converting the test data to the UD schema, all conjunctions were tagged as subordinating. Our annotation tool, however, allowed either CCONJ or SCONJ as tags, and the annotators did make use of them. With the help of a senior Griko linguist (Linguist-3), we identified a few types of conjunctions that are always coordinating: variations of “and” (ce and c’) and of “or” (e or i). We fixed these annotations and used them in our experiments.
| | AL | iteration-0 | iteration-1 | iteration-2 | iteration-3 | IA Agr. | WD |
|---|---|---|---|---|---|---|---|
| Linguist-1 | cral | 52.93 | 63.42 (10) | 69.07 (10) | 65.16 (16) | 0.58 | 0.281 |
| | qbc | 52.93 | 55.82 (15) | 62.03 (17) | 66.51 (15) | 0.68 | 0.243 |
| | uns | 52.93 | 56.14 (15) | 57.04 (15) | 65.73 (11) | 0.58 | 0.379 |
| Linguist-2 | cral | 52.93 | 61.24 (15) | 67.24 (20) | 67.05 (18) | 0.70 | 0.346 |
| | qbc | 52.93 | 56.52 (20) | 65.96 (20) | 66.71 (17) | 0.72 | 0.245 |
| | uns | 52.93 | 55.45 (17) | 58.80 (17) | 65.73 (20) | 0.70 | 0.363 |
| Linguist-3 (Expert) | cral | 52.93 | 65.63 | 69.17 | 68.09 | − | 0.159 |
| | qbc | 52.93 | 60.50 | 65.69 | 56.20 | − | 0.170 |
| | uns | 52.93 | 58.51 | 64.26 | 65.93 | − | 0.125 |
For Linguist-1, we observe a decrease in performance at Iteration-3. One possible reason for this decrease is Linguist-1’s poor annotation quality, which is also reflected in their low inter-annotator agreement scores. We observe a slight decrease for the other linguists as well, which we hypothesize is due to domain mismatch between the annotated data and the test data. In fact, the test-set stories and the unlabeled ones originate from different time periods spanning a century, which can lead to slight differences in orthography and usage. For instance, after three AL iterations, the token “i” had been annotated as CONJ twice and DET once, whereas in the test data all instances of “i” are annotated as DET. Similar to the simulation experiments, we compute the confusion score for all linguists in Figure 9. We find that, unlike in the simulation experiments, a model trained with uns annotations causes less damage to the existing annotations than cral. However, we note that the model performance from the uns annotations is much lower than that of cral to begin with.
We also compute the inter-annotator agreement at Iteration-1 with the expert (Linguist-3) (Table 6). The agreement scores are lower than one would expect (cf. the annotation test run on Modern Greek, for which we have gold annotations, which yielded much higher inter-annotator agreement scores of over 90%). The likely explanation is that our annotators have limited knowledge of Griko grammar, while our AL methods require annotations of ambiguous and “hard” tokens. However, this is a common scenario in language documentation, where linguists are often required to annotate in a language they are not very familiar with, which makes this task even more challenging. We also recorded the annotation time needed by each linguist for each iteration (Table 6). Compared with the uns method, the linguists annotated on average 2.5 minutes faster using our proposed method, which suggests that uns tends to select harder data instances for annotation.
Similar to the simulation experiments, we report the Wasserstein distance (WD) for all linguists in Table 6. Unlike the simulation setting, where WD was computed against the gold training data, for the human experiments we do not have access to gold annotations; we therefore computed WD against the gold test data, which is from a slightly different domain and affects the results somewhat. We observe that qbc has lower WD scores for Linguist-1 and Linguist-2, and uns for Linguist-3. On further analysis, we find that even though qbc has lower WD, it also has the least coverage of the test data; that is, it has the fewest annotated tokens that are present in the test data, as shown in Table 7. We note that a lower WD score does not necessarily translate into better tagging accuracy, because the WD metric only measures how closely the selected data matches the gold output tag distribution for that selected data.
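As an illustration of this comparison, the sketch below computes a Wasserstein distance between the tag distribution of the selected tokens and the corresponding gold tag distribution using SciPy. The paper does not specify the ground metric over the categorical tag set, so mapping tags to integer indices here is an assumption made purely for illustration.

```python
from collections import Counter
import numpy as np
from scipy.stats import wasserstein_distance

def tag_distribution(tags, tagset):
    """Empirical distribution over `tagset` from a list of tag labels."""
    counts = Counter(tags)
    total = sum(counts.values())
    return np.array([counts[t] / total for t in tagset])

def wd_between(selected_tags, gold_tags, tagset):
    idx = np.arange(len(tagset))  # tags as integer indices (illustrative ground metric)
    return wasserstein_distance(idx, idx,
                                tag_distribution(selected_tags, tagset),
                                tag_distribution(gold_tags, tagset))

# Example: wd_between(["DET", "NOUN", "DET"], ["DET", "NOUN", "VERB"],
#                     ["DET", "NOUN", "VERB"])
```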
7 Related Work
Active Learning for POS Tagging:
AL has been widely used for POS tagging. Garrette and Baldridge (2013) use graph-based label propagation to generalize initial POS annotations to the unlabeled corpus. Further, they find that under a constrained time setting, type-level annotations prove to be more useful than token-level annotations. In line with this, Fang and Cohn (2017) also select informative word types based on uncertainty sampling for low-resource POS tagging. They also construct a tag dictionary from these type-level annotations and then propagate the labels across the entire unlabeled corpus. However, in our initial analysis of uncertainty sampling, we found that adding label propagation harmed accuracy in certain languages because of prevalent syncretism. Ringger et al. (2007) present different variations of uncertainty sampling and QBC methods for POS tagging. Similar to Fang and Cohn (2017), they find uncertainty sampling with a frequency bias to be the best strategy. Settles and Craven (2008) present a survey of the different active learning strategies for sequence labeling tasks, and Marcheggiani and Artières (2014) discuss strategies for acquiring partially labeled data. Sener and Savarese (2018) propose a core-set selection strategy aimed at finding the subset that is competitive across the unlabeled dataset. This work is most similar to ours with respect to using geometric center points as the most representative instances. However, to the best of our knowledge, none of the existing works are targeted at reducing confusion within the output classes.
Low-resource POS Tagging:
Several cross-lingual transfer techniques have been used for improving low-resource POS tagging. Cotterell and Heigold (2017) and Malaviya et al. (2018) train a joint neural model on related high-resource languages and find it to be very effective on low-resource languages. The main advantage of these methods is that they do not require any parallel text or dictionaries. Das and Petrov (2011), Täckström et al. (2013), Yarowsky et al. (2001), and Nicolai and Yarowsky (2019) use annotation projection methods to project POS annotations from one language to another. However, annotation projection methods require parallel text, which often may not be of good quality for low-resource languages.
8 Conclusion
We have presented a novel active learning method for low-resource POS tagging that works by reducing confusion between output tags. Using simulation experiments across six typologically diverse languages, we show that our confusion-reducing strategy achieves higher accuracy than existing methods. Further, we test our approach under a true setting of active learning where we ask linguists to document POS information for an endangered language, Griko. Despite being unfamiliar with the language, our proposed method achieves performance gains over the other methods in most iterations. For our next steps, we plan to explore the possibility of adapting our proposed method for complete morphological analysis, which poses an even harder challenge for AL data selection due to the complexity of the task.
Acknowledgments
The authors are grateful to the anonymous reviewers and the Action Editor who took the time to provide many interesting comments that made the paper significantly better, and to Eleni Antonakaki and Irini Amanaki, for participating in the human annotation experiments. This work is sponsored by the Dr. Robert Sansom Fellowship, the Waibel Presidential Fellowship, and the National Science Foundation under grant 1761548.
Notes
More details on the experimental setup in Section §5.
Sener and Savarese (2018) describe why choosing the centroid is a good approximation of representativeness. They pose AL as a core-set selection problem where a core set is the subset of data on which the model if trained closely matches the performance of the model trained on the entire dataset. They show that finding the core set is equivalent to choosing b center points such that the largest distance between a data point and its nearest center is minimized. We take inspiration from this result in using the centroid to be the most representative instance.
We chose CVT views for qbc over the ensemble for computational reasons. Training three models independently would require three times the computation. Given that for each language we run 20 experiments, amounting to a total of 120 experiments, reducing the computational burden was preferred.
This is also noted in the UD page: https://universaldependencies.org/treebanks/uk_iu/index.html.
With Italian being the dominant language in the region, code switching phenomena appear in the Griko corpora.
References
Author notes
Work done at Carnegie Mellon University.