Abstract
We present FRMT, a new dataset and evaluation benchmark for Few-shot Region-aware Machine Translation, a type of style-targeted translation. The dataset consists of professional translations from English into two regional variants each of Portuguese and Mandarin Chinese. Source documents are selected to enable detailed analysis of phenomena of interest, including lexically distinct terms and distractor terms. We explore automatic evaluation metrics for FRMT and validate their correlation with expert human evaluation across both region-matched and mismatched rating scenarios. Finally, we present a number of baseline models for this task, and offer guidelines for how researchers can train, evaluate, and compare their own models. Our dataset and evaluation code are publicly available: https://bit.ly/frmt-task.
1 Introduction
Machine translation (MT) has made rapid advances in recent years, achieving impressive performance for many language pairs, especially those with high amounts of parallel data available. Although the MT task is typically specified at the coarse level of a language (e.g., Spanish or Hindi), some prior work has explored finer-grained distinctions, such as between regional varieties of Arabic (Zbib et al., 2012), or specific levels of politeness in German (Sennrich et al., 2016). Unfortunately, most approaches to style-targeted translation thus far rely on large, labeled training corpora (Zbib et al., 2012; Lakew et al., 2018; Costa-jussà et al., 2018; Honnet et al., 2018; Sajjad et al., 2020; Wan et al., 2020; Kumar et al., 2021), and in many cases these resources are unavailable or expensive to create.
We explore a setting for MT where unlabeled training data is plentiful for the desired language pair, but only a few parallel examples (0–100, called “exemplars”) are annotated for the target varieties. As a specific use-case, we examine translation into regional varieties: Brazilian vs. European Portuguese and Mainland vs. Taiwan Mandarin. While these varieties are mutually intelligible, they often exhibit lexical, syntactic, or orthographic differences that can negatively impact an MT user’s experience. Figure 1 illustrates the use of exemplars to control the regional variety at inference time.
MT systems that do not support region or style distinctions may be biased toward varieties with more available data (the “web-majority” varieties). We observe this bias in a widely used proprietary MT system, with measurable negative effects for speakers of web-minority varieties (§6.2). One barrier to further research on this issue is the lack of a high-quality evaluation benchmark. Thus, to encourage more access to language technologies for speakers of web-minority varieties and more equitable NLP research, we make the following contributions: (1) We construct and release FRMT, a new dataset for evaluating few-shot region-aware translation from English to Brazilian/European Portuguese and Mainland/Taiwan Mandarin. (2) We evaluate predictions from a number of existing and custom-trained baseline systems on the FRMT task using automatic metrics. (3) We conduct detailed human evaluations of gold and model-based translations on FRMT, under all combinations of rater region and target region. (4) We analyze the correlation of automatic metrics and human evaluations on FRMT, and propose a new targeted metric for lexical accuracy.
2 Related Work
Textual style transfer aims to control fine-grained stylistic features of generated text. Earlier work leverages supervised parallel data (Jhamtani et al., 2017); later work assumes labeled but non-parallel training data (Shen et al., 2017; Li et al., 2018; Niu et al., 2018a), or forgoes training-time labels entirely, as in our setting, relying only on few-shot exemplars provided at inference time (Xu et al., 2020; Riley et al., 2021; Garcia et al., 2021). However, style transfer evaluation protocols are known to be lacking (Pang and Gimpel, 2019; Briakou et al., 2021; Hu et al., 2022), owing to the underspecification of stylistic attributes (e.g., formality, sentiment) and the absence of standardization across studies. Region-aware translation addresses these issues and provides a test-bed for exploring few-shot attribute control: MT evaluation methods are relatively mature, and many regional language varieties can be sufficiently delineated for the task.
Previous work has explored many sub-types of variety-targeted MT. Region-aware MT targets specific regions or dialects (Zbib et al., 2012; Costa-jussà et al., 2018; Honnet et al., 2018; Lakew et al., 2018; Sajjad et al., 2020; Wan et al., 2020; Kumar et al., 2021); formality-aware MT targets different formality levels (Niu et al., 2017, 2018b; Wang et al., 2019); and personalized MT aims to match an individual’s specific style (Michel and Neubig, 2018; Vincent, 2021). However, with few exceptions (e.g., Garcia et al., 2021), these works assume the availability of large-scale datasets containing examples with the target varieties explicitly labeled. In the present work, we design a benchmark that emphasizes few-shot adaptability. Although our dataset is limited to four regions and two languages, the few-shot setup and high degree of linguistic dissimilarity between the selected languages means that approaches performing well on the entire FRMT benchmark can be expected to generalize reasonably well to other languages, other regions, and other stylistic attributes.
Several existing parallel corpora cover regional language varieties, but have limitations that motivate us to construct a new high-quality, targeted dataset. e-PACT (Barreiro and Mota, 2017) comprises translations from English books into Portuguese variants, but is small and not easily accessible. OpenSubtitles (Lison et al., 2018) skews toward shorter utterances and is noisy due to automatic alignment. WIT3 (Cettolo et al., 2012) provides translations of TED-talk transcripts into many languages, but relies on volunteer translators, which may limit quality.
Popular shared tasks have not included region-targeted translation either: The Conference on Machine Translation (WMT) has included translation between similar languages (e.g., Akhbardeh et al., 2021), while the Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial) focuses mainly on classification and not translation (e.g., Zampieri et al., 2021).
Furthermore, we are not aware of previous work that (1) measures deltas in human evaluation metrics between the region-matched and region-mismatched settings, (2) correlates these with automated metrics, (3) offers tailored sub-tasks targeting region-differentiated lexical items and region-biased distractors, or (4) defines targeted metrics testing region-appropriateness.
3 FRMT Dataset
We introduce the FRMT dataset for evaluating the quality of few-shot region-aware machine translation. The dataset covers two regions each for Portuguese (Brazil and Portugal) and Mandarin (Mainland and Taiwan). These languages and varieties were selected for multiple reasons: (1) They have many speakers who can benefit from increased regional support in NLP. (2) Portuguese and Mandarin are linguistically very distinct, coming from different families; we therefore hypothesize that methods that perform well on both are more likely to generalize well to other languages. The dataset was created by sampling English sentences from Wikipedia and acquiring professional human translations in the target regional varieties. Final quality verification is done through manual evaluation by an independent set of translators, using the MQM protocol (Freitag et al., 2021a) that we also employ to evaluate system translation quality.
3.1 Data Sampling Method
FRMT seeks to capture region-specific linguistic differences, as well as potential distractors. To this end, we divide the dataset into three buckets (lexical, entity, random), each containing human translations of sentences extracted from different sets of English Wikipedia articles.[1]
Lexical:
We collect English lexical items for which the best translation into the target language differs depending on the target region. To source these, we rely on blogs and educational websites that list terms differing by region. We further validate each pair of translations by asking a native speaker of each region whether each translation is appropriate for the intended meaning in their region. We filter to only use pairs where exactly one translation is appropriate per region. This is done independently for Portuguese and Mandarin as target languages, yielding lists of 23 and 15 terms, respectively. For each term t, we extract up to 100 sentences from the beginning of the English Wikipedia article with title t.
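The core of this filtering step can be sketched as follows; the candidate pairs and native-speaker judgments in the example are illustrative placeholders, not entries from the released term lists.

```python
# Minimal sketch of the lexical-bucket term filter described above: keep only
# term pairs where, per the native-speaker judgments, exactly one of the two
# candidate translations is appropriate in each region.
def filter_term_pairs(candidates, judgments, regions):
    """candidates: {english_term: [translation_a, translation_b]}
    judgments:  {(translation, region): bool}  # native-speaker appropriateness
    Returns the terms passing the "exactly one appropriate per region" test."""
    kept = []
    for term, translations in candidates.items():
        if all(sum(judgments[(t, r)] for t in translations) == 1 for r in regions):
            kept.append(term)
    return kept

if __name__ == "__main__":
    candidates = {"bus": ["ônibus", "autocarro"]}
    judgments = {
        ("ônibus", "pt-BR"): True, ("ônibus", "pt-PT"): False,
        ("autocarro", "pt-BR"): False, ("autocarro", "pt-PT"): True,
    }
    print(filter_term_pairs(candidates, judgments, ["pt-BR", "pt-PT"]))  # ['bus']
```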
Entity:
We select entities that are strongly associated with specific regions under consideration (e.g., Lisbon and São Paulo), which may have adversarial effects for models that rely heavily on correlations learned from pretraining. Our selection comprises 38 Mandarin-focused and 34 Portuguese-focused entities. We extract up to 100 source sentences from the beginning of the English Wikipedia article about each selected entity.
Random:
For a more naturally distributed subset, we randomly sample 100 articles from Wikipedia’s collections of “featured” or “good” articles. Here, we take up to 20 sentences from the start of a randomly chosen section within each article. Unlike the other two buckets, this one features one common set of sentences to be translated into all four target variants.
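A minimal sketch of this sampling procedure, assuming a hypothetical `articles` mapping from title to a list of sections (each a list of sentences):

```python
import random

# Hypothetical input: {title: [[sentence, ...], ...]} drawn from Wikipedia's
# "featured"/"good" collections; the exact data source is wiki40b (see Notes).
def sample_random_bucket(articles, n_articles=100, max_sentences=20, seed=0):
    rng = random.Random(seed)
    titles = rng.sample(sorted(articles), n_articles)
    bucket = []
    for title in titles:
        section = rng.choice(articles[title])   # one randomly chosen section
        bucket.extend(section[:max_sentences])  # up to 20 sentences from its start
    return bucket
```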
3.2 Human Translation
Fourteen paid professionals translated the selected English texts into the four target language variants: 3 translators per Portuguese region and 4 per Mandarin region. For each region, each sentence was translated by one translator, resulting in one reference per source. Each translator translated non-overlapping chunks of the source data one sentence at a time in the order of the original text. Sentences that were rejected by at least one translator (e.g., for having too much non-English text) are not included in our dataset.
3.3 Corpus Statistics
For each bucket, we split our data into exemplar, development (dev), and test data. The exemplars are intended to be the only pairs where the region label is shown to the model, such as via few-shot or in-context learning (Brown et al., 2020). Providing these ensures increased comparability across methods on the FRMT benchmark, in addition to sidestepping potential domain mismatch issues by providing exemplars from the same domain (Wikipedia text) as the evaluation data.
Table 1 reports the number of released sentence pairs for each split of the dataset. Sentences from a given Wikipedia page appear only in a single split, ensuring a system cannot “cheat” by memorizing word–region associations from the exemplars, or by overfitting to words and entities while hill-climbing on the dev set.
Table 1: Number of released sentence pairs per bucket and split.

| Bucket | Split | Portuguese | Mandarin |
|---|---|---|---|
| Lexical | Exemplar | 118 | 173 |
| Lexical | Dev | 848 | 524 |
| Lexical | Test | 874 | 538 |
| Entity | Exemplar | 112 | 104 |
| Entity | Dev | 935 | 883 |
| Entity | Test | 985 | 932 |
| Random | Exemplar | 111 | 111 |
| Random | Dev | 744 | 744 |
| Random | Test | 757 | 757 |
| Total | Exemplar | 341 | 388 |
| Total | Dev | 2527 | 2151 |
| Total | Test | 2616 | 2227 |
Table 2 shows example items from each bucket.
Table 2: Example translations from each bucket (Portuguese), with the English source shown below each pair.

| Bucket | pt-BR | pt-PT |
|---|---|---|
| lexical | Em 2019, a Virgin Atlantic começou a permitir que suas comissárias de bordo femininas usassem calças e não usassem maquiagem. | Em 2019, a Virgin Atlantic começou a autorizar as assistentes de bordo a usar calças e a dispensar maquilhagem. |
| (en) | In 2019, Virgin Atlantic began to allow its female flight attendants to wear pants and not wear makeup. | |
| entity | Os ônibus são o meio mais barato de se movimentar por Natal. | Os autocarros são a maneira mais barata de viajar pelas localidades próximas de Natal. |
| (en) | Buses are the cheapest way to move around Natal. | |
| random | O suco causa alucinações psicodélicas intensas em quem o bebe, e a polícia logo o rastreou até a fazenda e partiu para prender Homer, Seth e Munchie. | O sumo provoca fortes alucinações psicadélicas a quem bebe do mesmo e a polícia rapidamente segue o rasto até à quinta, deslocando-se até lá para prender Homer, Seth e Munchie. |
| (en) | The juice causes intense psychedelic hallucinations in those who drink it, and the police quickly trace it to the farm and move in to arrest Homer, Seth, and Munchie. | |
3.4 Limitations
Our dataset is designed to capture differences in regional varieties, but capturing all such differences in a finite dataset is impossible. While we specifically target lexical differences, the terms were selected via a manual process based on online resources that discuss lexical differences in these languages, and these resources can sometimes be incorrect, outdated, or inattentive to rare words or words with more subtle differences. Other regional distinctions, such as grammatical differences, were not specifically targeted by our data bucketing process, and thus the degree to which they are captured by the dataset is determined by their likelihood to occur in translations of English Wikipedia text. This also means that differences that only surface in informal settings are unlikely to be included, as Wikipedia text has a generally formal style.
While we believe that methods that perform well on all four varieties included in FRMT should be applicable to other languages and varieties, measuring this would require a similar dataset with wider coverage. Constructing such a dataset requires only knowledge of regional differences to inform selection of source texts as in our lexical and entity buckets, and translators who are native speakers of the target varieties. An additional pool of MQM-trained translators would be needed to validate the collected translations for regional distinctiveness.
In spite of validation through MQM, it should be noted that the region-targeted translations we collected are not necessarily minimal contrastive pairs, but may include differences arising from factors other than regional variation, such as individual style preferences of the human translators.
4 Evaluation Metrics
While human judgments are the gold standard for evaluating machine-generated texts, collecting them can be time-consuming and expensive. For faster iteration, it can be helpful to measure progress against automatic metrics that are known to correlate well with human judgments. We hypothesize that common reference-based MT evaluation metrics might have differing sensitivities to regional differences, and so we conduct a human evaluation of several baseline models (see §6.1) and compute correlation of several automatic metrics with the human judgments. We also propose a new automated lexical accuracy metric that more directly targets region-awareness.
4.1 Human Evaluation
To obtain the highest fidelity human ratings, we use the expert-based Multidimensional Quality Metrics (MQM) evaluation framework proposed by Freitag et al. (2021a) and recommended by the WMT’21 Evaluation Campaign (Freitag et al., 2021b). We show expert raters chunks of 10 contiguous English sentences from our test set with one corresponding set of translations. Raters then identify errors in the translations, assigning a category and severity to each. Due to cost constraints, we evaluate 25% of the test set, evenly distributed across our three evaluation buckets. Within each region, each chunk is rated by three raters, who achieve interannotator consistency of 70.4 ± 2.2 (as 100-scaled intraclass correlation[3]).
Each output is shown to raters of both regions of the corresponding language. All Mandarin outputs are automatically converted into the rater’s region’s corresponding Han script (Mainland: simplified; Taiwan: traditional), using Google Translate “Chinese (Simplified)” ↔ “Chinese (Traditional)”, which as of March 2023 converts between these varieties using only basic script conversion rules.
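For illustration only, a comparable conversion step could be implemented with the open-source OpenCC converter; this is a stand-in assumption on our part, since the paper's evaluation used Google Translate for the conversion.

```python
# Illustrative only: OpenCC applies character-level simplified<->traditional
# mapping rules, analogous to the basic script conversion described above.
# Assumes the opencc-python-reimplemented package (pip install
# opencc-python-reimplemented).
from opencc import OpenCC

to_traditional = OpenCC("s2t")  # Simplified -> Traditional
to_simplified = OpenCC("t2s")   # Traditional -> Simplified

def to_rater_script(text, rater_region):
    # Convert model output into the Han script used in the rater's region.
    if rater_region == "zh-TW":
        return to_traditional.convert(text)
    return to_simplified.convert(text)
```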
4.2 Automatic Translation Quality Metrics
We evaluate the following automatic, reference-based metrics:

BLEU (Papineni et al., 2002)[4] and chrF (Popović, 2015)[5]: Standard surface-overlap metrics based on word n-grams and character n-grams, respectively, computed with SacreBLEU.
BLEURT (Sellam et al., 2020): A learned, model-based metric that has good correlation with human judgments of translation quality. To the best of our knowledge, BLEURT has not been evaluated with respect to human judgments of region-specific translation quality.
BLEURT-D{3,6,12} (Sellam et al., 2020): These distilled versions of BLEURT are less resource-intensive to run, and have 3, 6, and 12 layers, respectively. For all BLEURT variants, we use checkpoints released by its authors.
As in the human evaluation, all Mandarin outputs are converted into the target regional Han script before evaluation.
4.3 Correlation
For computing correlation, each data point is a score on a 10-sentence chunk of model output, covering the three models discussed in §6.1, using both matched and mismatched ratings. For MQM, this is the average of 30 weighted ratings: one per sentence per rater. The category/severity weights are described in Freitag et al. (2021a). For BLEU and chrF, which are corpus-level metrics, we take the 10 input/output sentence pairs as the “corpus”. For BLEURT, we use the average sentence-level score. Table 3 presents the correlation results, scaled by −100.[6]
Table 3: Correlation between automatic metrics and (negated) chunk-level MQM human judgments.

| Metric | Kendall’s τ | Pearson’s ρ |
|---|---|---|
| chrF | 43.6 | 48.4 |
| BLEU | 44.9 | 57.5 |
| BLEURT-D3 | 50.6 | 63.1 |
| BLEURT-D6 | 50.7 | 63.3 |
| BLEURT-D12 | 51.2 | 64.0 |
| BLEURT | 52.4 | 65.4 |
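As a concrete illustration of this chunk-level protocol, the sketch below scores each 10-sentence chunk with corpus BLEU and correlates the scores with negated chunk-level MQM penalties. The per-chunk MQM penalties (weighted per Freitag et al., 2021a) are assumed to be computed elsewhere; `sacrebleu` and `scipy` provide the BLEU and correlation routines.

```python
# Sketch of the chunk-level correlation protocol described above.
# `chunks` pairs each chunk's 10 model outputs with its 10 references;
# `mqm_chunk_penalties` holds the average weighted MQM penalty per chunk
# (30 ratings: 10 sentences x 3 raters), computed elsewhere.
import sacrebleu
from scipy.stats import kendalltau, pearsonr

def chunk_bleu(hypotheses, references):
    # Treat the 10 sentence pairs of a chunk as a mini "corpus".
    return sacrebleu.corpus_bleu(hypotheses, [references]).score

def metric_mqm_correlation(chunks, mqm_chunk_penalties):
    metric_scores = [chunk_bleu(hyps, refs) for hyps, refs in chunks]
    # Negate MQM so that higher = better for both series; the paper likewise
    # negates (and scales) the reported correlations because lower MQM is better.
    neg_mqm = [-p for p in mqm_chunk_penalties]
    tau, _ = kendalltau(metric_scores, neg_mqm)
    rho, _ = pearsonr(metric_scores, neg_mqm)
    return tau, rho
```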
We observe that the learned BLEURT metrics outperform the non-learned metrics by wide margins, in line with findings from Sellam et al. (2020) that neural methods outperform n-gram based methods. Additionally, the teacher model (BLEURT) outperforms the distilled student models, with larger students consistently outperforming smaller ones.
4.4 Lexical Accuracy
To assess a model’s ability to select lexical forms appropriate to the target region, we define a lexical accuracy metric. As discussed in §3.1, sentences in the lexical bucket are from Wikipedia articles containing specific words that we expect to have distinct regional translations. For instance, we include source sentences from the English Wikipedia article “Bus” in the Portuguese lexical bucket, as the word for bus is distinct in Brazil and Portugal (ônibus vs. autocarro). As the expected output words are known ahead of time, we can directly measure the rate at which a model selects region-appropriate variants.
Starting from the list of terms used to select articles for the lexical bucket, we remove the terms selected for the exemplars split in order to test generalization to unseen terms. This results in 18 term-pairs in Portuguese and 13 in Mandarin.
To account for Portuguese inflection, we considered matching lemmatized forms rather than surface forms, but found little difference in the resulting scores. We thus report results using naive surface matching, which avoids a dependency on a specific lemmatizer and improves reproducibility.
To disentangle lexical choice from script choice, we define lexical accuracy to be script-agnostic—e.g., for the word pineapple, if the target is zh-TW, we count both script forms of the Taiwan variant fènglí (鳳梨 and 凤梨) as correct, and both script forms of the Mainland variant bōluó (菠萝 and 菠蘿) as incorrect. This ensures that models are judged solely on their lexical choices, and prevents “gaming” the metric by only using the lexical forms and script of a single region.
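A minimal sketch of this metric is given below; the aggregation is simplified relative to the released evaluation scripts, and the variant lists shown are illustrative.

```python
# Sketch of the script-agnostic lexical accuracy metric described above.
# `term_variants` lists, for each term, the acceptable surface forms of the
# target-region variant and of the wrong-region variant (both Han scripts for
# Mandarin). Naive surface (substring) matching is used, as in the paper.
def lexical_accuracy(outputs, term_variants):
    """outputs: {term: a model translation of a sentence containing that term}
    term_variants: {term: (correct_forms, incorrect_forms)}"""
    correct = total = 0
    for term, text in outputs.items():
        good, bad = term_variants[term]
        has_good = any(form in text for form in good)
        has_bad = any(form in text for form in bad)
        if has_good or has_bad:  # only score outputs where some variant appears
            total += 1
            correct += int(has_good and not has_bad)
    return correct / total if total else 0.0

# Illustrative use for "pineapple" with target zh-TW:
variants = {"pineapple": ({"鳳梨", "凤梨"}, {"菠蘿", "菠萝"})}
print(lexical_accuracy({"pineapple": "我喜歡吃鳳梨。"}, variants))  # 1.0
```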
We emphasize that lexical choice is just one facet of region-aware translation, alongside morphology, syntax, and more. Even so, we believe this easy-to-calculate metric is worth iterating on, since a model that scores poorly on lexical accuracy clearly has not solved region-aware translation.
4.5 Reporting FRMT Results
For the FRMT task (as opposed to the dataset), we stipulate a key “few-shot” restriction: candidate models may not be intentionally exposed to any region-labeled data at any point during training, except for data from the FRMT exemplars split. This restriction covers both region-labeled monolingual data and region-labeled parallel translation data.[7] While it may not be difficult to obtain region labels for Brazil/Portugal or Mainland/Taiwan (e.g., by filtering web pages on top-level web domain), we intend for FRMT to serve as a measure of few-shot generalization to arbitrary regions and language varieties, for which obtaining labels may be much harder.
Researchers sharing FRMT results should report lexical accuracy, per-bucket BLEU, and the “FRMT” score (described in §6.2) on the test set, as shown in Tables 4 and 5. These metrics can be calculated with our provided evaluation scripts.[8]
Table 4: Test-set BLEU (BLEURT in parentheses) per baseline, bucket, and target region. The FRMT column summarizes per-language performance (see §6.2).

Portuguese:

| Model | Lexical pt-BR | Lexical pt-PT | Entity pt-BR | Entity pt-PT | Random pt-BR | Random pt-PT | FRMT pt |
|---|---|---|---|---|---|---|---|
| UR | 37.4 (69.9) | 32.7 (68.0) | 46.7 (76.3) | 40.8 (73.6) | 39.8 (70.7) | 35.3 (69.2) | 38.7 (71.3) |
| M4-UR | 46.7 (74.5) | 32.7 (69.7) | 53.5 (79.9) | 45.4 (77.5) | 43.1 (70.9) | 32.9 (68.4) | 42.0 (73.5) |
| M4-Prompts | 54.1 (77.1) | 36.9 (72.1) | 56.9 (81.1) | 47.3 (78.4) | 56.1 (77.5) | 41.0 (73.7) | 48.2 (76.6) |
| M4-Prompts FT | 45.5 (70.1) | 32.5 (67.4) | 48.6 (73.8) | 40.7 (72.8) | 48.1 (70.5) | 36.9 (69.0) | 41.7 (70.6) |
| PaLM 8B | 38.6 (69.8) | 26.7 (65.8) | 45.9 (75.9) | 38.0 (73.6) | 39.3 (69.4) | 32.1 (67.8) | 36.5 (70.4) |
| PaLM 62B | 49.5 (75.9) | 36.7 (72.4) | 55.4 (80.1) | 46.1 (77.8) | 50.3 (75.2) | 41.5 (73.5) | 46.3 (75.8) |
| PaLM 540B | 53.7 (77.1) | 40.1 (73.9) | 59.0 (81.2) | 49.5 (79.0) | 54.8 (76.9) | 45.6 (75.5) | 50.2 (77.3) |
| Google Translate | 56.2 (78.7) | 35.6 (72.3) | 56.3 (81.2) | 46.9 (78.3) | 65.2 (80.5) | 42.9 (75.0) | 49.8 (77.6) |

Mandarin:

| Model | Lexical zh-CN | Lexical zh-TW | Entity zh-CN | Entity zh-TW | Random zh-CN | Random zh-TW | FRMT zh |
|---|---|---|---|---|---|---|---|
| UR | 22.6 (58.5) | 13.8 (56.0) | 26.7 (67.1) | 19.5 (65.3) | 26.4 (62.1) | 20.4 (61.0) | 21.3 (61.7) |
| M4-UR | 33.3 (65.0) | 18.9 (58.2) | 43.2 (73.0) | 31.4 (70.4) | 40.8 (65.4) | 30.8 (63.6) | 32.5 (65.9) |
| M4-Prompts | 33.3 (64.9) | 18.3 (57.6) | 44.2 (72.5) | 32.0 (68.7) | 43.7 (67.0) | 32.2 (63.4) | 33.3 (65.6) |
| M4-Prompts FT | 33.8 (65.7) | 18.8 (59.0) | 44.8 (73.2) | 31.6 (69.8) | 42.7 (66.7) | 31.5 (64.0) | 33.2 (66.4) |
| PaLM 8B | 17.6 (55.7) | 13.3 (52.3) | 28.1 (65.7) | 24.4 (63.9) | 21.6 (56.3) | 18.2 (56.1) | 20.4 (58.3) |
| PaLM 62B | 29.2 (62.2) | 20.4 (59.8) | 40.2 (71.8) | 33.0 (69.9) | 34.5 (64.0) | 26.0 (63.1) | 30.3 (65.1) |
| PaLM 540B | 34.8 (66.5) | 24.6 (63.3) | 44.9 (74.7) | 35.2 (72.5) | 40.0 (67.8) | 29.6 (66.0) | 34.5 (68.4) |
| Google Translate | 39.7 (68.0) | 21.9 (61.8) | 50.4 (75.0) | 37.0 (72.2) | 56.1 (72.0) | 39.9 (68.7) | 40.1 (69.6) |
Table 5: Lexical accuracy scores on the test set.

| Model | pt | zh |
|---|---|---|
| Gold | 98.6 | 94.4 |
| UR | 50.4 | 50.6 |
| M4-UR | 51.2 | 50.9 |
| M4-Prompts | 66.7 | 50.0 |
| M4-Prompts FT | 66.7 | 51.0 |
| PaLM 8B | 85.0 | 69.0 |
| PaLM 62B | 90.4 | 70.8 |
| PaLM 540B | 93.2 | 83.6 |
| Google Translate | 50.0 | 50.0 |
We also recommend reporting BLEURT scores, but recognize that this may not always be possible, as it requires significantly more computational resources. Similarly, we encourage human evaluation using MQM as a gold standard, but do not wish to promote this as a community metric, due to its impracticality for many researchers and the potential confound of having different rater pools.
Finally, for any model candidate, it is important to report how many exemplars were supplied for each variety. To improve comparability, we recommend 0, 10, or 100 exemplars per region.
5 Baseline Models
We evaluate a handful of academic MT models that claim some ability to provide few-shot controllable translations. We also evaluate a commercial MT system that does not distinguish between these regional varieties.
Our first baseline is the Universal Rewriter (UR) of Garcia et al. (2021), which supports multilingual style transfer and translation. It is initialized from an mT5-XL checkpoint (Xue et al., 2021) and finetuned on a combination of monolingual and parallel data from mC4 and OPUS, respectively. We train it with a sequence length of 128 instead of 64, to be directly comparable to our other models.
Our second baseline is UR finetuned from the Massively Multilingual Massive Machine translation (M4) model of Siddhant et al. (2022) instead of mT5 (M4-UR). We hypothesize that initializing from a model explicitly designed for translation will outperform one trained as a general language model. For both UR and M4-UR, we use the first 100 exemplars from the lexical buckets.
Our third baseline uses natural language prompting to control the regional variety of M4’s output (M4-Prompts), such as prefixing the input with “A Brazilian would write it like this:”. This is motivated by earlier work using this technique effectively for large language models (Wei et al., 2022; Sanh et al., 2022; Brown et al., 2020), and more recent work applying it to region-aware MT (Garcia and Firat, 2022).
Our fourth baseline finetunes the M4-Prompts model, where the source-side language tags used to induce the target language are replaced with prompts of the form “Translate to [language]:”. This model (M4-Prompts FT) is designed to explicitly introduce prompting behavior. At inference time, we replace “[language]” with the variety name (e.g., “Brazilian Portuguese”). Neither M4-Prompts nor M4-Prompts FT use exemplars.
Our next three baselines are different-sized versions of PaLM (Chowdhery et al., 2022), a large language model that has demonstrated remarkable zero-shot and few-shot performance on a variety of tasks (PaLM 540B, PaLM 62B, and PaLM 8B, referring to their approximate parameter counts). The prompt for these models begins with “Translate the following texts from English to [language variety]” and is followed by ten exemplars selected randomly from the lexical bucket.[9] Each exemplar occupies two lines: first the English text, prefixed by “English:”, and then the translation in the target variety, prefixed by the variety’s name. At the end of the prompt, we show the model the input text and the language variety prefix, and take the first decoded line of text.
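The prompt format described above can be sketched as follows; exact whitespace and the rendering of variety names are assumptions beyond what the text specifies.

```python
# Sketch of the few-shot prompt format described above. The instruction line
# and per-line prefixes follow the description in the text; formatting details
# such as newlines are assumptions.
def build_palm_prompt(exemplars, source_text, variety_name):
    """exemplars: list of (english_sentence, target_translation) pairs."""
    lines = [f"Translate the following texts from English to {variety_name}"]
    for en, translation in exemplars:
        lines.append(f"English: {en}")
        lines.append(f"{variety_name}: {translation}")
    # Finally, show the input and the target-variety prefix; the model's first
    # decoded line is taken as the translation.
    lines.append(f"English: {source_text}")
    lines.append(f"{variety_name}:")
    return "\n".join(lines)

print(build_palm_prompt(
    [("The bus was late.", "O autocarro estava atrasado.")],
    "Where is the bus stop?", "European Portuguese"))
```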
Finally, we examine Google Translate,[10] a publicly available commercial MT model that does not support regional varieties for Portuguese or Mandarin (though it does support conversion between traditional and simplified scripts). We evaluate this system mainly to test the hypothesis that variety-agnostic systems will be biased toward the web-majority variety.
6 Baseline Model Performance
6.1 Human Evaluation Results
We select three baseline models for human evaluation: M4-UR, M4-Prompts, and PaLM 540B, covering a variety of modeling techniques.
Figure 2 presents human evaluation of our baselines on the 25% sample of our test set described in §4.1. For the gold data, we observe that raters of all regions prefer translations from their own region (the “matched” case) over translations from the other region (the “mismatched” case) in all three buckets; when averaged over buckets, the MQM penalties for the matched and mismatched cases are significantly different (1.73 matched and 3.55 mismatched; t = −3.34; p < 0.001). This indicates that, despite the limitations discussed in §3.4, our data collection process succeeded in producing regionally distinct translations. This effect is strongest in the lexical bucket, presumably due to the high rate of region-distinct terms in these sentences.
In Portuguese, we find that all models perform better in the region-matched setting, indicating that each model has some ability to localize to Brazil and Portugal. However, in Mandarin, apart from PaLM’s lexical bucket, region match does not lead to MQM gains, indicating that these models are not able to produce better, more region-specific translations in this case.
Comparing across models, we find that PaLM performs the best, followed by M4-Prompts and then M4-UR, consistently across both Portuguese and Mandarin. PaLM performs particularly well in the lexical bucket, suggesting that larger models may be better suited to the task of memorizing region-specific lexical variants.
For Mandarin, a large gap remains between expert translations and our baselines: Averaged over buckets, the gold matched MQM penalty is 2.5 vs. PaLM’s 8.8. It is apparent that better region handling will be needed to close this gap, since our baselines show much weaker match/mismatch deltas than the gold translations: the average gold mismatched-minus-matched penalty was 2.7, while PaLM’s was −0.3.
For Portuguese, while PaLM gives impressive results, there is still a meaningful gap with expert translation: Averaged over buckets, the gold MQM penalty was 2.1 vs. PaLM’s 2.7, indicating headroom for our task. There is also the important question of whether competitive performance can be achieved with smaller models, which are better suited for production use-cases.
Figure 3 breaks down scores by rater and target region, over the full 25% sample. As before, in each setting, raters prefer region-matched over mismatched gold translations. For Portuguese, we find that our pt-PT raters were “harder graders” than our pt-BR raters, with a delta of +2 MQM between the regions in both matched and mismatched settings; by contrast, our Mandarin raters were well calibrated across regions.
We further examined model performance on the entity bucket, to test whether the presence of “distractor” entities (associated with the non-target region) would hurt translation quality, but we did not find significant differences in MQM scores. Still, we note isolated examples of this effect; for instance, when targeting pt-BR, the M4-Prompts model produces the pt-PT spelling património (cf. pt-BR patrimônio), but only when the English source contains the words Lisbon or Portugal. We expect the entity bucket will be useful to researchers looking for similar effects.
6.2 Automated Metric Results
Table 4 shows performance of our baseline models on the automated metrics BLEU and BLEURT. The “FRMT” score is a summary of per-language performance, calculated as the geometric mean across regions of the arithmetic mean across buckets.
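Read literally, that summary score can be computed as in this short sketch (model and bucket values taken from Table 4 for illustration):

```python
# Minimal sketch of the "FRMT" summary score: geometric mean across regions of
# the arithmetic mean across buckets, shown here for Portuguese BLEU.
from statistics import fmean, geometric_mean

def frmt_score(per_bucket_scores):
    """per_bucket_scores: {region: {bucket: score}}"""
    region_means = [fmean(buckets.values()) for buckets in per_bucket_scores.values()]
    return geometric_mean(region_means)

palm_540b_pt = {
    "pt-BR": {"lexical": 53.7, "entity": 59.0, "random": 54.8},
    "pt-PT": {"lexical": 40.1, "entity": 49.5, "random": 45.6},
}
print(round(frmt_score(palm_540b_pt), 1))  # ~50.2, matching Table 4
```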
As mentioned at the outset, we observe that region-agnostic models have a strong bias toward the region with larger presence in web-crawled corpora. This is especially apparent in the lexical bucket, where Google Translate has a +20.6 BLEU gap between pt-BR and pt-PT and a +17.8 gap between zh-CN and zh-TW.
Within the lexical bucket, we note that PaLM outperforms the public Google Translate model in web-minority regions (pt-PT and zh-TW) despite being trained in a fully unsupervised manner. This highlights that even with minimal region-labeled data (10 exemplars), it is possible to make meaningful progress over region-agnostic approaches.
Table 5 shows lexical accuracy performance, assessing whether specific terms receive region-appropriate translations. Here, the PaLM models outperform alternatives by a wide margin. As even the smallest PaLM model has more than 2× the parameters of our other baselines (which have 3.7B parameters each), this suggests that model capacity is a key ingredient for learning to use region-specific terminology in a few-shot manner. Still, there is a wide gap compared to human performance.
Notably, while the smaller PaLM models outperform our UR and M4 baselines on lexical accuracy, they underperform on BLEU and BLEURT. This highlights that using region-appropriate terminology is only a small part of the translation task, and at smaller sizes, models designed specifically for translation have the advantage.
6.3 Mismatched Outputs
Given a reference in a specified language variety (e.g., pt-PT), a “good” model should achieve a higher score when translating into that variety (the “matched” case) than an alternative variety (e.g., pt-BR; the “mismatched” case). To measure the extent to which this holds for our baseline models, we show the delta between matched and mismatched outputs on the test set in Table 6.
Table 6: Matched-minus-mismatched deltas on the test set, in BLEU (BLEURT in parentheses).

Portuguese:

| Model | Lexical pt-BR | Lexical pt-PT | Entity pt-BR | Entity pt-PT | Random pt-BR | Random pt-PT | ΔFRMT pt |
|---|---|---|---|---|---|---|---|
| UR | 1.3 (1.0) | −0.2 (−0.8) | 1.0 (0.8) | −1.0 (−0.8) | 1.5 (0.8) | −0.7 (−0.7) | 0.3 (0.0) |
| M4-UR | 1.0 (−0.2) | −0.1 (0.4) | 0.2 (0.0) | 0.1 (0.1) | 0.9 (−0.2) | −0.5 (0.4) | 0.2 (0.1) |
| M4-Prompts | 3.6 (1.9) | 2.2 (0.7) | 1.8 (0.5) | 0.9 (0.2) | 2.4 (1.2) | 0.5 (−0.4) | 1.9 (0.6) |
| M4-Prompts FT | 3.2 (−0.1) | 1.9 (2.2) | 1.5 (−1.0) | 0.8 (1.4) | 2.0 (−0.7) | 0.5 (1.3) | 1.6 (0.5) |
| PaLM 8B | 6.5 (2.2) | 1.7 (1.0) | 4.6 (0.8) | 0.7 (0.4) | 4.3 (0.9) | 0.5 (0.1) | 2.8 (0.9) |
| PaLM 62B | 13.1 (4.0) | 5.2 (2.7) | 9.6 (1.7) | 2.2 (1.1) | 8.0 (1.2) | 2.7 (0.9) | 6.5 (1.9) |
| PaLM 540B | 13.8 (4.1) | 7.0 (3.2) | 9.7 (1.7) | 4.0 (1.5) | 9.1 (1.4) | 3.9 (1.5) | 7.7 (2.2) |

Mandarin:

| Model | Lexical zh-CN | Lexical zh-TW | Entity zh-CN | Entity zh-TW | Random zh-CN | Random zh-TW | ΔFRMT zh |
|---|---|---|---|---|---|---|---|
| UR | 1.0 (−0.1) | −0.4 (0.2) | 1.0 (0.5) | −0.3 (−0.4) | 1.8 (1.1) | −0.9 (−0.8) | 0.2 (0.1) |
| M4-UR | −0.1 (−0.3) | 0.2 (0.3) | 0.3 (0.1) | 0.0 (−0.1) | −0.1 (−0.1) | −0.1 (0.2) | 0.0 (0.0) |
| M4-Prompts | 0.6 (1.6) | −0.5 (−1.8) | 1.3 (1.2) | −0.5 (−1.2) | 1.3 (2.0) | −0.7 (−1.4) | 0.1 (0.0) |
| M4-Prompts FT | 1.5 (1.0) | −0.7 (−0.9) | 2.0 (0.8) | −1.2 (−0.8) | 1.6 (1.0) | −1.2 (−1.1) | 0.1 (0.0) |
| PaLM 8B | 2.0 (1.1) | 1.9 (0.7) | 3.0 (1.0) | 0.2 (−0.9) | 2.4 (0.4) | 1.1 (0.2) | 1.7 (0.4) |
| PaLM 62B | 5.9 (1.7) | 3.5 (1.5) | 4.3 (0.8) | 1.2 (0.0) | 5.9 (1.1) | 0.2 (0.6) | 3.3 (0.9) |
| PaLM 540B | 9.8 (4.2) | 4.7 (1.8) | 6.4 (1.4) | 0.5 (0.0) | 9.0 (2.0) | −0.5 (0.4) | 4.7 (1.6) |
We observe that in the Portuguese case, most models do score better when asked to produce text in the same regional variety as the reference. However, when it comes to Mandarin, most models—PaLM being the exception—struggle to produce zh-TW output that outperforms their zh-CN output when evaluated against a zh-TW reference, indicating that the attempts to appropriately stylize the generated text degrade its quality more than they improve its regional acceptability.
6.4 Effect of Exemplars
To test sensitivity to the number and choice of exemplars, we evaluate PaLM 540B while varying the set of exemplars used. Table 7 shows the effect of ablating the number of exemplars in the range 0–10. We observe that even zero exemplars yield reasonably strong results, a single exemplar is sufficient for strong performance, and gains from additional exemplars are marginal.
Table 7: PaLM 540B test-set BLEU (BLEURT in parentheses) as the number of exemplars varies.

Portuguese:

| Exemplars | Lexical pt-BR | Lexical pt-PT | Entity pt-BR | Entity pt-PT | Random pt-BR | Random pt-PT | FRMT pt |
|---|---|---|---|---|---|---|---|
| 0 | 50.7 (75.7) | 35.6 (71.2) | 56.4 (80.3) | 47.4 (77.6) | 53.0 (76.0) | 42.4 (73.6) | 47.2 (75.7) |
| 1 | 52.0 (77.1) | 39.7 (73.7) | 57.0 (81.2) | 49.1 (78.5) | 54.5 (77.0) | 45.1 (75.2) | 49.3 (77.1) |
| 5 | 53.2 (77.0) | 40.0 (74.0) | 58.5 (81.2) | 48.6 (78.7) | 54.8 (76.8) | 45.2 (75.3) | 49.8 (77.2) |
| 7 | 53.5 (77.1) | 40.0 (73.8) | 58.6 (81.3) | 48.8 (78.8) | 55.2 (77.0) | 45.8 (75.5) | 50.0 (77.2) |
| 10 | 53.7 (77.1) | 40.1 (73.9) | 59.0 (81.2) | 49.5 (79.0) | 54.8 (76.9) | 45.6 (75.5) | 50.2 (77.3) |

Mandarin:

| Exemplars | Lexical zh-CN | Lexical zh-TW | Entity zh-CN | Entity zh-TW | Random zh-CN | Random zh-TW | FRMT zh |
|---|---|---|---|---|---|---|---|
| 0 | 32.7 (64.5) | 22.2 (61.3) | 40.3 (72.7) | 32.8 (70.2) | 38.7 (65.6) | 29.0 (63.1) | 32.3 (66.2) |
| 1 | 35.1 (66.4) | 24.6 (64.3) | 43.7 (74.2) | 35.2 (72.8) | 39.9 (67.6) | 31.1 (66.4) | 34.6 (68.6) |
| 5 | 35.1 (66.7) | 25.0 (63.9) | 44.7 (74.6) | 35.3 (72.8) | 40.0 (67.6) | 31.8 (66.7) | 35.0 (68.7) |
| 7 | 35.4 (66.6) | 25.3 (64.2) | 45.3 (74.7) | 34.9 (72.6) | 40.7 (68.0) | 30.5 (66.6) | 35.0 (68.8) |
| 10 | 34.8 (66.5) | 24.6 (63.4) | 44.8 (74.7) | 35.2 (72.5) | 40.0 (67.8) | 29.6 (66.0) | 34.5 (68.4) |
To measure the variance in performance across exemplar choice, we re-run PaLM 540B evaluation three times each using either 1 or 10 exemplars, resampling the exemplars on each run. We find that the choice of exemplar(s) has a relatively small effect—with 10 exemplars, the standard deviations of FRMT-BLEU and FRMT-BLEURT across all four runs (including the original) were below 0.5 in each language, and with just 1 exemplar, the standard deviations remained under 1.0.
6.5 Qualitative Analysis
To provide additional insights on regional differences and model behavior, we manually inspect dev set gold translations and model outputs across the models sent to human evaluation. In both languages, we observe regional differences beyond just the lexical items underlying our lexical bucket. For instance, in Table 8 and similar examples, we find that “on <date>” phrases tend to be translated with differing prepositions—em in pt-BR and a in pt-PT. As another example, in Table 9, we observe that both gold and PaLM outputs use the term 程式 (chéngshì, en: program) only in zh-TW when translating the phrase “coding errors”.
Table 8: Gold and model translations of an example dev sentence, by target Portuguese variety.

| Model | Target: pt-BR | Target: pt-PT |
|---|---|---|
| Gold | A legalização do casamento entre pessoas do mesmo sexo em Portugal ocorreu em 17 de maio de 2010. | O casamento entre pessoas do mesmo sexo foi legalizado em Portugal a 17 de maio de 2010. |
| PaLM | O casamento entre pessoas do mesmo sexo em Portugal foi legalizado em 17 de maio de 2010. | O casamento entre pessoas do mesmo sexo em Portugal foi legalizado a 17 de Maio de 2010. |
| M4-Prompts | O casamento entre pessoas do mesmo sexo em Portugal foi legalizado em 17 de maio de 2010. | O casamento entre pessoas do mesmo sexo em Portugal foi legalizado a 17 de maio de 2010. |
| M4-UR | O casamento homoafetivo em Portugal foi legalizado em 17 de Maio de 2010. | O casamento homoafetivo em Portugal foi legalizado a 17 de Maio de 2010. |
In many cases, PaLM uses the expected region-specific lexical forms, as already reflected in our lexical accuracy metric. By contrast, we observe that the M4-based models are more prone to use terms from the web-majority region (pt-BR and zh-CN) irrespective of the target. For example, in Table 9, PaLM matches the gold translations in using the region-specific terms for software—zh-CN: 软件 (ruǎnjiàn), zh-TW: 軟體 (ruǎntǐ)—while the M4-based models use the zh-CN term throughout (simplified: 软件, traditional: 軟件).
7 Conclusion
In this paper, we introduced FRMT, a new benchmark for evaluating few-shot region-aware machine translation. Our dataset covers two regional varieties each of Portuguese and Mandarin, and enables fine-grained comparison across region-matched and mismatched conditions, and across different classes of inputs (lexical, entity, random).
While we found the large-scale generalist model PaLM 540B to show impressive few-shot region control, there is still significant room for improvement. None of the models we evaluated match human performance, and the gap is particularly large in Mandarin. Additionally, there remains an open research question as to whether robust few-shot regional control can be achieved at more modest model scales.
We are eager to see progress on FRMT, as methods that do well in this few-shot setting are likely to be easily extensible to other regions and styles. We anticipate that the flexibility to adapt to new output styles in the absence of extensive labeled data will be a key factor in making generative text models more useful, inclusive, and equitable.
Acknowledgments
For helpful discussion and comments, we thank Jacob Eisenstein, Noah Fiedel, Macduff Hughes, and Mingfei Lau. For feedback around regional differences, we thank Andre Araujo, Chung-Ching Chang, Andreia Cunha, Filipe Gonçalves, Nuno Guerreiro, Mandy Guo, Luis Miranda, Vitor Rodrigues, and Linting Xue.
Notes
[1] As Wikipedia data source we use the training split of wiki40b (v1.3.0) by Guo et al. (2020), available at https://www.tensorflow.org/datasets/catalog/wiki40b.

[3] Using the icc function of R’s irr library (Gamer et al., 2019).

[4] SacreBLEU version strings for {Portuguese, Mandarin}: BLEU|nrefs:1|case:mixed|eff:no|tok:{13a,zh}|smooth:exp|version:2.3.1.

[5] SacreBLEU version string: chrF2|nrefs:1|case:mixed|eff:yes|nc:6|nw:0|space:no|version:2.3.1.

[6] We negate the correlations with MQM because higher quality corresponds to lower MQM scores.

[7] Models may train on multilingual web crawl data, as is common practice, as long as supervised region labels are not provided. We allow that some implicit or explicit region labels may appear by chance within the unsupervised data.

[8] Scripts available at https://bit.ly/frmt-task.

[9] The model has a fixed input sequence length, including the prompt, and a fixed output sequence length. We ensure that the ten exemplars are short enough to leave at least 128 tokens for the input text, to match the 128 tokens allotted to the output.

[10] https://translate.google.com, accessed April 4, 2022.
References
Author notes
Equal contribution.
Action Editor: Deyi Xiong