Table 2: Comparison of multilingual QA evaluation sets. Answer independence indicates whether the gold answer is independent of a retrieved document, and parallel questions indicates whether examples are the same across languages.

| Multilingual QA Evaluation Set | Answer Independence | Parallel Questions | Language Fam. Branches | Languages | Total Examples |
|---|---|---|---|---|---|
| XQA (Liu et al., 2019a) | ✓ | × | – | – | 28k |
| MLQA (Lewis et al., 2020) | × | ✓ | – | – | 46k |
| XQuAD (Artetxe et al., 2020b) | × | ✓ | 11 | 11 | 13k |
| TyDi (Clark et al., 2020) | × | × | 11 | 11 | 204k |
| Xor-QA (Asai et al., 2021) | × | × | – | – | 40k |
| MKQA (This work) | ✓ | ✓ | 14 | 26 | 260k |