Table 2: Main results on the test sets of the different datasets. Our Joint-Multi model outperforms previous joint models, and Joint-Multi-BERT further improves dependency parsing significantly.
Models                     CTB-5            CTB-7            CTB-9
                           F1seg   F1udep   F1seg   F1udep   F1seg   F1udep
Hatori et al. (2012)       97.75   81.56    95.42   73.58    −       −
Zhang et al. (2014) STD    97.67   81.63    95.53   75.63    −       −
Zhang et al. (2014) EAG    97.76   81.70    95.39   75.56    −       −
Zhang et al. (2015)        98.04   82.01    −       −        −       −
Kurita et al. (2017)       98.37   81.42    95.86   74.04    −       −

Joint-Binary               98.45   87.24    96.57   81.34    97.10   81.67
Joint-Multi                98.48   87.86    96.64   81.80    97.20   82.15
Joint-Multi-BERT           98.46   89.59    97.06   85.06    97.63   85.66

STD and EAG in Zhang et al. (2014) denote the arc-standard and the arc-eager models, respectively.

F1seg and F1udep are the F1 scores for Chinese word segmentation and unlabeled dependency parsing, respectively.
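As a minimal sketch of how a segmentation F1 score like F1seg is typically computed (the exact evaluation script used here is not shown, so the helper names below are illustrative): each word is converted to its character span, and a predicted word counts as correct only when its boundaries exactly match a gold word.

```python
def seg_spans(words):
    """Map a segmented sentence (list of words) to a set of character spans."""
    spans, start = set(), 0
    for w in words:
        spans.add((start, start + len(w)))
        start += len(w)
    return spans

def seg_f1(gold_words, pred_words):
    """Span-level F1: precision/recall over exactly matching word boundaries."""
    gold, pred = seg_spans(gold_words), seg_spans(pred_words)
    tp = len(gold & pred)  # words whose boundaries match exactly
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Example: one of three predicted words matches the gold segmentation,
# so precision = recall = 1/3 and F1 = 1/3.
print(seg_f1(["中国", "进出口", "银行"], ["中国", "进出", "口银行"]))
```

F1udep is computed analogously, except a dependency arc is counted as correct only if both the head word and the dependent word are segmented correctly and the (unlabeled) arc between them matches the gold tree.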
