Table 4: Comparison with the pipeline model. Our Joint-Multi models outperform the pipeline models by a large margin. When BERT is used, dependency parsing performance improves significantly, although Chinese word segmentation does not improve much.
Models             |        CTB-5                        |        CTB-7                        |        CTB-9
                   | F1seg  F1udep  UAS    F1ldep  LAS   | F1seg  F1udep  UAS    F1ldep  LAS   | F1seg  F1udep  UAS    F1ldep  LAS
Biaffine†          | −      −       88.81  −       85.63 | −      −       86.06  −       81.33 | −      −       86.21  −       81.57
Pipeline§          | 98.50  86.50   86.71  83.46   83.67 | 96.50  80.62   80.49  76.58   76.46 | 97.09  81.54   81.61  77.34   77.40
Joint-Multi        | 98.48  87.86   88.08  85.08   85.23 | 96.64  81.80   81.80  77.84   77.83 | 97.20  82.15   82.23  78.08   78.14
Joint-Multi-BERT   | 98.46  89.59   89.97  85.94   86.30 | 97.06  85.06   85.12  80.71   80.76 | 97.63  85.66   85.74  81.71   81.77
† The results are evaluated by a word-level biaffine parser on the gold-segmented sentences.

§ The pipeline model first uses the Joint-SegOnly model to segment the sentence, then uses the word-level biaffine parser to obtain the parsing result.
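When segmentation is predicted rather than gold, word-level dependency scores are naturally reported as F1 over character-span-matched words and arcs: a predicted word (or arc) counts only if its character span, its head's span, and, for the labeled score, its label all match the gold annotation exactly. The sketch below illustrates this style of scoring; it is a minimal illustration under assumptions, not the paper's evaluation script, and the function names, the root-arc encoding, and the toy sentence are hypothetical.

```python
from typing import List, Set, Tuple

def word_spans(words: List[str]) -> List[Tuple[int, int]]:
    """Convert a word sequence into character-offset spans."""
    spans, start = [], 0
    for w in words:
        spans.append((start, start + len(w)))
        start += len(w)
    return spans

def arc_set(words: List[str], heads: List[int], labels: List[str],
            labeled: bool) -> Set[Tuple]:
    """Represent each dependency as (dependent span, head span[, label]).
    Head index 0 denotes the root, CoNLL-style (an assumption for this sketch)."""
    spans = word_spans(words)
    arcs = set()
    for i, (h, lab) in enumerate(zip(heads, labels)):
        head_span = ("ROOT",) if h == 0 else spans[h - 1]
        arcs.add((spans[i], head_span) + ((lab,) if labeled else ()))
    return arcs

def f1(pred: Set, gold: Set) -> float:
    """Exact-match F1 over spans or arcs."""
    correct = len(pred & gold)
    if correct == 0:
        return 0.0
    p, r = correct / len(pred), correct / len(gold)
    return 2 * p * r / (p + r)

# Toy example (hypothetical sentence): the predicted segmentation over-splits one word,
# so segmentation F1 and both dependency F1 scores are penalized.
gold_words, gold_heads, gold_labels = ["我", "喜欢", "自然语言"], [2, 0, 2], ["nsubj", "root", "dobj"]
pred_words, pred_heads, pred_labels = ["我", "喜欢", "自然", "语言"], [2, 0, 4, 2], ["nsubj", "root", "nn", "dobj"]

f1_seg  = f1(set(word_spans(pred_words)), set(word_spans(gold_words)))
f1_udep = f1(arc_set(pred_words, pred_heads, pred_labels, labeled=False),
             arc_set(gold_words, gold_heads, gold_labels, labeled=False))
f1_ldep = f1(arc_set(pred_words, pred_heads, pred_labels, labeled=True),
             arc_set(gold_words, gold_heads, gold_labels, labeled=True))
print(f"F1seg={f1_seg:.2f}  F1udep={f1_udep:.2f}  F1ldep={f1_ldep:.2f}")
```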
