Table 5: Ablation experiments for Joint-Multi models.

CTB-5

| Models | F1_seg | F1_udep | UAS | F1_ldep | LAS |
| --- | --- | --- | --- | --- | --- |
| Joint-Multi | 98.48 | 87.86 | 88.08 | 85.08 | 85.23 |
| -pre-trained | 97.72 | 82.56 | 82.70 | 79.80 | 70.93 |
| -n-gram | 97.72 | 83.44 | 83.60 | 80.24 | 80.41 |

CTB-7

| Models | F1_seg | F1_udep | UAS | F1_ldep | LAS |
| --- | --- | --- | --- | --- | --- |
| Joint-Multi | 96.64 | 81.80 | 81.80 | 77.84 | 77.83 |
| -pre-trained | 95.52 | 76.35 | 76.22 | 72.16 | 72.04 |
| -n-gram | 95.21 | 77.37 | 77.11 | 72.94 | 72.69 |

CTB-9

| Models | F1_seg | F1_udep | UAS | F1_ldep | LAS |
| --- | --- | --- | --- | --- | --- |
| Joint-Multi | 97.20 | 82.15 | 82.23 | 78.08 | 78.14 |
| -pre-trained | 96.56 | 78.93 | 78.93 | 74.35 | 74.37 |
| -n-gram | 95.85 | 78.55 | 78.41 | 73.94 | 73.81 |

‘-pre-trained’ denotes the model trained without the pre-trained embeddings.

‘-n-gram’ denotes the model trained with the bigram and trigram embeddings removed; only randomly initialized and pre-trained character embeddings are used.
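As a rough illustration of the ‘-n-gram’ ablation (this is a sketch under assumed names and dimensions, not the authors' code), each character's input vector can be built by concatenating its character embedding with embeddings of the bigram and trigram starting at that position; the ablated model keeps only the character embedding.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
DIM = 50  # assumed embedding size for every table

# Hypothetical embedding tables: any unseen key gets (and keeps) a random
# vector, standing in for an <unk> entry or a trained lookup table.
char_emb = defaultdict(lambda: rng.standard_normal(DIM))
bi_emb = defaultdict(lambda: rng.standard_normal(DIM))
tri_emb = defaultdict(lambda: rng.standard_normal(DIM))

def embed(sent, use_ngrams=True):
    """Per-character input: [char ; bigram ; trigram] embeddings when
    use_ngrams is True; the '-n-gram' ablation uses the char part alone."""
    chars = list(sent) + ["</s>", "</s>"]  # pad so n-grams exist at the end
    rows = []
    for i in range(len(sent)):
        parts = [char_emb[chars[i]]]
        if use_ngrams:
            parts.append(bi_emb[chars[i] + chars[i + 1]])
            parts.append(tri_emb[chars[i] + chars[i + 1] + chars[i + 2]])
        rows.append(np.concatenate(parts))
    return np.stack(rows)

full = embed("中文分词")                      # shape (4, 150): char + bi + tri
ablated = embed("中文分词", use_ngrams=False)  # shape (4, 50): char only
```

The ablation then amounts to a narrower input layer: everything downstream of the concatenation is unchanged, which is why the F1_seg drop is small while the parsing metrics degrade more.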
