Table 8

Dependency SRL results on the CoNLL 2009 English WSJ test set, in terms of labeled attachment score for syntactic dependencies (LAS), semantic precision (P), semantic recall (R), semantic labeled F1 score (Sem-F1), and the ratio Sem-F1/LAS, expressed as a percentage. In the PLM column, “+E” marks systems that use pre-trained language model embeddings. A superscript “*” indicates an LAS result obtained through personal communication with the authors.

| System | PLM | LAS | P | R | Sem-F1 | Sem-F1/LAS |
|---|---|---|---|---|---|---|
| (Zhao et al. 2009b) | | 86.0 | − | − | 85.4 | 99.30 |
| (Zhao et al. 2009a) | | 89.2 | − | − | 86.2 | 96.64 |
| (Björkelund et al. 2010) | | 89.8 | 87.1 | 84.5 | 85.8 | 95.55 |
| (Lei et al. 2015) | | 90.4 | − | − | 86.6 | 95.80 |
| (FitzGerald et al. 2015) | | 90.4 | − | − | 86.7 | 95.90 |
| (Roth and Lapata 2016) | | 89.8 | 88.1 | 85.3 | 86.7 | 96.5 |
| (Marcheggiani and Titov 2017) | | 90.34* | 89.1 | 86.8 | 88.0 | 97.41 |
| Sequence-based + K-order hard pruning (CoNLL09 predicted syntax) | +E | 86.0 | 89.7 | 89.3 | 89.5 | 104.07 |
| Sequence-based + K-order hard pruning (STG auto syntax) | +E | 90.0 | 90.5 | 89.3 | 89.9 | 99.89 |
| Sequence-based + K-order hard pruning (gold syntax) | +E | 100.0 | 91.0 | 89.7 | 90.3 | 90.30 |
| Sequence-based + Syntax GCN encoder (CoNLL09 predicted syntax) | +E | 86.0 | 90.5 | 88.5 | 89.5 | 104.07 |
| Sequence-based + Syntax GCN encoder (Biaffine Parser syntax) | +E | 90.22 | 90.3 | 89.3 | 89.8 | 99.53 |
| Sequence-based + Syntax GCN encoder (BIST Parser syntax) | +E | 90.05 | 90.3 | 89.1 | 89.7 | 99.61 |
| Sequence-based + Syntax GCN encoder (gold syntax) | +E | 100.0 | 91.0 | 90.0 | 90.5 | 90.50 |
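For readers reproducing the derived columns, the following is a minimal sketch of how Sem-F1 and the Sem-F1/LAS ratio relate, assuming Sem-F1 is the standard harmonic mean of semantic precision and recall and that the ratio is reported as a percentage; the helper names (labeled_f1, sem_f1_over_las) are illustrative, not part of any evaluation toolkit.

```python
# Minimal sketch of the derived columns in Table 8 (assumed formulas).
def labeled_f1(precision: float, recall: float) -> float:
    """Harmonic mean of labeled precision and recall (both in percent)."""
    return 2 * precision * recall / (precision + recall)

def sem_f1_over_las(sem_f1: float, las: float) -> float:
    """Sem-F1 divided by LAS, expressed as a percentage."""
    return 100.0 * sem_f1 / las

# Example check against the (Roth and Lapata 2016) row: P=88.1, R=85.3, LAS=89.8.
f1 = labeled_f1(88.1, 85.3)          # ~86.7
ratio = sem_f1_over_las(f1, 89.8)    # ~96.5
print(f"Sem-F1 = {f1:.1f}, Sem-F1/LAS = {ratio:.1f}")
```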