Table 5: Accuracy results for the Wikipedia sections distinction task. ♯ indicates results reproduced by ourselves; § indicates results taken from Reimers and Gurevych (2019); Surrogate denotes our proposed method.

| Model | Accuracy |
| --- | --- |
| *Unsupervised Setting* | |
| Avg. GloVe embeddings | 60.94 |
| Avg. Skip-Thought embeddings | 61.54 |
| InferSent-GloVe | 63.39 |
| Avg. BERT embeddings | 66.40 |
| BERT [CLS] | 32.30 |
| BERTScore | 67.29 |
| DPR | 66.71 |
| BLEURT | 67.39 |
| Universal Sentence Encoder | 65.18 |
| Surrogate-base | 71.40 |
| Surrogate-large | 73.50 |
| *Supervised Setting* | |
| BERT [CLS] | 78.13 |
| BERT-base | 79.30 |
| SBERT-base§ | 80.42 |
| SRoBERTa-base§ | 79.45 |
| Surrogate-base | 83.10 |
| BERT-large | 80.15 |
| SBERT-large§ | 80.78 |
| SRoBERTa-large§ | 79.73 |
| Surrogate-large | 83.50 |