Table 4: Test performance on the TACRED relation extraction benchmark. BERTlarge and BERTEM+MTB from Soares et al. (2019) are the current state-of-the-art. *: BERTEM+MTB incorporated an intermediate "matching the blanks" pre-training step on entity-linked text from English Wikipedia, so it is not directly comparable to our models, which are trained on raw text only.
Model                           P      R      F1
BERTEM (Soares et al., 2019)    −      −      70.1
BERTEM+MTB*                     −      −      71.5
Google BERT                     69.1   63.9   66.4
Our BERT                        67.8   67.2   67.5
Our BERT-1seq                   72.4   67.9   70.1
SpanBERT                        70.8   70.9   70.8