Table 4

Automated evaluation of explainability on TACRED, comparing the explainability annotations produced by each method against the lexical artifacts of the rules.

| Approach                        | Precision | Recall | F1    |
|---------------------------------|-----------|--------|-------|
| Attention                       | 30.28     | 30.28  | 30.28 |
| Saliency Mapping                | 30.22     | 30.22  | 30.22 |
| LIME                            | 30.45     | 36.84  | 32.49 |
| Unsupervised Rationale          | 4.65      | 79.53  | 8.51  |
| SHAP                            | 31.27     | 31.27  | 31.27 |
| CXPlain                         | 53.60     | 53.60  | 53.60 |
| Greedy Adding                   | 40.47     | 50.53  | 40.81 |
| All words in between SUBJ & OBJ | 71.48     | 86.33  | 78.21 |
| Our Approach                    | 95.63     | 97.92  | 95.76 |