Table 5

Automated evaluation of explainability on CoNLL04, comparing the explainability annotations produced by each method against the lexical artifacts of the rules.

| Approach | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Attention | 69.44 | 69.44 | 69.44 |
| Saliency Mapping | 42.42 | 42.42 | 42.42 |
| LIME | 62.45 | 89.39 | 68.45 |
| Unsupervised Rationale | 5.47 | 86.94 | 9.84 |
| SHAP | 34.85 | 34.85 | 34.85 |
| CXPlain | 50.00 | 50.00 | 50.00 |
| Greedy Adding | 23.24 | 54.55 | 29.58 |
| All words in between SUBJ & OBJ | 72.99 | 96.59 | 77.29 |
| Our Approach | 99.29 | 100.00 | 99.52 |
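The scores above are standard token-level precision, recall, and F1 between each method's explanation tokens and the reference tokens (here, the lexical artifacts of the rules). As a minimal sketch of that metric, assuming explanations and references are given as token sets (the function name `token_prf1` and the example tokens are illustrative, not from the paper):

```python
def token_prf1(predicted: set[str], gold: set[str]) -> tuple[float, float, float]:
    """Token-level precision, recall, and F1 between a predicted
    explanation (set of tokens) and the gold reference tokens."""
    if not predicted or not gold:
        return 0.0, 0.0, 0.0
    tp = len(predicted & gold)            # tokens the method got right
    precision = tp / len(predicted)       # fraction of predicted tokens that are gold
    recall = tp / len(gold)               # fraction of gold tokens that were predicted
    if precision + recall == 0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1


# Illustrative example: the method highlights two tokens,
# the rule's lexical artifact contains three.
p, r, f1 = token_prf1({"born", "in"}, {"born", "in", "on"})
```

Note that a method like Attention, which selects exactly as many tokens as the reference, yields identical precision and recall (as in the table), whereas a high-recall method like "All words in between SUBJ & OBJ" over-selects and pays in precision.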