Table 3: Comparison of Accuracy and Macro-F1 among different models. Results marked with are adopted from Chen et al. (2017), and those with * are adopted from the original papers. Performance improvements of the proposed TG-SAN model over the state-of-the-art, TNet (Li et al., 2018), are statistically significant at p < 0.01.
| Group | Model | Tweet Acc. | Tweet Macro-F1 | Laptop Acc. | Laptop Macro-F1 | Restaurant Acc. | Restaurant Macro-F1 |
|---|---|---|---|---|---|---|---|
| Baselines | SVM (2014) | 0.6340 | 0.6330 | 0.7049* | − | 0.8016* | − |
| Baselines | AdaRNN (2014) | 0.6630* | 0.6590* | − | − | − | − |
| Baselines | TD-LSTM (2016a) | 0.6662 | 0.6401 | 0.7183 | 0.6843 | 0.7800 | 0.6673 |
| Attention-based | ATAE-LSTM (2016) | − | − | 0.6870* | − | 0.7720* | − |
| Attention-based | IAN (2017) | − | − | 0.7210* | − | 0.7860* | − |
| Attention-based | MemNet (2016b) | 0.6850 | 0.6691 | 0.7033 | 0.6409 | 0.7816 | 0.6583 |
| Attention-based | RAM (2017) | 0.6936* | 0.6730* | 0.7449* | 0.7135* | 0.8023* | 0.7080* |
| State-of-the-art | TNet (2018) | 0.7327 | 0.7132 | 0.7465 | 0.6985 | 0.8005 | 0.6901 |
| Proposed Model | TG-SAN | 0.7471 | 0.7365 | 0.7527 | 0.7118 | 0.8166 | 0.7259 |
| Ablations | w/o CFU | 0.7312 | 0.7141 | 0.7465 | 0.7042 | 0.8095 | 0.7189 |
| Ablations | w/o SCU & CFU | 0.7153 | 0.6975 | 0.7058 | 0.6559 | 0.8023 | 0.6960 |
| Ablations | w/o TG | 0.7269 | 0.7093 | 0.7324 | 0.6923 | 0.8131 | 0.6986 |
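For reference, Accuracy is the fraction of correctly classified targets, and Macro-F1 is the unweighted mean of the per-class F1 scores, so it is not dominated by the majority sentiment class. The sketch below is not taken from the paper; the label encoding, variable names, and scikit-learn usage are assumptions for illustration of how the two metrics reported in Table 3 could be computed.

```python
# Minimal sketch: computing Accuracy and Macro-F1 for a three-class
# sentiment task (0 = negative, 1 = neutral, 2 = positive).
# The label arrays below are hypothetical, not data from the paper.
from sklearn.metrics import accuracy_score, f1_score

y_true = [2, 0, 1, 2, 1, 0, 2, 1]  # gold sentiment labels
y_pred = [2, 0, 1, 1, 1, 0, 2, 2]  # model predictions

accuracy = accuracy_score(y_true, y_pred)             # fraction of correct predictions
macro_f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-class F1

print(f"Accuracy: {accuracy:.4f}, Macro-F1: {macro_f1:.4f}")
```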