Table 2: Question answering results: accuracy averaged over the 20 bAbI tasks. Using tanh is worse than ReLU (line 13 vs. line 15). RUM 150 λ = 0 without an update gate drops by 1.7% compared with line 13.
No.  Model                               Acc. (%)
     Word Level
1    LSTM (Weston et al., 2016)          49.2
2    uRNN (ours)                         51.6
3    EURNN (ours)                        52.9
4    LSTM (ours)                         56.0
5    GRU (ours)                          58.2
6    GORU (Jing et al., 2017a)           60.4
7    RUM λ = 0 (ours)                    73.2
8    DNC (Graves et al., 2016)           96.2
     Sentence Level
9    EUNN/attnEUNN (ours)                66.7/69.5
10   LSTM/attnLSTM (ours)                67.2/80.1
11   GRU/attnGRU (ours)                  70.4/77.3
12   GORU/attnGORU (ours)                71.3/76.4
13   RUM/attnRUM λ = 0 (ours)            75.1/74.3
14   RUM/attnRUM λ = 1 (ours)            79.0/80.1
15   RUM/attnRUM λ = 0 w/ tanh (ours)    70.5/72.9
16   MemN2N (Sukhbaatar et al., 2015)    95.8
17   GMemN2N (Perez and Liu, 2017)       96.3
18   DMN+ (Xiong et al., 2016)           97.2
19   EntNet (Henaff et al., 2017)        99.5
20   QRN (Seo et al., 2017)              99.7