Table 5: Results on MSR Sentence Completion. For our models (top), we show accuracies for Wikipedia (left) and in-domain (right) training. We compare with language models from prior work (left): Kneser-Ney 5-gram model (Mikolov, 2012), Word RNN (Zweig et al., 2012), Word LSTM and LdTreeLSTM (Zhang et al., 2016). We further report models incorporating distributional encodings of semantics (right): Skipgram(+RNNs) from Mikolov et al. (2013a), the PMI-based model of Woods (2016), and the Context-Embedding-based approach of Melamud et al. (2016).
Our models (wiki / in-domain)
  LSTM       34.1 / 59.0
  RNN        24.3 / 24.0
  WordNLM    37.1 / 63.3

From the literature
  KN5         40.0    Skipgram          48.0
  Word RNN    45.0    Skipgram + RNNs   58.9
  Word LSTM   56.0    PMI               61.4
  LdTreeLSTM  60.7    Context-Embed     65.1