Table 4 
Evaluation results on the Ubuntu corpus. The subscripts last, static, and dynamic denote the three approaches to predicting a matching score described in Section 5.3. Numbers in bold indicate that the model's improvement over the best baseline method is statistically significant.
                 R2@1   R10@1  R10@2  R10@5
TF-IDF           0.659  0.410  0.545  0.708
RNN              0.768  0.403  0.547  0.819
CNN              0.848  0.549  0.684  0.896
LSTM             0.901  0.638  0.784  0.949
BiLSTM           0.895  0.630  0.780  0.944

Multi-View       0.908  0.662  0.801  0.951
DL2R             0.899  0.626  0.783  0.944

MV-LSTM          0.906  0.653  0.804  0.946
Match-LSTM       0.904  0.653  0.799  0.944
Attentive-LSTM   0.903  0.633  0.789  0.943
SCN_single       0.904  0.656  0.809  0.942
SAN_single       0.906  0.662  0.810  0.945

SCN_last         0.923  0.723  0.842  0.956
SCN_static       0.927  0.725  0.838  0.962
SCN_dynamic      0.926  0.726  0.847  0.961

SAN_last         0.930  0.733  0.850  0.961
SAN_static       0.932  0.734  0.852  0.962
SAN_dynamic      0.932  0.733  0.851  0.961
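For reference, R_n@k (e.g., R10@1) is recall at position k when the model ranks n candidate responses per context: the fraction of test contexts for which the ground-truth response appears among the top k candidates by matching score. The sketch below is a minimal illustration of this metric, not code from the paper; the function name and the assumption that the ground-truth response is stored at index 0 of each candidate list follow the common Ubuntu-corpus evaluation convention and are ours.

    from typing import Sequence

    def recall_at_k(score_lists: Sequence[Sequence[float]], k: int) -> float:
        """Compute R_n@k over a batch of ranking instances.

        Each inner sequence holds matching scores for the n candidates of one
        context, with the ground-truth response assumed to be at index 0.
        """
        hits = 0
        for scores in score_lists:
            # Rank of the true response = number of candidates scored strictly higher.
            rank = sum(1 for s in scores[1:] if s > scores[0])
            if rank < k:
                hits += 1
        return hits / len(score_lists)

    if __name__ == "__main__":
        # Two toy instances with 3 candidates each (i.e., R3@1 here).
        toy_scores = [
            [0.9, 0.2, 0.1],  # true response ranked first  -> hit at k=1
            [0.4, 0.7, 0.3],  # true response ranked second -> miss at k=1
        ]
        print(recall_at_k(toy_scores, k=1))  # prints 0.5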