Table 2: Accuracy, RMSE, and parameter counts of competing models on all data sets. An asterisk (*) marks customization methods first introduced in this paper. A dash (-) indicates the model is too big to be trained on an NVIDIA 1080 Ti GPU. Boldface indicates that basis-customization performs significantly better (p < 0.05) than simple customization. Colored values indicate performance weaker than that of the BiLSTM baseline, i.e., cases where customization hurts performance.
| Model                  | Variant    | Yelp 2013 Accuracy | Yelp 2013 RMSE | Yelp 2013 Param | AAPR Accuracy | AAPR Param | PolMed Accuracy | PolMed Param |
|------------------------|------------|--------------------|----------------|-----------------|---------------|------------|-----------------|--------------|
| Base: BiLSTM           |            | 63.7               | 0.687          | 442k            | 61.70         | 188k       | 40.30           | 86k          |
| bias vector (concat)   | cust       | 66.3               | 0.661          | 1.3m            | 65.30         | 6.3m       | 40.57           | 121k         |
| bias vector (concat)   | basis-cust | 66.9               | 0.654          | 653k            | 64.80         | 1.7m       | 40.92           | 95k          |
| linear transformation* | cust       |                    |                | 4.6m            | 63.55         | 6.3m       |                 | 379k         |
| linear transformation* | basis-cust | 67.1               | 0.662          | 655k            | 65.75         | 1.7m       | 41.89           | 96k          |
| attention pooling      | cust       | 65.4               | 0.674          | 1.3m            | 62.80         | 6.3m       | 40.93           | 119k         |
| attention pooling      | basis-cust | 66.0               | 0.671          | 652k            | 65.85         | 1.7m       | 41.73           | 95k          |
| encoder weights*       | cust       |                    |                | 43.5m           |               |            |                 |              |
| encoder weights*       | basis-cust | 66.1               | 0.665          | 1.5m            | 66.15         | 2.1m       | 41.42           | 179k         |
| word embedding*        | cust       |                    |                | 294m            | 40.84         | 46.0m      |                 |              |
| word embedding*        | basis-cust | 66.1               | 0.666          | 1.0m            | 65.80         | 2.0m       | 41.58           | 455k         |