Table 7. BLEU scores and eval-AWD-LSTM perplexity for texts generated from the COCO image data set by different GAN models: SeqGAN (Yu et al. 2017), MaliGAN (Che et al. 2017), RankGAN (Lin et al. 2017), LeakGAN (Guo et al. 2018), and TextGAN (Zhang et al. 2017).
Metric          SeqGAN   MaliGAN  RankGAN  LeakGAN  TextGAN  MLE      ImageCoco
BLEU-2          0.92     0.89     0.94     0.93     0.65     0.92     1.00
BLEU-3          0.75     0.70     0.80     0.82     0.65     0.68     1.00
BLEU-4          0.53     0.48     0.60     0.66     0.60     0.57     1.00
BLEU-5          0.35     0.31     0.41     0.47     0.52     0.39     1.00
eval-AWD-LSTM   179.29   272.53   132.90   146.26   129.93   176.34   44.17
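The BLEU-n rows report n-gram overlap between generated texts and references, and the eval-AWD-LSTM row reports perplexity under an evaluation language model. As a rough illustration of what these metrics measure, here is a minimal, simplified sketch: a single-sentence BLEU with clipped n-gram precisions and a shortest-reference brevity penalty, plus a perplexity helper over token log-probabilities. This is not the exact corpus-level BLEU or AWD-LSTM setup used to produce the table; the function names and the brevity-penalty simplification are this sketch's own choices.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Simplified single-sentence BLEU-N: geometric mean of clipped
    n-gram precisions for n = 1..max_n, times a brevity penalty.
    (Real corpus BLEU aggregates counts over many sentences and uses
    the closest-length reference; here we use the shortest reference.)"""
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        # Clip each candidate n-gram count by its max count in any reference.
        max_ref = Counter()
        for ref in references:
            for ng, c in Counter(ngrams(ref, n)).items():
                max_ref[ng] = max(max_ref[ng], c)
        clipped = sum(min(c, max_ref[ng]) for ng, c in cand_counts.items())
        total = max(1, sum(cand_counts.values()))
        precisions.append(clipped / total)
    if min(precisions) == 0:
        return 0.0
    log_mean = sum(math.log(p) for p in precisions) / max_n
    ref_len = min(len(r) for r in references)
    bp = 1.0 if len(candidate) >= ref_len else math.exp(1 - ref_len / len(candidate))
    return bp * math.exp(log_mean)

def perplexity(token_log_probs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))
```

A perfect n-gram match yields BLEU 1.00 (the ImageCoco column against itself), while lower eval-AWD-LSTM perplexity indicates text the evaluation model finds more predictable.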