Table 5:
Comparison of the error rates (%) and the numbers of learnable weight parameters on the CIFAR-100 dataset. We run the proposed method ten times and report the classification errors in the format "best (mean ± std)." We refer to the architectures constructed by the proposed method as CGP-CNN; for CGP-CNN, the number of learnable weight parameters of the best architecture is reported. Values for models other than VGG and ResNet are taken from the literature.
Model                                            # params   Test error (%)
Maxout (Goodfellow et al., 2013)                 —          38.57
Network in Network (Lin et al., 2014)            —          35.68
VGG (Simonyan and Zisserman, 2015)               15.2M      33.45
ResNet (He et al., 2016)                         1.7M       32.40
FractalNet (Larsson et al., 2017)                38.6M      23.30
Wide ResNet (Zagoruyko and Komodakis, 2016)      36.5M      19.25
CoDeepNEAT (Miikkulainen et al., 2017)           —          —
Neural Architecture Search (Zoph and Le, 2017)   37.4M      —
Genetic CNN (Xie and Yuille, 2017)               —          29.03
MetaQNN (Baker et al., 2017)                     3.7M       27.14
Large-Scale Evolution (Real et al., 2017)        40.4M      23.0
CGP-CNN (ConvSet)                                2.01M      26.7 (28.1 ± 0.83)
CGP-CNN (ResSet)                                 4.60M      25.1 (26.8 ± 1.21)
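The "best (mean ± std)" entries summarize ten independent runs. A minimal sketch of how such a summary is computed, using hypothetical per-run error rates (the paper lists only the aggregate statistics, not the individual runs):

```python
import statistics

# Hypothetical error rates (%) from ten independent runs; chosen for
# illustration only, not the paper's actual per-run values.
errors = [26.7, 27.5, 28.0, 28.3, 27.9, 28.8, 29.1, 28.4, 27.6, 28.9]

best = min(errors)                 # lowest (best) error over the runs
mean = statistics.mean(errors)     # average error
std = statistics.stdev(errors)     # sample standard deviation

print(f"{best:.1f} ({mean:.1f} ± {std:.2f})")
```

Note that the sample standard deviation (dividing by n − 1) is the conventional choice when the runs are treated as a sample of the method's behavior.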