Table 4:
Comparison of error rates (%), numbers of learnable weight parameters, and search costs on the CIFAR-10 dataset. We run the proposed method ten times and report the classification errors in the format "best (mean ± std)." We refer to the architectures constructed by the proposed method as CGP-CNN; for CGP-CNN, the number of learnable weight parameters of the best architecture is reported. The values for the other models are taken from the literature.
Model                                            # params   Test error (%)      GPU days
Maxout (Goodfellow et al., 2013)                 —          9.38                —
Network in Network (Lin et al., 2014)            —          8.81                —
VGG (Simonyan and Zisserman, 2015)               15.2M      7.94                —
ResNet (He et al., 2016)                         1.7M       6.61                —
FractalNet (Larsson et al., 2017)                38.6M      5.22                —
Wide ResNet (Zagoruyko and Komodakis, 2016)      36.5M      4.00                —
CoDeepNEAT (Miikkulainen et al., 2017)           —          7.30                —
Genetic CNN (Xie and Yuille, 2017)               —          7.10                17
MetaQNN (Baker et al., 2017)                     3.7M       6.92                80–100
Large-Scale Evolution (Real et al., 2017)        5.4M       5.40                2750
Neural Architecture Search (Zoph and Le, 2017)   37.4M      3.65                16800–22400
CGP-CNN (ConvSet)                                1.50M      5.92 (6.48 ± 0.48)  31
CGP-CNN (ResSet)                                 2.01M      5.01 (6.10 ± 0.89)  30
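For concreteness, a minimal sketch of how the "best (mean ± std)" summary used in the table could be computed from ten independent runs. The error values below are hypothetical placeholders for illustration, not the actual run results behind Table 4:

```python
# Sketch: summarize ten runs as "best (mean ± std)", as in Table 4.
# The error list is hypothetical, not the paper's measured values.
import statistics

def summarize(errors):
    """Return a 'best (mean ± std)' string for a list of test error rates (%)."""
    best = min(errors)                   # lower test error is better
    mean = statistics.mean(errors)
    std = statistics.stdev(errors)       # sample std dev across runs
    return f"{best:.2f} ({mean:.2f} ± {std:.2f})"

# Ten hypothetical per-run test error rates (%):
errors = [6.1, 5.9, 6.5, 6.8, 6.0, 6.3, 6.7, 6.2, 6.4, 6.6]
print(summarize(errors))  # → 5.90 (6.35 ± 0.30)
```

Note that the best run can sit well below the mean, which is why the table reports both: a single lucky run would otherwise overstate the method's typical performance.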