Table 1: Summary of the Representative Deep Learning Models.
Architecture | Representative Models | Model Features
Deep belief net | RBM (Zhang et al., 2018) | A generative graphical model that uses an energy function to capture the probability distribution over visible and hidden units (see the energy formulation below the table).
 | SRBM (Chen et al., 2017) | A sparse variant in which each hidden unit connects to only part of the visible units, preventing overfitting based on hierarchical latent tree analysis.
 | FRBM (Ning et al., 2018) | A fast variant trained by the lean CD algorithm, in which bounds-based filtering and the delta product reduce redundant dot-product calculations.
 | TTRBM (Ju et al., 2019) | A compact variant in which the parameters between the visible and hidden layers are reduced by transforming them into the tensor-train format.
Stacked autoencoder | AE (Michael et al., 2018) | A basic fully connected network that uses the encoder-decoder strategy in an unsupervised manner to learn intrinsic features of the data.
 | DAE (Vincent et al., 2008) | A denoising variant that reconstructs clean data from noisy data (see the sketch below the table).
 | SAE (Makhzani & Frey, 2013) | A sparse variant that captures sparse representations of the input by adding a sparsity constraint to the loss function.
 | GAE (Hou et al., 2019) | An adversarial variant in which the decoder subnetwork is also regarded as a generator, adopting a game-theoretic objective to learn features more consistent with the input data.
 | FAE (Ashfahani et al., 2019) | An evolving variant that constructs an adaptive network structure during representation learning, based on network significance.
 | BAE (Angshul, 2019) | An evolving variant that adds a path-loss term to the loss function based on dictionary learning.
Convolutional neural network | Alexnet (Krizhevsky, Sutskever, & Hinton, 2012) | Nonsaturating neurons and dropout are adopted in the nonlinear computational layers, with a GPU implementation.
 | ResNet (He et al., 2016) | Shortcut connections skip several layers so that the network loss can be backpropagated to earlier layers (see the sketch below the table).
 | Inception (Christian et al., 2017) | A deeper and wider network designed by using a uniform grid size for the blocks, with auxiliary information.
 | SEnet (Cao et al., 2019) | Information embedding and adaptive recalibration are regarded as self-attention operations.
 | ECNN (Sandler et al., 2018) | Low-rank convolutions replace full-rank convolutions to improve learning efficiency without much loss of accuracy.
Recurrent neural network | RNN (Zhang et al., 2014) | A fully connected network in which self-connections between hidden layers model the time dependency.
 | BiRNN (Schuster & Paliwal, 1997) | Two independent computing processes encode the forward and backward dependencies.
 | LSTM (Hochreiter & Schmidhuber, 1997) | A memory block is introduced to model long-term dependencies (see the sketch below the table).
 | SRNN (Lei et al., 2018) | A fast variant in which light recurrence and a highway network are proposed to improve learning efficiency in a parallelized implementation.
 | VRNN (Jang et al., 2019) | A variational variant that uses the variational encoder-decoder strategy to model temporal intrinsic features.

Notes: RBM: restricted Boltzmann machine; SRBM: sparse restricted Boltzmann machine; FRBM: fast restricted Boltzmann machine; TTRBM: tensor-train restricted Boltzmann machine; AE: autoencoder; DAE: denoising autoencoder; SAE: K-sparse autoencoder; GAE: generative autoencoder; FAE: fast autoencoder; BAE: blind autoencoder; Alexnet: Alex convolutional net; ResNet: residual convolutional net; Inception: Inception; SEnet: squeeze excitation network; ECNN: efficient convolutional neural network; RNN: recurrent neural network; BiRNN: bidirectional recurrent neural network; LSTM: long short-term memory; SRNN: slight recurrent neural network; VRNN: variational recurrent neural network.
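To make the energy-based description in the RBM row concrete, the standard formulation (the usual notation, not anything specific to Zhang et al., 2018) defines a joint distribution over visible units v and hidden units h through a bipartite energy with weight matrix W and biases a, b:

E(v, h) = -a^T v - b^T h - v^T W h
p(v, h) = exp(-E(v, h)) / Z,   where   Z = \sum_{v, h} exp(-E(v, h))

Training (for example by contrastive divergence, the CD referred to in the FRBM row) adjusts W, a, and b so that low-energy configurations correspond to high-probability data.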
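As a minimal illustration of the denoising strategy in the DAE row, the sketch below corrupts the input with Gaussian noise and trains an encoder-decoder to reconstruct the clean signal. It assumes PyTorch is available; the layer sizes, noise level, and optimizer settings are arbitrary placeholders rather than values from Vincent et al. (2008).

import torch
import torch.nn as nn

# Minimal denoising autoencoder: reconstruct clean x from a noisy copy of it.
class DenoisingAE(nn.Module):
    def __init__(self, in_dim=784, hid_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())

    def forward(self, x_noisy):
        return self.decoder(self.encoder(x_noisy))

model = DenoisingAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x_clean = torch.rand(64, 784)                         # stand-in batch of clean inputs
x_noisy = x_clean + 0.1 * torch.randn_like(x_clean)   # corrupt the input

# One training step: the loss compares the reconstruction with the *clean* data.
recon = model(x_noisy)
loss = loss_fn(recon, x_clean)
optimizer.zero_grad()
loss.backward()
optimizer.step()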
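The shortcut connection in the ResNet row amounts to computing y = F(x) + x, so the loss gradient reaches earlier layers through the identity path. The sketch below is a generic residual block in PyTorch (assumed available); the channel count and two-convolution layout are illustrative, not the exact block of He et al. (2016).

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """y = F(x) + x: the identity shortcut lets gradients bypass the conv stack."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(residual + x)   # shortcut: add the input back in

x = torch.randn(8, 64, 32, 32)
y = ResidualBlock()(x)   # same shape as x: (8, 64, 32, 32)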
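For the recurrent family, the sketch below shows how an LSTM's cell state carries information across time steps, which is what lets it model long-term dependencies better than a plain RNN. It uses PyTorch's built-in nn.LSTM (assumed available); the dimensions are placeholders only.

import torch
import torch.nn as nn

# Single-layer LSTM over a toy sequence: the cell state c_t is the "memory block"
# that preserves information across many time steps.
lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)

x = torch.randn(4, 100, 16)         # (batch, time steps, features)
output, (h_n, c_n) = lstm(x)

print(output.shape)   # (4, 100, 32): hidden state at every time step
print(h_n.shape)      # (1, 4, 32): final hidden state
print(c_n.shape)      # (1, 4, 32): final cell (memory) state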
