## Abstract

We explore classifier training for data sets with very few labels. We investigate this task using a neural network for nonnegative data. The network is derived from a hierarchical normalized Poisson mixture model with one observed and two hidden layers. With the single objective of likelihood optimization, both labeled and unlabeled data are naturally incorporated into learning. The neural activation and learning equations resulting from our derivation are concise and local. As a consequence, the network can be scaled using standard deep learning tools for parallelized GPU implementation. Using standard benchmarks for nonnegative data, such as text document representations, MNIST, and NIST SD19, we study the classification performance when very few labels are used for training. In different settings, the network's performance is compared to standard and recently suggested semisupervised classifiers. While other recent approaches are more competitive for many labels or fully labeled data sets, we find that the network studied here can be applied in regimes with so few labels that no other system has been reported to operate there so far.

## 1  Introduction

Large data sets (e.g., in the form of digital texts, images, sounds, or medical measurements) are becoming increasingly ubiquitous. Classification of such data has long been identified as a central task of machine learning because of its many practical applications. If data sets are fully labeled, standard deep neural networks (DNNs), such as multilayer perceptrons (compare Rosenblatt, 1958; Ivakhnenko & Lapa, 1965) and their many modern versions, are often the method of choice. For many current benchmarks, large DNNs show state-of-the-art performance and can often exceed human abilities in specific data domains (see Schmidhuber, 2015; Bengio, Courville, & Vincent, 2013; Hinton et al., 2012, for reviews). However, the creation of fully labeled data sets becomes increasingly costly with an increasing number of data points. While acquisition of the data points themselves is usually relatively easy (e.g., consider digital photos or the recording of sounds), correct labeling of the acquired data requires the availability of ground truth or a human who can hand-label the data. Neither ground truth nor labels provided by humans are available for most data sets, however. Furthermore, human labels may be erratic, especially for large data sets. The same applies to automatic or semiautomatic procedures that can provide labels. Depending on the data, acquiring even a few labels can be very costly. If, for example, data points consist of a set of medical measurements, a label in the form of a diagnosis requires the time and knowledge of a human medical expert. Hence, when considering large data sets or data sets with considerable cost per label, the following research question naturally emerges: How can a good classifier be trained for data sets with only few labels?

Indeed, classifiers leveraging information from labeled and unlabeled data points (semisupervised classifiers) have in recent years shifted into the focus of many research groups (Liu, He, & Chang, 2010; Weston, Ratle, Mobahi, & Collobert, 2012; Pitelis, Russell, & Agapito, 2014; Kingma, Mohamed, Rezende, & Welling, 2014; Forster, Sheikh, & Lücke, 2015; Rasmus, Berglund, Honkala, Valpola, & Raiko, 2015; Miyato, Maeda, Koyama, Nakae, & Ishii, 2016). If classifiers can successfully be trained using just few labels, they enable applications in many practically relevant settings. For example, classifiers for new data sets can be obtained in a very limited amount of time if labels for only few data points need to be provided; classifiers that perform poorly because of erratic labels in large training sets can be replaced by classifiers that are trained on the same data using only a few reliable labels; in settings where a stream of unlabeled data is constantly available (e.g., frames of a video, texts in chat rooms), a classifier could be obtained online if humans provide a few labels interactively. In all these examples, the total number of labels required to train the classifier is the main factor that determines its applicability.

But how far can we reduce the required total number of labels? And how strong can we expect a classifier to be in the limit of very few labels? In order to extract information from labeled and unlabeled data points, most successful contributions use hybrid combinations of two or more learning algorithms in order to merge unsupervised and supervised learning mechanisms (see, e.g., Weston et al., 2012; Kingma et al., 2014; Rasmus et al., 2015; Miyato et al., 2016). Typically, a standard DNN is used for the supervised part. Such DNNs are always equipped with a set of tunable parameters (i.e., hyperparameters or free parameters)—for example, for network architecture, activation functions, regularization, dropout, sparsity, gradient ascent types, learning rates, and early stopping. The unsupervised part adds further tunable parameters, and still more parameters are required to organize the interplay between supervised and unsupervised learning. For fully supervised learning, the problem of finding good values for the set of free parameters has been identified as its own research topic (see, e.g., Thornton, Hutter, Hoos, & Leyton-Brown, 2013; Bergstra, Yamins, & Cox, 2013; Hutter, Lücke, & Schmidt-Thieme, 2015). In the semisupervised setting, approaches with large numbers of free parameters face the additional challenge of parameter tuning using very few labels, which, for example, increases the risk of heavily overfitting to a subsequently very small validation set. Large sets of free parameters can thus negatively affect the applicability of a given system in the limit of few labels. The same applies to more principled combinations of supervised and unsupervised networks, for example, in the form of generative adversarial networks (GANs; Goodfellow et al., 2014; Salimans et al., 2016), which maintain large sets of free parameters of their constituting neural network components.

Alternatives to large hybrid approaches are classifiers derived from standard support vector machines (SVMs; Cortes & Vapnik, 1995). The transductive SVM (TSVM; Vapnik, 1998; Collobert, Sinz, Weston, & Bottou, 2006) was specifically derived for the semisupervised setting, and SVMs typically have comparably few free parameters. For supervised tasks, large DNNs are, however, often preferred because of their favorable scaling with the number of data points. While training of DNNs scales roughly linearly with the size of the training data, SVMs typically scale approximately quadratically. As the same applies to TSVMs, it becomes difficult to leverage large numbers of unlabeled data points.

Another alternative to large hybrid approaches is a standard probabilistic network, for example, in the form of deep directed graphical models (DDMs). DDMs are well suited to capture the rich structure of typical data, such as text documents, medical data, images and speech, and they can, in principle, be trained using unlabeled and labeled data points. Training of DDMs also scales efficiently with the number of data points (typically each learning iteration scales linearly with the number of data points). However, while being potentially very powerful information processors, typical directed models are limited in size. For instance, deep sigmoid belief networks (SBNs; Saul, Jaakkola, & Jordan, 1996; Gan, Henao, Carlson, & Carin, 2015) or newer models such as NADE (Larochelle & Murray, 2011) have been trained with only a couple of hundred to about a thousand hidden units (Bornschein & Bengio, 2015; Gan et al., 2015). Their scalability with regard to the number of neurons is thus more limited than standard discriminative DNNs, which often owe their competitive performance to their size.

In contrast to DNNs and SVMs, which are representatives of supervised learning, DDMs are primarily used for unsupervised learning. For the targeted limit of few labels, DDMs thus appear as a more natural starting point if we are able to address scalability for classification applications. In order to do so, we base our study on a directed graphical model that is sufficiently richly structured to give rise to a good classifier, while it allows for efficient training on large data sets and with large network sizes. Scalability will be realized by the derivation of a neural network equivalent for maximum likelihood learning of the graphical model. The emerging concise and local inference and learning equations of the network can then be parallelized and scaled using the same tools as were originally developed for conventional deep neural networks. By additionally considering a minimalistic network architecture, the number of free parameters will, at the same time, be kept low and easily tunable on few labels.

## 2  A Hierarchical Mixture Model for Classification

A classification problem can be modeled as an inference task based on a probabilistic mixture model (e.g., Duda, Hart, & Stork, 2001). Such a model can be hierarchical, or deep, if we expect the data to obey a hierarchical structure. For handwritten digits, for instance, we first assume the data to be divided into digit classes (0 to 9), and within each class, we expect a structure that distinguishes among different writing styles. Most deep systems allow for a much deeper substructure, using 5, 10, or, recently, even up to 100 or 1000 layers (He, Zhang, Ren, & Sun, 2016; Huang, Sun, Liu, Sedra, & Weinberger, 2016). For our goal of semisupervised learning with few labels, however, we want to restrain the model complexity to the necessary minimum of a hierarchical model.

### 2.1  The Generative Model

In accordance with the hierarchical formulation of a classification problem, we define the minimalistic hierarchical generative model shown in Figure 1 as follows:
$p(k) = 1/K, \qquad p(l\,|\,k) = \delta_{lk}$
(2.1)
$p(c\,|\,k, R) = R_{kc}, \qquad \sum_{c} R_{kc} = 1$
(2.2)
$p(\vec{y}\,|\,c, W) = \prod_{d} \mathrm{Poisson}(y_d;\, W_{cd}), \qquad \sum_{d} W_{cd} = A.$
(2.3)

The parameters of the model, $W \in \mathbb{R}_{>0}^{C \times D}$ and $R \in \mathbb{R}_{\geq 0}^{K \times C}$, will be referred to as generative weights, which are normalized to the constants $A$ and 1, respectively. The top node (see Figure 1) represents $K$ abstract concepts or superclasses $k$ (e.g., 10 classes of digits). The middle node represents any of the occurring $C$ subclasses $c$ (e.g., different writing styles of the digits). And the bottom nodes represent an observed data sample $\vec{y}$ with an according data label $l$ (e.g., ranging from 0 to 9). To generate an observation $\vec{y}$ from the model, we first draw a superclass $k$ from a uniform categorical distribution $p(k)$. Next, we draw a subclass $c$ according to the conditional categorical distribution $p(c\,|\,k, R)$. Given the subclass, we then sample $\vec{y}$ from a Poisson distribution. For labeled data, the label $l$ is set to the class $k$ via a Kronecker delta, that is, without label noise. Equations 2.1 to 2.3 define a minimalistically deep mixture model.
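To make the three-step generative process concrete, it can be sketched in a few lines of NumPy (a minimal sketch, not the authors' code; the function name `sample_model` and the array conventions are our own):

```python
import numpy as np

def sample_model(N, W, R, rng=None):
    """Draw N observations from the hierarchical Poisson mixture:
    k ~ Uniform(K), c ~ Categorical(R[k]), y_d ~ Poisson(W[c, d])."""
    rng = np.random.default_rng(rng)
    K, C = R.shape
    ks = rng.integers(0, K, size=N)                     # superclass (= label)
    cs = np.array([rng.choice(C, p=R[k]) for k in ks])  # subclass per point
    ys = rng.poisson(W[cs])                             # observed data, (N, D)
    return ys, cs, ks
```

Here the rows of `R` sum to 1 and the rows of `W` sum to $A$, mirroring the normalization constraints of equations 2.2 and 2.3.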

Figure 1:

Graphical illustration of the hierarchical generative model.


Our model assumes nonnegative observed data, and we use the Poisson distribution as an elementary distribution for nonnegative observations. Nonnegative data represent a natural type of data, and examples include bag-of-words representations of text documents or light-intensity representations of images including standard pattern recognition benchmarks such as MNIST (LeCun, Bottou, Bengio, & Haffner, 1998) or NIST (Grother, 1995). While bag-of-words data may directly motivate a Poisson distribution (for word counts), the model in principle will be applicable to any kind of nonnegative data. An important difference between models that assume Poisson distributed data and models using the very common assumption of gaussian distributed observables is the implicitly assumed similarity relation between data points. The assumption of gaussian observables is, for example, naturally linked to the assumption of Euclidean distances (i.e., squared coordinate-wise differences). For such models (including many deep neural networks), the classification problem is usually unaffected by global shifts of data points, and the origin of the data space has no dedicated meaning. This is no longer the case for the Poisson noise model. For nonnegative data, a zero-valued observation has a special meaning, and any addition of a fixed value changes this meaning (e.g., the difference between a word count of zero and a word count of 10 words conveys a different meaning from the difference between a word count of 100 and 110). Instead of Euclidean distances, Poisson noise links to distances defined by the Kullback-Leibler divergences (e.g., Cemgil, 2009), which are more similar to those used for nonnegative matrix factorization (compare Lee & Seung, 1999). In addition to being a natural choice for nonnegative data, the Poisson distribution used here also turns out to be mathematically convenient for deriving inference and learning rules. 
Similar observations have been made by Keck, Savin, and Lücke (2012), who used Poisson observables to derive local learning rules for shallow neural network models (also compare Lücke & Sahani, 2008; Nessler, Pfeiffer, & Maass, 2009, 2013).

Considering the generative model in equations 2.1 to 2.3, the normalization of the rows of $R$ in equation 2.2 is required for normalized categorical distributions. The normalization of the rows of $W$ in equation 2.3, however, represents an additional assumption of our approach. We can enforce such an assumption of contrast normalized data by preprocessing nonnegative data as follows,
$y_d = (A - D)\, \frac{\tilde{y}_d}{\sum_{d'} \tilde{y}_{d'}} + 1,$
(2.4)
where $\vec{\tilde{y}}$ denotes the unnormalized data point, $D$ is the dimensionality of the input data, and $A$ is a free normalization parameter. The normalization, equation 2.4, serves two purposes: for sufficiently large data dimensionality $D$, the generated data fulfill the required constraint $\sum_d y_d = A$ with high accuracy, and the normalization adds a fixed offset of $+1$, which makes the model more robust by preventing zeros in the data from producing hard zero-or-one probabilities under the Poisson distribution. Originally, the normalization equation 2.4 was introduced to avoid neurobiologically implausible negative weights for synaptic connections in the neural network model of Keck et al. (2012).
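In code, the preprocessing of equation 2.4 amounts to a one-line transformation (a sketch; the function name `normalize` is our own):

```python
import numpy as np

def normalize(Y_raw, A):
    """Contrast normalization of eq. 2.4: scale each nonnegative data
    point so its entries sum to A, including a +1 offset per dimension."""
    Y_raw = np.asarray(Y_raw, dtype=float)
    D = Y_raw.shape[-1]
    # (A - D) * y~_d / sum(y~) + 1 ; row sums become (A - D) + D = A exactly
    return (A - D) * Y_raw / Y_raw.sum(axis=-1, keepdims=True) + 1.0
```

Note that after this transformation each preprocessed data point satisfies $\sum_d y_d = A$ exactly, and no entry is smaller than 1.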

### 2.2  Maximum Likelihood Learning

To infer the model parameters $\Theta = (W, R)$ of the hierarchical Poisson mixture model, equations 2.1 to 2.3, for a given set of $N$ independent observed data points $\{\vec{y}^{(n)}\}_{n=1,\ldots,N}$ with $\vec{y}^{(n)} \in \mathbb{R}_{\geq 0}^{D}$, $\sum_d y_d^{(n)} = A$, and labels $l^{(n)}$, we seek to maximize the data (log-)likelihood:
$\mathcal{L}(\Theta) = \log \prod_{n=1}^{N} p(\vec{y}^{(n)}, l^{(n)}\,|\,\Theta) = \sum_{n=1}^{N} \log \sum_{c=1}^{C} \left( \prod_{d=1}^{D} \frac{W_{cd}^{\,y_d^{(n)}}\, e^{-W_{cd}}}{\Gamma(y_d^{(n)} + 1)} \right) \sum_{k \in l^{(n)}} \frac{R_{kc}}{K}.$
(2.5)
Here, we assume that data points may come with or without a label. For unlabeled data, the summation over $k$ is a summation over all possible labels of the given data, that is, $k = 1 \ldots K$, whereas whenever the label $l^{(n)}$ is known for a data point $\vec{y}^{(n)}$, this sum is reduced to $k = l^{(n)}$, such that only weights $R_{l^{(n)}c}$ contribute for that $n$th data point.
Instead of maximizing the likelihood directly, EM (in the form studied by Neal & Hinton, 1998) maximizes a lower bound—the free energy—given by
$\mathcal{F}(\Theta^{\mathrm{old}}, \Theta) = \sum_{n=1}^{N} \left\langle \log p(\vec{y}^{(n)}, l^{(n)}, c, k\,|\,\Theta) \right\rangle_n + H[\Theta^{\mathrm{old}}],$
(2.6)
where $\langle \cdot \rangle_n$ denotes the expectation under the posterior
$\langle f(c,k) \rangle_n = \sum_{c=1}^{C} \sum_{k=1}^{K} p(c, k\,|\,\vec{y}^{(n)}, l^{(n)}, \Theta^{\mathrm{old}})\, f(c,k)$
(2.7)
and $H[\Theta^{\mathrm{old}}]$ is an entropy term that depends only on parameter values held fixed during the optimization of $\mathcal{F}$ with regard to $\Theta$. For our model, the free energy as a lower bound of the log likelihood reads
$\mathcal{F}(\Theta^{\mathrm{old}}, \Theta) = \sum_{n,c,k} p(c, k\,|\,\vec{y}^{(n)}, l^{(n)}, \Theta^{\mathrm{old}}) \left( \sum_{d=1}^{D} \left( y_d^{(n)} \log(W_{cd}) - W_{cd} - \log \Gamma(y_d^{(n)} + 1) \right) + \log(R_{kc}) - \log(K) \right) + H[\Theta^{\mathrm{old}}].$
(2.8)

The EM algorithm optimizes the free energy by iterating two steps. First, given the current parameters $Θold$, the relevant expectation values under the posterior are computed in the E-step. Given these posterior expectations, $F(Θold,Θ)$ is then maximized with regard to $Θ$ in the M-step. Iteratively applying E- and M-steps locally maximizes the data likelihood.

#### 2.2.1  M-Step

The parameter update equations of the model can canonically be derived by maximizing the free energy, equation 2.8, under the normalization constraints of equations 2.2 and 2.3. Using Lagrange multipliers for the constrained optimization, we obtain after straightforward derivations:
$W_{cd} = A\, \frac{\sum_{n} p(c\,|\,\vec{y}^{(n)}, l^{(n)}, \Theta^{\mathrm{old}})\, y_d^{(n)}}{\sum_{d'} \sum_{n} p(c\,|\,\vec{y}^{(n)}, l^{(n)}, \Theta^{\mathrm{old}})\, y_{d'}^{(n)}},$
(2.9)
$R_{kc} = \frac{\sum_{n} p(k\,|\,c, l^{(n)}, \Theta^{\mathrm{old}})\, p(c\,|\,\vec{y}^{(n)}, l^{(n)}, \Theta^{\mathrm{old}})}{\sum_{c'} \sum_{n} p(k\,|\,c', l^{(n)}, \Theta^{\mathrm{old}})\, p(c'\,|\,\vec{y}^{(n)}, l^{(n)}, \Theta^{\mathrm{old}})}.$
(2.10)
For derivation details, refer to section A.1.
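The two M-step updates can be sketched compactly in NumPy (a sketch, not the authors' code; we assume the labeled-data simplification $p(k\,|\,c, l^{(n)}) = p(k\,|\,l^{(n)})$ discussed in section 3, so that the class posteriors can be held in a matrix `T`):

```python
import numpy as np

def m_step(S, T, Y, A):
    """M-step updates of eqs. 2.9 and 2.10.
    S: (N, C) posteriors p(c | y, l);  T: (N, K) posteriors over classes;
    Y: (N, D) normalized data;  A: row-sum constraint on W (eq. 2.3)."""
    W = S.T @ Y                               # responsibility-weighted data sums
    W = A * W / W.sum(axis=1, keepdims=True)  # rows normalized to A (eq. 2.9)
    R = T.T @ S                               # (K, C) co-activation sums
    R = R / R.sum(axis=1, keepdims=True)      # rows normalized to 1 (eq. 2.10)
    return W, R
```

The normalizations fall out of the Lagrange-multiplier constraints, so both updates reduce to weighted sums followed by row normalization.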

#### 2.2.2  E-Step

For the hierarchical mixture model, the required posteriors over the unobserved latents in equations 2.9 and 2.10 can be efficiently computed in closed form in the E-step. Due to an interplay of the Poisson distribution used and the constraint on $W$ of equation 2.3, the equations greatly simplify and can be shown to follow a softmax function with weighted sums over the inputs $y_d^{(n)}$ and $u_k^{(n)}$ as arguments (see section A.1):
$p(c\,|\,\vec{y}^{(n)}, l^{(n)}, \Theta^{\mathrm{old}}) = \frac{\exp(I_c^{(n)})}{\sum_{c'} \exp(I_{c'}^{(n)})}, \quad \text{with}$
(2.11)
$I_c^{(n)} = \sum_{d} \log(W_{cd}^{\mathrm{old}})\, y_d^{(n)} + \log \Big( \sum_{k} u_k^{(n)} R_{kc}^{\mathrm{old}} \Big),$
(2.12)
$u_k^{(n)} = \begin{cases} p(k\,|\,l^{(n)}) = \delta_{k l^{(n)}} & \text{for labeled data} \\ p(k) = \frac{1}{K} & \text{for unlabeled data.} \end{cases}$
(2.13)
Also note that the posteriors $p(c\,|\,\vec{y}, l, \Theta)$ for labeled data and $p(c\,|\,\vec{y}, \Theta)$ for unlabeled data differ only in the chosen distribution for $u_k$.
For the E-step posterior over classes $k$, we obtain:
$p(k\,|\,c, l^{(n)}, \Theta^{\mathrm{old}}) = \begin{cases} p(k\,|\,l^{(n)}) = \delta_{k l^{(n)}} & \text{for labeled data} \\ p(k\,|\,c, \Theta^{\mathrm{old}}) = \frac{R_{kc}^{\mathrm{old}}}{\sum_{k'} R_{k'c}^{\mathrm{old}}} & \text{for unlabeled data.} \end{cases}$
(2.14)
The expression for unlabeled data makes use of the assumption of a uniform prior in equation 2.1. Under the assumption of a nonuniform class distribution, the weights $R_{kc}$ would be weighted by the priors $p(k)$, which here simply cancel out.
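In vectorized form, the E-step of equations 2.11 to 2.13 is a log-space softmax (a sketch with our own naming; `U` holds one-hot rows for labeled points and uniform rows $1/K$ for unlabeled ones):

```python
import numpy as np

def e_step(Y, U, W, R):
    """Posterior p(c | y, l) of eqs. 2.11-2.13.
    Y: (N, D) normalized data; U: (N, K) top-down label code;
    W: (C, D) and R: (K, C) current parameters."""
    I = Y @ np.log(W).T + np.log(U @ R)   # (N, C) inputs I_c, eq. 2.12
    I -= I.max(axis=1, keepdims=True)     # shift for numerical stability
    S = np.exp(I)
    return S / S.sum(axis=1, keepdims=True)  # softmax, eq. 2.11
```

The max-shift before exponentiation leaves the softmax unchanged but avoids overflow for large $A$, where the inputs $I_c$ can be large in magnitude.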

#### 2.2.3  Probabilistically Optimal Classification

Once we have obtained a set of values for the model parameters $\Theta$ by applying the EM algorithm to training data, we can use the trained generative model to infer the posterior distribution $p(k\,|\,\vec{y}, \Theta)$ given a previously unseen observation $\vec{y}$. For our model, this posterior is given by
$p(k\,|\,\vec{y}, \Theta) = \sum_{c} \frac{R_{kc}}{\sum_{k'} R_{k'c}}\, p(c\,|\,\vec{y}, \Theta).$
(2.15)
While this expression provides a full posterior distribution, the maximum a posteriori (MAP) value can be used for deterministic classification.
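For deterministic classification, equation 2.15 followed by an argmax can be sketched as follows (a self-contained sketch assuming unlabeled test data, that is, uniform top-down input $u_k = 1/K$; the function name is our own):

```python
import numpy as np

def classify(Y, W, R):
    """MAP classification via eq. 2.15 for unlabeled test points.
    Y: (N, D) data; W: (C, D); R: (K, C)."""
    # p(c | y): softmax over I_c with u_k = 1/K (eqs. 2.11-2.13)
    I = Y @ np.log(W).T + np.log(R.mean(axis=0))       # (N, C)
    I -= I.max(axis=1, keepdims=True)
    S = np.exp(I); S /= S.sum(axis=1, keepdims=True)
    # p(k | y) = sum_c R_kc / (sum_k' R_k'c) * p(c | y)  (eq. 2.15)
    P = S @ (R / R.sum(axis=0, keepdims=True)).T       # (N, K)
    return P.argmax(axis=1)                            # MAP class labels
```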

### 2.3  Truncated Variational EM

Especially for multiple-cause models, the computation of full posterior distributions to perform EM updates quickly becomes infeasible with an increasing number of hidden units. As a solution, truncated approximations of full posterior distributions, which regard only those few units that contribute most to the posterior, were introduced to achieve scalability in a wide variety of applications (Lücke & Eggert, 2010; Henniges, Turner, Sahani, Eggert, & Lücke, 2014; Sheikh, Shelton, & Lücke, 2014; Dai & Lücke, 2014). For mixture models, truncated approximations also appear as a very natural choice for complexity reduction (for gaussian mixture models see, e.g., Shelton, Gasthaus, Dai, Lücke, & Gretton, 2014; Hughes & Sudderth, 2016): since, for typical data, at most a few clusters significantly overlap, it is sufficient to consider only a few clusters in order to approximate the posterior of any given data point well. Setting the posterior values of the disregarded clusters to exactly zero can then result in significant reductions of computational cost without a notable loss in classification accuracy. Following Forster and Lücke (2017), such truncated approaches are variational EM approximations that do not assume factored variational distributions but are instead proportional to the exact posteriors in low-dimensional subspaces. For the purposes of this letter, we regard truncated distributions of the form
$q^{(n)}(c;\, \mathcal{K}, \Theta) = \frac{p(c, \vec{y}^{(n)}\,|\,\Theta)}{\sum_{c' \in \mathcal{K}^{(n)}} p(c', \vec{y}^{(n)}\,|\,\Theta)}\, \delta(c \in \mathcal{K}^{(n)}),$
(2.16)
where $\delta(c \in \mathcal{K}^{(n)})$ is an indicator function: $\delta(c \in \mathcal{K}^{(n)}) = 1$ if $c \in \mathcal{K}^{(n)}$ and zero otherwise, and $\mathcal{K}$ denotes the collection of sets $\mathcal{K} = (\mathcal{K}^{(1)}, \ldots, \mathcal{K}^{(N)})$, with one variational parameter $\mathcal{K}^{(n)}$ per data point $\vec{y}^{(n)}$. The variational distribution gives rise to a free energy of the form
$\mathcal{F}(\mathcal{K}, \Theta) = \sum_{n=1}^{N} \log \sum_{c \in \mathcal{K}^{(n)}} p(c, \vec{y}^{(n)}\,|\,\Theta),$
(2.17)
which lower-bounds the data log likelihood with regard to the generative model defined by $p(c, \vec{y}^{(n)}\,|\,\Theta)$ (Lücke, 2016). In general, it is more efficient to optimize the free energy $\mathcal{F}(\mathcal{K}, \Theta)$ instead of the log likelihood. For the truncated distributions, equation 2.16, the M-step equations remain unchanged compared to the M-steps for the exact posterior, except that expectation values are now given by (see Lücke & Eggert, 2010; Lücke, 2016)
$\langle g(c) \rangle_{q^{(n)}} = \frac{\sum_{c \in \mathcal{K}^{(n)}} p(c, \vec{y}^{(n)}\,|\,\Theta)\, g(c)}{\sum_{c' \in \mathcal{K}^{(n)}} p(c', \vec{y}^{(n)}\,|\,\Theta)}.$
(2.18)
If the truncated E-step also increases the free energy $\mathcal{F}(\mathcal{K}, \Theta)$ in equation 2.17, then a variational EM algorithm is obtained that increases the lower bound (see equation 2.17) of the likelihood.
For mixture models and a given data point $\vec{y}^{(n)}$, we define the set $\mathcal{K}^{(n)}$ to consist of $C'$ states. In this case, expectation values for the M-step have to be computed based on just these $C'$ states in $\mathcal{K}^{(n)}$ (see equation 2.18). According to the theoretical results for TV-EM (Lücke, 2016), standard M-steps of a mixture model with truncated expectation values, equation 2.18, increase the free energy, equation 2.17. A procedure that also increases the free energy in the E-step is obtained by considering that in equation 2.17, the logarithm is a concave function and its argument is a sum of nonnegative probabilities. Therefore, if we demand that $C'$ remains constant, $\mathcal{F}(\mathcal{K}, \Theta)$ increases whenever we replace a state $\tilde{c} \in \mathcal{K}^{(n)}$ by a new state $c$ previously not in $\mathcal{K}^{(n)}$ such that
$p(c, \vec{y}^{(n)}\,|\,\Theta) > p(\tilde{c}, \vec{y}^{(n)}\,|\,\Theta).$
(2.19)
It is a design choice of the algorithm how strongly one aims at increasing the free energy based on this criterion. In one extreme, one could terminate updating $\mathcal{K}^{(n)}$ in the E-step as soon as one new state is found with a larger joint than the lowest joint of the states in $\mathcal{K}^{(n)}$. However, if the increase in the free energy is too small, the number of iterations until convergence could increase disproportionately to the gained speedup per iteration. In the other extreme, one could terminate updating $\mathcal{K}^{(n)}$ only after $\mathcal{F}(\mathcal{K}, \Theta)$ is fully maximized. For general graphical models, finding the optimal $\mathcal{K}^{(n)}$ can be computationally very expensive or even infeasible. Thus, for most applications, it seems most promising to use an operating regime between these two extremes. For mixture models, however, a full optimization can be obtained because an exhaustive computation of $p(c, \vec{y}^{(n)}\,|\,\Theta)$ for all states (clusters $c$) is possible. The criterion, equation 2.19, then simply translates to defining $\mathcal{K}^{(n)}$ for a given $\vec{y}^{(n)}$ such that
$\forall c \in \mathcal{K}^{(n)}, \; \forall \tilde{c} \notin \mathcal{K}^{(n)}: \quad p(c, \vec{y}^{(n)}\,|\,\Theta) > p(\tilde{c}, \vec{y}^{(n)}\,|\,\Theta),$
(2.20)
subject to $|\mathcal{K}^{(n)}| = C'$ for all $n$. That is, we define $\mathcal{K}^{(n)}$ to consist of the clusters $c$ with the $C'$ largest joints $p(c, \vec{y}^{(n)}\,|\,\Theta)$. Sets $\mathcal{K}^{(n)}$ defined in this way necessarily maximize the free energy in the truncated variational E-step but require at least a partial sorting, which adds to the computational cost. Criterion 2.19 also allows for more efficient procedures that increase instead of maximize the free energy. Also, the constraint of equally sized $\mathcal{K}^{(n)}$ for all $n$ could be relaxed.
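The full optimization of equation 2.20 indeed needs only a partial sort, as the following sketch illustrates (function names are our own; `np.argpartition` performs the partial sorting mentioned above):

```python
import numpy as np

def truncated_sets(log_joint, C_prime):
    """Select, per data point, the C' clusters with the largest joints
    p(c, y | Theta) (criterion 2.20) via a partial sort.
    log_joint: (N, C) array of log p(c, y^(n) | Theta)."""
    # first C' columns hold the indices of the C' largest entries (unordered)
    return np.argpartition(-log_joint, C_prime - 1, axis=1)[:, :C_prime]

def truncated_expectations(log_joint, idx):
    """Truncated posterior q^(n)(c) of eq. 2.16 on the retained clusters."""
    lj = np.take_along_axis(log_joint, idx, axis=1)
    lj -= lj.max(axis=1, keepdims=True)   # stable exponentiation
    q = np.exp(lj)
    return q / q.sum(axis=1, keepdims=True)
```

A partial sort runs in $O(C)$ per data point rather than the $O(C \log C)$ of a full sort, which matters when $C' \ll C$.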

## 3  A Neural Network for Optimal Hierarchical Learning

For the purposes of this study, we now turn to the task of specifying a neural network formulation that corresponds to learning and inference in the hierarchical generative model of section 2. The study of optimal learning and inference with neural networks is a popular research field, and we here follow an approach similar to Lücke and Sahani (2008), Keck et al. (2012), Nessler et al. (2009), and Nessler, Pfeiffer, Buesing, and Maass (2013).

### 3.1  A Neural Network Approximation

Consider the neural network in Figure 2 with neural activities $\vec{y}$, $\vec{s}$, and $\vec{t}$. We refer to the neurons $y_{1 \ldots D}$ as the observed layer, the neurons $s_{1 \ldots C}$ as the first hidden layer, and the neurons $t_{1 \ldots K}$ as the second hidden layer. We assume the values of $\vec{y}$ to be obtained from a set of unnormalized data points $\vec{\tilde{y}}$ by equation 2.4, and the label information to be presented as a top-down input vector $\vec{u}$ as given in equation 2.13. Furthermore, we assume the neural activities $\vec{s}$ and $\vec{t}$ to be normalized to $B$ and $B'$, respectively (such that $\sum_d y_d = A$, $\sum_k u_k = 1$, $\sum_c s_c = B$, and $\sum_k t_k = B'$, with $A > D$ and $B, B' > 0$). For the neural weights $(W, R)$ of the network, which we treat for now as distinct from the generative weights of the mixture model, we consider Hebbian learning with a subtractive synaptic scaling term (see, e.g., Abbott & Nelson, 2000):
$\Delta W_{cd} = \epsilon_W (s_c y_d - s_c W_{cd})$
(3.1)
$\Delta R_{kc} = \epsilon_R (t_k s_c - t_k R_{kc}),$
(3.2)
where $\epsilon_W > 0$ and $\epsilon_R > 0$ are learning rates. These learning rules are local, can integrate both supervised and unsupervised learning, are highly parallelizable, and result in normalized weights that we can relate to our generative model as follows: By taking sums over $d$ and $c$, respectively, we observe that the learning dynamics results in $\sum_d W_{cd}$ converging to $A$ and $\sum_c R_{kc}$ converging to $B$ (due to the activities $\vec{y}$ and $\vec{s}$ being normalized accordingly). If we therefore now assume the weights $W$ and $R$ to be normalized to $A$ and $B$, respectively, we can compute how a given weight adapts with cumulative learning steps. For small learning rates, we can approximate the weight updates by $\Delta W_{cd} = \epsilon_W s_c y_d$ and $\Delta R_{kc} = \epsilon_R t_k s_c$ followed by explicit normalization to $A$ and $B$, respectively. Using the superscript $(n)$ to denote the parameter states and activities of the network at the $n$th learning step, we can write the effect of such subsequent weight updates as
$W_{cd}^{(n+1)} = A\, \frac{W_{cd}^{(n)} + \epsilon_W s_c^{(n)} y_d^{(n)}}{\sum_{d'} \big( W_{cd'}^{(n)} + \epsilon_W s_c^{(n)} y_{d'}^{(n)} \big)} \quad \text{and} \quad R_{kc}^{(n+1)} = B\, \frac{R_{kc}^{(n)} + \epsilon_R t_k^{(n)} s_c^{(n)}}{\sum_{c'} \big( R_{kc'}^{(n)} + \epsilon_R t_k^{(n)} s_{c'}^{(n)} \big)},$
(3.3)
where $s_c^{(n)} = s_c(\vec{y}^{(n)}, \vec{u}^{(n)}, W^{(n)}, R^{(n)})$ denotes the activation of neuron $s_c$ at the $n$th iteration, which depends on the inputs $\vec{y}^{(n)}$, $\vec{u}^{(n)}$ and the weights $W^{(n)}, R^{(n)}$. Similarly, $t_k^{(n)} = t_k(\vec{s}^{(n)}, \vec{u}^{(n)}, R^{(n)})$ depends on $\vec{s}^{(n)}$, $\vec{u}^{(n)}$, and $R^{(n)}$. By iteratively applying equations 3.3 $N$ times, we can obtain formulas for the weights $W^{(N)}$ and $R^{(N)}$, that is, the weights after having learned from $N$ data points. If learning converges and $N$ is large enough, these can be regarded as the converged weights. It turns out that the emerging large nested sums can, at the point of convergence, be compactly rewritten through the use of Taylor expansions and the geometric series. Section A.2 gives details on the necessary analytical steps. As a result, we obtain that the following equations must be satisfied for $W$ and $R$ at convergence:
$W_{cd} \approx A\, \frac{\sum_n s_c^{(n)} y_d^{(n)}}{\sum_{d'} \sum_n s_c^{(n)} y_{d'}^{(n)}} \quad \text{and} \quad R_{kc} \approx B\, \frac{\sum_n t_k^{(n)} s_c^{(n)}}{\sum_{c'} \sum_n t_k^{(n)} s_{c'}^{(n)}}.$
(3.4)
Equations 3.4 become exact fixed points of the learning rules in equations 3.1 and 3.2 in the limit of small learning rates $\epsilon_W$ and $\epsilon_R$ and large numbers of data points $N$. Given the normalization constraints demanded above, equations 3.4 apply for any neural activation rules for $s_c$ and $t_k$ as long as learning follows equations 3.1 and 3.2 and as long as learning converges.
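A single online learning step of equations 3.1 and 3.2 is then just two rank-one updates (a sketch; note how the row sums of $W$ and $R$ are preserved when $\vec{y}$ and $\vec{s}$ are normalized to $A$ and $B = 1$):

```python
import numpy as np

def hebbian_step(W, R, y, s, t, eps_W=0.01, eps_R=0.01, labeled=True):
    """One Hebbian update with subtractive synaptic scaling.
    W: (C, D); R: (K, C); y: (D,); s: (C,); t: (K,).
    R is updated only for labeled data (see the discussion of Table 1)."""
    # eq. 3.1: dW_cd = eps_W * (s_c * y_d - s_c * W_cd)
    W = W + eps_W * s[:, None] * (y[None, :] - W)
    if labeled:
        # eq. 3.2: dR_kc = eps_R * (t_k * s_c - t_k * R_kc)
        R = R + eps_R * t[:, None] * (s[None, :] - R)
    return W, R
```

Summing the $W$ update over $d$ gives $\epsilon_W s_c (\sum_d y_d - \sum_d W_{cd})$, so rows of $W$ that already sum to $A$ stay there; the same argument applies to $R$ and $B$.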
Figure 2:

Graphical illustration of the hierarchical recurrent neural network.


For our purpose, we now identify $s_c$ with the posterior probability $p(c\,|\,\vec{y}, l, \Theta)$ for labeled data and $p(c\,|\,\vec{y}, \Theta)$ for unlabeled data, given by equations 2.11 to 2.13 with $\Theta = (W, R)$:
$s_c^{(n)} := p(c\,|\,\vec{y}^{(n)}, l^{(n)}, \Theta^{(n)}) = \frac{\exp(I_c^{(n)})}{\sum_{c'} \exp(I_{c'}^{(n)})}, \quad \text{with}$
(3.5)
$I_c^{(n)} = \sum_{d} \log(W_{cd}^{(n)})\, y_d^{(n)} + \log \Big( \sum_{k} u_k^{(n)} R_{kc}^{(n)} \Big)$
(3.6)
and $u_k^{(n)}$ as given by equation 2.13, which incorporates the label information.
Furthermore, we identify $t_k$ with the posterior distribution over classes $k$, which for labeled data is $p(k\,|\,l)$ as given in equation 2.14 and for unlabeled data $p(k\,|\,\vec{y}, \Theta)$ as given by equation 2.15:
$t_k^{(n)} := \begin{cases} p(k\,|\,l^{(n)}) = \delta_{l^{(n)} k} & \text{for labeled data} \\ p(k\,|\,\vec{y}^{(n)}, \Theta^{(n)}) = \sum_c \frac{R_{kc}^{(n)}}{\sum_{k'} R_{k'c}^{(n)}}\, s_c^{(n)} & \text{for unlabeled data.} \end{cases}$
(3.7)

The complete set of activation and learning rules, after identifying the neural activities $s_c$ and $t_k$ with the respective posterior distributions, is summarized in Table 1. By comparing equations 3.4 with the M-step equations 2.9 and 2.10, we can now observe that such neural learning converges to the same fixed points as EM for the hierarchical Poisson mixture model (note that we set $B = B' = 1$ as $s_c$ and $t_k$ sum to one). While the identification of the neural weights $W_{cd}$ with the generative weights at convergence is straightforward, we have to restrict learning of $R_{kc}$ to labeled data to obtain a neural equivalent for $R_{kc}$. In that case, $p(k\,|\,c, l^{(n)}, \Theta^{\mathrm{old}}) = p(k\,|\,l^{(n)})$, which corresponds to our chosen activities $t_k$ for labeled inputs. (In section 3.3, we will show a way to relax this restriction by using self-labeling on unlabeled data with high inference certainty.)

Table 1:
Neural Network Formulation of Probabilistic Inference and Maximum Likelihood Learning.
| Neural Simpletron | | |
|---|---|---|
| **Input** | | |
| Bottom up | $\tilde{y}_d$ (unnormalized data) | (T1.1) |
| Top down | $u_k = \delta_{kl}$ for labeled data; $u_k = \frac{1}{K}$ for unlabeled data | (T1.2) |
| **Activation across layers** | | |
| Observation layer | $y_d = (A - D)\, \frac{\tilde{y}_d}{\sum_{d'} \tilde{y}_{d'}} + 1$ | (T1.3) |
| First hidden | $s_c = \frac{\exp(I_c)}{\sum_{c'} \exp(I_{c'})}$, with | (T1.4) |
| | $I_c = \sum_d \log(W_{cd})\, y_d + \log\big(\sum_k u_k R_{kc}\big)$ | (T1.5) |
| Second hidden | $t_k = u_k$ for labeled data; $t_k = \sum_c \frac{R_{kc}}{\sum_{k'} R_{k'c}}\, s_c$ for unlabeled data | (T1.6) |
| **Learning of neural weights** | | |
| First hidden | $\Delta W_{cd} = \epsilon_W (s_c y_d - s_c W_{cd})$ | (T1.7) |
| Second hidden | $\Delta R_{kc} = \epsilon_R (t_k s_c - t_k R_{kc})$ for labeled data | (T1.8) |

In other words, by executing the online neural network of Table 1, we optimize the likelihood of the generative model, equations 2.1 to 2.3. The network's neural activities provide the posterior probabilities, which we can, for example, use for classification. The computation of posteriors is in general a difficult and computationally intensive endeavor, and their interpretation as neural activation rules is usually difficult. In our case, however, because of a specific interplay between the introduced constraints, the categorical distributions, and the Poisson noise, the posteriors and their neural interpretation greatly simplify.

All equations in Table 1 can directly be interpreted as neural activation or learning rules. Let us consider an unnormalized data point $\vec{\tilde{y}} = (\tilde{y}_1, \ldots, \tilde{y}_D)^T$ as bottom-up input to the network. Labels are neurally coded as top-down information $\vec{u} = (u_1, \ldots, u_K)^T$, where the entry $u_l$ equals one if $l$ is the label and all other units are zero. In the case of unlabeled data, all labels are assumed to be equally likely at $1/K$. As the first processing step, a divisive normalization, equation T1.3, is executed to obtain the activations $y_d$. Considering equations T1.4 and T1.5, we can interpret $I_c$ as the input to neural unit $s_c$. The input consists of a bottom-up and a top-down activation. The bottom-up input is the standard weighted summation of neural networks, $\sum_d \log(W_{cd})\, y_d$ (note that we could redefine the weights by $\tilde{W}_{cd} := \log W_{cd}$). Likewise, the top-down input is a standard weighted sum, $\sum_k u_k R_{kc}$, but affects the input through a logarithm. Both sums can be computed locally at the neural unit $c$. The inputs to the hidden units $s_c$ are then combined using a softmax function, which is also standard for neural networks. However, in contrast to discriminative networks, the weighted sums and the softmax function are here a direct result of the correspondence to a generative mixture model (compare also Jordan & Jacobs, 1994). The activation of the top layer, equation T1.6, is directly given by the top-down input $u_k$ if the data label is known. For unlabeled data, the inference again takes the form of a weighted sum over bottom-up inputs, which are now the activations $s_c$ from the middle layer. Regarding learning, both equations T1.7 and T1.8 are local Hebbian learning equations with synaptic scaling. The weights of the first hidden layer are updated on all data points during learning, while those of the second hidden layer learn only from labeled input data.
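Putting the rules of Table 1 together, one online training epoch of the network can be sketched as follows (a compact illustration with our own conventions, not the authors' implementation; `labels` holds the class index for labeled points and -1 for unlabeled ones):

```python
import numpy as np

def train_epoch(Y_raw, labels, W, R, A, eps_W=0.05, eps_R=0.05, rng=None):
    """One pass of the online network of Table 1 over the data set."""
    rng = np.random.default_rng(rng)
    W, R = W.copy(), R.copy()
    K = R.shape[0]
    D = Y_raw.shape[1]
    for n in rng.permutation(len(Y_raw)):
        y = (A - D) * Y_raw[n] / Y_raw[n].sum() + 1.0     # T1.3 normalization
        if labels[n] >= 0:
            u = np.zeros(K); u[labels[n]] = 1.0           # T1.2 labeled
        else:
            u = np.full(K, 1.0 / K)                       # T1.2 unlabeled
        I = np.log(W) @ y + np.log(u @ R)                 # T1.5 input I_c
        s = np.exp(I - I.max()); s /= s.sum()             # T1.4 softmax
        W += eps_W * s[:, None] * (y[None, :] - W)        # T1.7 on all data
        if labels[n] >= 0:
            R += eps_R * u[:, None] * (s[None, :] - R)    # T1.6/T1.8 (t = u)
    return W, R
```

All updates are local to the connected units, which is what makes the network straightforward to parallelize with standard deep learning tooling.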

Other kinds of generative layers could be imagined that can (depending on the data) be more suitable—for example, GMMs for not necessarily nonnegative data in a Euclidean space or generative convolutional models (see, e.g., Dai, Exarchakis, & Lücke, 2013; Gal & Ghahramani, 2016; Patel, Nguyen, & Baraniuk, 2016) to exploit prior knowledge about image data. Derivations of corresponding simpletron layers are, however, not necessarily as straightforward as for the Poisson model. M-steps that adhere to the form of equation 3.4 are not generally given and may require further approximations or modified neural learning rules to allow for the identification of EM and neural network fixed points. Similarly, neural activation rules for gaussian noise would be different from those in equations 3.5 and 3.6 (which follow from the Poisson assumption). Instead of the standard sums of weights in equation 3.6, a gaussian noise assumption would result in activations proportional to squared distances between cluster centers and data points.

As a control of our analytical derivations above, Figure 3 shows a direct comparison of the likelihood under the EM equations 2.9 and 2.10 and under the corresponding neural learning rules in Table 1. We here used the MNIST data set as an example and trained both EM and the network with $C=1000$ and $A=900$. The scale of the learning rates of the network, $\epsilon_W$ and $\epsilon_R$, was set to produce training iterations comparable to those of EM. We then verified numerically that the local optima of the neural network are indeed approximate local optima of the EM algorithm and vice versa. Note in this respect that although neural learning has the same convergence points as EM learning for the mixture model, at finite distances from the convergence points, neural learning follows different gradients, such that the trajectories of the network in parameter space differ from those of EM. By adjusting the learning rates in equations T1.7 and T1.8, the gradient directions can be changed in a systematic way without changing the convergence points, which we observed to be beneficial for avoiding convergence to shallow local optima.

Figure 3:

Comparison of EM and neural simpletrons. Shown are the mean log likelihoods for both algorithms on the MNIST data set over 10 runs. The inlaid plot shows a finer scale for the $y$-axis using the same $x$-axis. Errors of the means are too small to be visible.


The equations defining the neural network are elementary and very concise, and they contain only four free parameters: the number of hidden units $C$, an input normalization constant $A$, and the learning rates $\epsilon_W$ and $\epsilon_R$. Because of its concise form, we call the network a neural simpletron (NeSi).

In the experiments in section 4, we differentiate between five neural network approximations on the basis of Table 1. These result from two different approximations of the activations in the first hidden layer, two different approximations for the activations in the second hidden layer, and a truncated network approximation. These approximations are discussed in sections 3.2, 3.3, and 3.4, respectively.

### 3.2  Recurrent, Feedforward, and Greedy Learning

The complete formulas for the first hidden layer, given in equations T1.4 and T1.5, define a recurrent network, that is, a network that combines both bottom-up and top-down information. The first summation in $I_c$ incorporates the bottom-up information. Due to the chosen normalization in equation T1.3 with a background value of $+1$, all summands in this term are nonnegative. The value of this sum over bottom-up connections will be high for input data $\vec{y}$ generated by the hidden unit $c$. The second summation in $I_c$ incorporates top-down information. The weighted sum inside the logarithm, which can take the label information into account, always yields values between zero and one. Thus, because of the logarithm, this second term is always nonpositive and suppresses the activation of the unit. This suppression is stronger the less likely it is that the given hidden unit $c$ belongs to the class of the provided label $l$ (for labeled data) and the less likely it is that this unit becomes active at all. Because of these recurrent connections between the first and second hidden layers, we refer to our method in Table 1 as r-NeSi (“r” for recurrent) in the experiments. With “recurrent,” we do not mean a temporal memory of sequential inputs but the direction in which information flows through the network (following, for example, the definition of recurrent by Dayan & Abbott, 2001).

To investigate the influence of such recurrent information in the network, we also test a pure feedforward version of the first hidden layer. There, we remove all top-down connections by discarding the second term in equation T1.5. Such a feedforward formulation of the network is equivalent to treating the distribution $p(c|k,R)$ in the first hidden layer as a uniform prior distribution $p(c)=1/C$. We refer to this feedforward network as ff-NeSi in the experiments. Since ff-NeSi is stripped of all top-down recurrence and the fixed points of the second hidden layer now depend only on the activities of the first hidden layer at convergence, it can also be trained disjointly using a greedy layer-by-layer approach, which is customary for deep networks (e.g., Hinton, Osindero, & Teh, 2006).

### 3.3  Self-Labeling

So far, we trained the top layer of NeSi completely supervised by updating the weights in equation T1.8 only on labeled data. When labeled data are sparse, it could be beneficial to also make use of unlabeled data in this layer. We can do so by letting the network itself provide the missing labels (a procedure often termed “self-labeling”; see, e.g., Lee, 2013; Triguero, García, & Herrera, 2015). The availability of the full posterior distribution in the network (see equation T1.6 for unlabeled data) allows us to selectively use only those inferred labels where the network shows a very high classification certainty. As an index of decision certainty, we use the best versus second best ($BvSB$) measure on $t_k$, which is the absolute difference between the most likely and the second most likely prediction. Such a measure gives a sensible indicator for high skewness of the distribution toward a single class (Joshi, Porikli, & Papanikolopoulos, 2009). If the $BvSB$ lies above some threshold parameter $\vartheta$, which we treat as an additional free parameter, we approximate the full posterior in $t_k$ by the MAP estimate. In that case, we set $t_k \to \mathrm{MAP}(t_k)$, such that $t_k$ for unlabeled data now holds the one-hot coded inferred label information, with which we can then update the top layer in the usual fashion using equation T1.8.
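The self-labeling step can be sketched in a few lines of NumPy; the function name and the convention of returning `None` for uncertain data points are illustrative choices, not part of the original formulation:

```python
import numpy as np

def self_label(t, threshold):
    """Best-versus-second-best (BvSB) self-labeling (sketch).

    t:         (K,) posterior over classes for an unlabeled data point
    threshold: the certainty threshold (the free parameter called theta above)
    Returns a one-hot MAP label if the network is certain enough, else None.
    """
    second, best = np.sort(t)[-2:]
    if best - second > threshold:      # BvSB: difference of the two largest entries
        one_hot = np.zeros_like(t)
        one_hot[np.argmax(t)] = 1.0    # MAP estimate replaces the full posterior
        return one_hot
    return None                        # too uncertain: skip the top-layer update
```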

This specific manner of using inferred labels in the neural network is again not imposed ad hoc but can be derived from the underlying generative model by considering the M-step, equation 2.10, for unlabeled data. When in the generative model the posterior $p(k|\vec{y},\Theta) = \sum_c p(k|c,\Theta)\, p(c|\vec{y},\Theta)$ comes close to a hard max, the summation must be dominated by summands that all belong to the same class; that is, $p(c|\vec{y},\Theta)$ takes high values only for those subclasses $c$ that all have high values $p(k|c,\Theta)$ for the same class $k$. For these units, we can then replace $p(k|c,\Theta)$ by the MAP estimate in close approximation. We can therefore rewrite the products in equation 2.10 for unlabeled data as
$p(k|c,\Theta)\, p(c|\vec{y}^{(n)},\Theta) \approx \delta_{k\tilde{l}}\; p(c|\vec{y}^{(n)},\Theta) \qquad \forall n \in \mathcal{N}: p(k|\vec{y}^{(n)},\Theta) \approx \delta_{k\tilde{l}},$
(3.8)
with the inferred label $\tilde{l}$. Here, for all data points $n \in \mathcal{N}$ with high classification certainty, $p(c|\vec{y}^{(n)},\Theta)$ acts as a filter, such that only those terms contribute where $p(k|c,\Theta)$ is close to a hard max. With this approximation, we can replace the dependency of the first factor in equation 3.8 on specific units $c$ by a common dependency on all units that are connected to unit $k$ (as the inferred label $\tilde{l}$ depends on all those units). These results can then again be translated into neural learning rules, where the top-layer activation depends only on the combined input to that unit, as done above.

We mark those NeSi networks where we use self-labeling in the top layer with a superscript $+$ (i.e., r$+$-NeSi and ff$+$-NeSi). Although we here use the MAP estimate of $tk$ during training, because of the validity of equation 3.8 at high inference certainty, we are still learning in the context of the generative model, equations 2.1 to 2.3. Thus, we still keep the full posterior distribution in $tk$ for inference, as well as all identifications of section 3.1.

### 3.4  Truncated Simpletrons

Based on the close association of the neural network to the generative mixture model, an application of TV-EM (see section 2.3) to neural simpletrons is straightforward. For this, we use the feedforward formulation, which allows considering the input and first hidden layer to be optimized separately from the second hidden layer (and separately from self-labeling approaches). Learning of the weights $W$ then follows the unsupervised likelihood optimization of a (nonhierarchical) normalized Poisson mixture model. The criterion (see equation 2.19) for selecting states $c \in \mathcal{K}^{(n)}$ and $\tilde{c} \notin \mathcal{K}^{(n)}$ of the truncated mixture model then reduces as follows:
$p(c,\vec{y}^{(n)}|\Theta) > p(\tilde{c},\vec{y}^{(n)}|\Theta)$
$\Leftrightarrow\quad \log\Big(\tfrac{1}{C}\prod_d \mathrm{Pois}(y_d; W_{cd})\Big) > \log\Big(\tfrac{1}{C}\prod_d \mathrm{Pois}(y_d; W_{\tilde{c}d})\Big)$
$\Leftrightarrow\quad \sum_d \big(y_d \log(W_{cd}) - \log(\Gamma(y_d+1)) - W_{cd}\big) > \sum_d \big(y_d \log(W_{\tilde{c}d}) - \log(\Gamma(y_d+1)) - W_{\tilde{c}d}\big)$
$\Leftrightarrow\quad \sum_d y_d \log(W_{cd}) - \sum_d W_{cd} > \sum_d y_d \log(W_{\tilde{c}d}) - \sum_d W_{\tilde{c}d}$
$\Leftrightarrow\quad \sum_d \log(W_{cd})\, y_d = I_c > I_{\tilde{c}} = \sum_d \log(W_{\tilde{c}d})\, y_d,$
(3.9)
where the last step is a consequence of the normalized weights, equation 2.3, used for the mixture model. Note that for discrete $y_d$, the gamma function equals $\Gamma(y_d+1) = y_d!$.

Considering equation 3.9, it is hence sufficient to compare only the first hidden layer inputs $I_c$ for each data point $\vec{y}^{(n)}$ in order to construct the sets $\mathcal{K}^{(n)}$. Sets that maximize the free energy in the E-step are consequently obtained by selecting those $C'$ clusters $c$ with the highest values $I_c$. In the mixture model, approximate truncated posteriors are then obtained by setting all posteriors $p(c|\vec{y}^{(n)},\Theta)$ for $c \notin \mathcal{K}^{(n)}$ to zero and renormalizing the remaining $p(c|\vec{y}^{(n)},\Theta)$ to sum to one.

For application of TV-EM to neural simpletrons, replacing the exact posteriors with truncated posteriors is straightforward. As the posteriors $p(c|\vec{y}^{(n)},\Theta)$ are represented by the activities $s_c$ of the first hidden layer, the computation of these activities (see equations T1.4 and T1.5) simply changes to take the following form:
$\text{Compute } I_c^{(n)} = \sum_d \log\big(W_{cd}^{(n)}\big)\, y_d^{(n)} \quad (\text{as for ff-NeSi}),$
(3.10)
$\text{Define } \mathcal{K}^{(n)} \text{ s.t. } \forall c \in \mathcal{K}^{(n)}, \forall \tilde{c} \notin \mathcal{K}^{(n)}: I_c^{(n)} > I_{\tilde{c}}^{(n)},$
(3.11)
$\text{Compute } s_c^{(n)} = \frac{\exp\big(I_c^{(n)}\big)}{\sum_{c' \in \mathcal{K}^{(n)}} \exp\big(I_{c'}^{(n)}\big)}\, \delta\big(c \in \mathcal{K}^{(n)}\big),$
(3.12)
where $\mathcal{K}^{(n)}$ is an index set of size $|\mathcal{K}^{(n)}| = C'$. The results on the equivalence between neural network learning and EM learning directly carry over to truncated learning. The neural network can now be shown to optimize the truncated free energy, equation 2.17, using the truncated inference, equations 3.10 to 3.12, instead of equations T1.4 and T1.5, with the learning equations T1.6 to T1.8 unchanged. We will refer to a simpletron with truncated middle-layer activations, equation 3.12, as a truncated neural simpletron (t-NeSi).
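The truncated E-step of equations 3.10 to 3.12 can be sketched as follows; `np.argpartition` performs the selection of the $C'$ largest inputs in $O(C)$:

```python
import numpy as np

def truncated_activations(I, C_prime):
    """Truncated middle-layer activations (sketch of eqs. 3.11 and 3.12).

    I:       (C,) inputs I_c, computed as for ff-NeSi (eq. 3.10)
    C_prime: size C' of the truncated index set K
    """
    # Select the C' units with the highest inputs (eq. 3.11)
    K_set = np.argpartition(I, -C_prime)[-C_prime:]
    # Softmax restricted to K with exact zeros elsewhere (eq. 3.12)
    s = np.zeros_like(I)
    e = np.exp(I[K_set] - I[K_set].max())
    s[K_set] = e / e.sum()
    return s
```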

#### 3.4.1  Computational Complexity Reduction

Truncated approaches generally reduce the complexity of inference because the number of evaluated hidden states per data point can be drastically reduced (e.g., Lücke & Eggert, 2010; Dai & Lücke, 2014; Sheikh et al., 2014; Lücke, 2016). For mixture models, the reduction of states at first glance does not appear very significant (in contrast to multiple-causes models), as the number of hidden states scales linearly with the number of hidden variables. However, the exact zeros for posterior probabilities also result in a large reduction of computational cost in our case. Equation 3.10 for $I_c$ is here still computed in full, which is of $O(CD)$. But for the updates of the weights $W_{cd}$, equation T1.7, the required computations reduce from $O(CD)$ to $O(C'D)$ after truncation, as those $s_c$ values that equal zero result in no changes to their corresponding weights. Furthermore, and less significantly, the computation of $s_c$ directly reduces from $O(C)$ to $O(C')$. Even with the fully computed $I_c$, we thus still reduce the computational cost by a number of numerical operations per data point proportional to $(C-C')D$. Considering that the additional operations needed to find the largest $C'$ elements are typically of order $O(C + C'\log C)$ per data point (Lam & Ting, 2000) or just $O(C)$ (Blum, Floyd, Pratt, Rivest, & Tarjan, 1973), we can expect the overall operations required for t-NeSi to be reduced by a large fraction compared to nontruncated NeSi networks.
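To make the cost reduction concrete, a sparse middle-layer weight update might look as follows. Since equation T1.7 is not reproduced in this section, the particular Hebbian step with synaptic scaling shown here is an assumed form; the point is only that rows with $s_c = 0$ are never touched, so the update is $O(C'D)$ instead of $O(CD)$:

```python
import numpy as np

def truncated_W_update(W, s, y, eps_W, A):
    """Sparse weight update after truncation (sketch; assumed Hebbian form).

    Only the C' rows with nonzero activation s_c are updated; the
    synaptic scaling renormalizes each touched row to sum to A.
    """
    for c in np.nonzero(s)[0]:                # only the C' units in the truncated set
        W[c] += eps_W * s[c] * (y - W[c])     # Hebbian step toward the data point
        W[c] *= A / W[c].sum()                # synaptic scaling (weight normalization)
    return W
```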

For experiments with large numbers of hidden units (namely, on MNIST and NIST SD19), we perform additional experiments using the t-NeSi networks to investigate the benefits of such truncated approaches. These networks have one additional free parameter, $C'$, which depends primarily on the cluster structure of the data itself rather than on the other network parameters. Furthermore, as will be shown on MNIST, tuning of this parameter is still possible with very few labels and can even be done directly on the unsupervised likelihood of the first hidden layer with no validation set at all.

## 4  Numerical Experiments

We apply an efficiently scalable implementation of our network to three standard benchmarks for classification on nonnegative data:2 the 20 Newsgroups text data set (Lang, 1995), the MNIST data set of handwritten digits (LeCun et al., 1998), and the NIST Special Database 19 of handwritten characters (Grother, 1995). To investigate the task of learning from few labels, we randomly divide the training parts of the data sets into labeled and unlabeled partitions, where we make sure that each class holds the same number of labeled training examples if possible. We repeat experiments for different proportions of labeled data and measure the classification error on the blind test set. For all such settings, we report the average test error over a given number of independent training runs with new random labeled and unlabeled data selection. Details on parallelization and weight initialization are in appendix B. Detailed statistics of the obtained results are in appendix C.

### 4.1  Parameter Tuning

For the basic NeSi algorithms, we have four free parameters: the normalization constant $A$ in the bottom layer, the number of hidden units $C$ and the learning rate $∊W$ in the middle layer, and the learning rate $∊R$ in the top layer. The optional self-labeling and truncation procedures to further improve learning will add a fifth and sixth free parameter, respectively. The parameter $ϑ$ will set the $BvSB$ threshold for self-labeling (top layer), and the parameter $C'$ will set the number of considered middle-layer units for truncated learning.

To optimize the free parameters in the semisupervised setting with only a few labeled data points, it is customary to use a validation set, which comprises labeled data in addition to the labels available in the training set of the given setting (e.g., using a validation set of 1000 labeled data points to tune parameters in the setting of 100 labels). As this procedure does not guarantee that the resulting optimal parameter setting could also have been found with the limited number of labels in the given training setting, such results reflect the performance limit of the model rather than the actual performance when given only very restricted numbers of labeled data. As already done in Forster et al. (2015), we therefore train our model given a strictly limited total number of labels for the complete tuning and training procedure in order to address our goal. This implies that we also have to tune all free parameters in the same setting as for training, without any additional labeled data. In doing so, we make sure that our results are achievable using no more labels than provided within each training setting. Furthermore, using only training data for parameter optimization ensures a fully blind test set, such that the test error gives a reliable index of generalization.

To construct the training and validation set for parameter tuning, we consider the setting of 10 labeled training data points per class (i.e., 200 labeled data points for 20 Newsgroups and 100 labeled data points for MNIST). This is the setting with the lowest number of labels on which models are generally compared on MNIST. For simplicity, we take half of these labeled data as the validation set (class balanced and randomly drawn) and use the other labeled half plus all unlabeled training data as the training set for parameter tuning. With this data split, we optimize the parameters of the r-NeSi network via a coarse manual grid search. For the search space, we may consider run time versus performance trade-offs where necessary (e.g., with an upper bound on the network size and a lower bound on the learning rates). Keeping the optimized parameter setting of r-NeSi fixed, we optimize only $ϑ$ for r$+$-NeSi. For comparison, we keep the same parameter settings for the feedforward networks (ff-NeSi and ff$+$-NeSi) without further optimization. Finally, for t-NeSi, we optimize only the truncation parameter $C'$ and keep all other parameters fixed.

Once optimized in this semisupervised setting, we keep the free parameters fixed for all following experiments. When evaluating the performance of the networks, we perform repeated experiments with different sets of randomly chosen training labels. This evaluation scheme is possible only with more labels available than used by each single network. However, this procedure is purely to gather meaningful statistics about the mean and variance of the acquired results, as these can vary based on the set of randomly chosen labels. As the experiments are performed independently of each other and the parameters are not further tuned based on these results on the test set, it is safe to say that the acquired results are a statistical representation of the performance of our models given no more than the corresponding number of labels in each setting.

A more rigorous parameter tuning would also allow for retuning of all parameters for each model and each new label setting, making use of the additional training label information in the settings where more than 100 labels are available, which we, however, refrained from doing for our purposes. The overall tuning, training, and testing protocol is shown in Figure 4.

Figure 4:

Tuning, training, and testing protocol for the NeSi algorithms. During tuning, the free parameters are optimized on a split of the training data into a training and validation set with five randomly chosen labeled data points per class in each and all remaining unlabeled data points in the training set. These data sets with their chosen labels remain fixed during all tuning iterations. With the resulting set of optimized free parameters, the network is then trained on all available training data and labels in the given setting and is evaluated on the fully blind test set. This last training and testing step is repeated with a new, randomly chosen class-balanced set of training labels for multiple independent iterations to gain the mean generalization error of the algorithms.


### 4.2  Document Classification (20 Newsgroups)

The 20 Newsgroups data set in the bydate version consists of 18,774 newsgroup documents, of which 11,269 form the training set and the remaining 7505 form the test set. Each data vector comprises the raw occurrence frequencies of 61,188 words in each document. We preprocess the data using only tf-idf weighting (Sparck Jones, 1972). No stemming, removal of stop words, or frequency cutoffs were applied. The documents belong to 20 different classes of newsgroup topics that are partitioned into six different subject matters (comp, rec, sci, forsale, politics, and religion). We show experiments both for classification into subject matter (6 classes) and for the more difficult full 20-class problem.
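As a minimal sketch of this preprocessing step, tf-idf weighting of the raw count vectors could look as follows. The exact tf-idf variant (smoothing, document-length normalization) used in the experiments is not specified here, so the textbook form $\mathrm{idf} = \log(N/\mathrm{df})$ is an assumption:

```python
import numpy as np

def tfidf(counts):
    """Plain tf-idf weighting of raw word counts (sketch).

    counts: (N_docs, N_words) matrix of raw occurrence frequencies
    Returns a nonnegative weighted matrix, as required by the Poisson model.
    """
    N = counts.shape[0]
    df = (counts > 0).sum(axis=0)         # document frequency of each word
    idf = np.log(N / np.maximum(df, 1))   # guard against words that never occur
    return counts * idf
```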

#### 4.2.1  Parameter Tuning on 20 Newsgroups

In the following, we give a short overview of the parameter tuning on the 20 Newsgroups data set. We use the procedure described in section 4.1 to optimize the free parameters of NeSi using only 200 labels in total while keeping a fully blind test set. The parameters are optimized with respect to the more common 20-class problem, and we then keep the same parameter setting also for the easier 6-class task. We allowed a training time of 200 iterations over the whole training set and restricted the parameters in the grid search such that sufficient convergence was given within this limitation.

Hidden units. Following the above tuning protocol for 20 Newsgroups (20 classes) results in a best-performing architecture of $D$–$C$–$K$ = 61188–20–20, that is, the complete setting $C=K=20$. Generally, we would expect the overcomplete setting $C>K$ to allow for more expressive representations. This is indeed the case for the 6-class problem ($K=6$), for which we find that $C=20$ (61188–20–6) is still the best setting. For the 20-class problem, however, more than $K$ middle-layer units were not beneficial. Using more than 20 middle-layer units ($C>20$) for the $K=20$ problem could be hindered here by the high dimensionality of the data relative to the number of available training data points, as well as by the prominent noise when taking all words of a given document into account.

Normalization. Because of the introduced background value of $+1$ (see equation T1.3), the normalization constant $A$ has a lower bound at the dimensionality of the input data, $D=61,188$. For very low values $A \gtrsim D$, the model is unable to differentiate the observed patterns from background noise. At the other extreme, $A \to \infty$, the softmax function converges to a winner-take-all maximum function. The optimal value lies in between: just high enough that the system can differentiate all classes from background noise, but still low enough to allow for a broad softmax response. For all our experiments on the 20 Newsgroups data set, we chose (following the tuning protocol) $A=80,000$ (that is, $A/D \approx 1.31$).

Learning rates. A relatively high learning rate in the first hidden layer ($\epsilon_W = 5 \times C/N$), coupled with a much lower learning rate in the second hidden layer ($\epsilon_R = 0.5 \times K/L$), yielded the best results on the validation set. Especially the high value of $\epsilon_W$ seems to be effective at avoiding shallow local optima, which exist, again, due to noise and the high dimensionality of the data compared to the relatively low number of training samples. The different learning rates $\epsilon_W$ and $\epsilon_R$ mean that the neural network follows a gradient markedly different from an EM update. This suggests that the neural network allows for improved learning compared to the EM updates it was derived from.

Note that in practice, we use normalized learning rates. The factors $C/N$ for the first hidden layer and $K/L$ for the second hidden layer represent the average activation per hidden unit over one full iteration over a data set of $N$ data points with $L$ labels. Tuning not the absolute learning rate but its proportionality to this average activation helps to decouple the optimum of the learning rates from the network size ($C$ and $K$) and from the numbers of available training data and labels ($N$ and $L$).
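A small helper makes this convention explicit; the factors 5 and 0.5 are the values reported above for 20 Newsgroups, and the default arguments are illustrative:

```python
def normalized_learning_rates(C, K, N, L, factor_W=5.0, factor_R=0.5):
    """Learning rates expressed relative to the average activation per unit.

    C, K: numbers of middle- and top-layer units
    N, L: numbers of training data points and of labeled data points
    """
    eps_W = factor_W * C / N   # middle layer: tuned factor times C/N
    eps_R = factor_R * K / L   # top layer: tuned factor times K/L
    return eps_W, eps_R
```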

BvSB threshold. Given the optimized values of the other free parameters, we found that the additional self-labeling for unlabeled data is not helpful and even harmful for the 20 Newsgroups data set. Since even in the settings with only very few labeled data points the number of provided labels per middle-layer hidden unit is already sufficiently large, the use of inferred labels only introduces destructive noise. Self-labeling is more useful in scenarios where the number of hidden units greatly surpasses the number of available labeled data points (as for MNIST, section 4.3, and NIST, section 4.4).

#### 4.2.2  Results on 20 Newsgroups (6 Classes)

We start with the easier task of subject matter classification, where the 20 newsgroup topics are partitioned into six higher-level groups that combine related topics (e.g., comp, rec). The optimal architecture for 20 Newsgroups (20 classes) on the validation set was given by the complete setting, $C=K=20$. At first glance, this suggests that no subclasses were learned and that the split in the middle layer was primarily guided by class labels. However, for classification of subject matters (6 classes), where only labels of the six higher-level topics were given, we observed the setting with $C=20$ units (61188–20–6) to be far superior to the complete setting with architecture 61188–6–6 (see Table 2). This suggests that the data structure of 20 subclasses, and not the number of label classes, determines the optimal architecture of the NeSi network (see also sections 4.3 and 4.4). In our experiments, we furthermore observed the feedforward network, which learns completely unsupervised in the middle layer, to still achieve performance similar to the recurrent r-NeSi network. This shows that the NeSi networks are able to recover individual subclasses of the newsgroups data independent of the label information. When more labels are available, however, the recurrent network improves on the feedforward version, as the additional top-down label information also leads to further fine-tuning of the learned representations in the middle layer.

Table 2:
Test Error for 10 Independent Runs on the 20 Newsgroups Data Set, when Classes Are Combined by Their Corresponding Subject Matters (Classification into $K=6$ Classes).
| Number of Labels | ff-NeSi ($C=6$, $K=6$) | ff-NeSi ($C=20$, $K=6$) | r-NeSi ($C=6$, $K=6$) | r-NeSi ($C=20$, $K=6$) |
| --- | --- | --- | --- | --- |
| 200 | 41.66 $±$ 1.21 | 14.23 $±$ 0.45 | 39.02 $±$ 1.49 | 14.21 $±$ 0.42 |
| 800 | 40.41 $±$ 1.31 | 14.04 $±$ 0.48 | 39.54 $±$ 1.64 | 14.58 $±$ 0.75 |
| 2000 | 42.31 $±$ 0.72 | 14.26 $±$ 0.47 | 40.05 $±$ 0.64 | 13.44 $±$ 0.43 |
| 11,269 | 41.85 $±$ 0.90 | 14.95 $±$ 0.73 | 36.56 $±$ 2.09 | 13.26 $±$ 0.35 |

Note: The overcomplete setting ($C>K$) shows best results, where the network is able to learn the 20 individual subclasses present in the data.

#### 4.2.3  Results on 20 Newsgroups (20 Classes)

We now continue with the more challenging 20-class problem ($K=20$). Here, we investigate semisupervised settings of 20, 40, 200, 800, and 2000 labels in total (that is, 1, 2, 10, 40, and 100 labels per class), as well as the fully labeled setting. For each setting, we present the mean test error averaged over 100 independent runs and the standard error of the mean (SEM). On each new run, a new set of class-balanced labels is chosen randomly from the training set. We train our model on the full 20-class problem without any feature selection. An example of some learned weights of r-NeSi is shown in Figure 5.

Figure 5:

Example of learned weights by the r-NeSi algorithm in the semisupervised setting of 800 labels. Shown are the 15 features with the highest learned tf-idf occurrence frequencies for each of the 20 hidden units as bar plot (scaled relative to the most likely feature). Columns next to each field show the corresponding learned class assignment. Each field is labeled by the class $k$ with the highest probability $p(k|c)$ for that field $c$. For that most likely class, the learned probabilities $p(k|c,Θ)$ and $p(c|k,Θ)$ are given.


To the best of our knowledge, most methods that report performance on the same benchmark consider easier tasks: they either break the task into binary classification between individual or merged topics (e.g., Cheng, Kannan, Vempala, & Wang, 2006; Kim, Der, & Saul, 2014; Wang & Manning, 2012; Zhu, Ghahramani, & Lafferty, 2003) or perform feature selection (e.g., Srivastava, Salakhutdinov, & Hinton, 2013; Settles, 2011) for classification. There are, however, works that are compatible with our experimental setup (Larochelle & Bengio, 2008; Ranzato & Szummer, 2008). A hybrid of generative and discriminative RBMs (HDRBM), trained by Larochelle and Bengio (2008) using stochastic gradient descent, performs semisupervised learning. They report results on 20 Newsgroups for both supervised and semisupervised setups. In the fully labeled setting, all their hyperparameters are optimized using a validation set of 1691 examples, with the remaining 9578 in the training set. In the semisupervised setup, 200 examples were used as a validation set with 800 labeled examples in the training set. To reduce the dimensionality of the input data, they used only the 5000 most frequent words. The classification performance of the method is compared in Table 3.

Table 3:
Test Error on 20 Newsgroups for Different Label Settings Using the Feedforward and the Recurrent Neural Simpletrons.
| Number of Labels | ff-NeSi | r-NeSi | HDRBM |
| --- | --- | --- | --- |
| 20 | 70.64 $±$ 0.68 (*) | **68.68 $±$ 0.77** (*) | |
| 40 | 55.67 $±$ 0.54 (*) | **54.24 $±$ 0.66** (*) | |
| 200 | 30.59 $±$ 0.22 | **29.28 $±$ 0.21** | |
| 800 | 28.26 $±$ 0.10 | **27.20 $±$ 0.07** | 31.8 (*) |
| 2000 | 27.87 $±$ 0.07 | **27.15 $±$ 0.07** | |
| 11,269 | 28.08 $±$ 0.08 | 27.28 $±$ 0.07 | **23.8** |

Notes: We differentiate here between settings with different numbers of labels available during training. For results marked with “(*),” the free parameters of the model were optimized using additional labels: NeSi used the same parameter setting in all experiments on 20 Newsgroups, which was tuned with 200 labels in total; HDRBM used 1000 labels in total for tuning in the semisupervised setting (200 additional labels for the validation set). The numbers in bold are the best performing (in terms of lowest mean error) of the compared systems for each label setting.

Here, the recurrent and feedforward networks produce very similar results, with a small advantage for the recurrent networks. In comparison with the HDRBM, ff-NeSi and r-NeSi both achieve better results in the semisupervised setting. Both algorithms remain better down to 200 labels, even though the HDRBM uses more labels for training and additional labels for parameter tuning. Performance decreases significantly only when going down even further, to only one or two labels per class for training (note that the parameters were tuned using 200 labels in total). In the fully labeled setting, the HDRBM outperforms the shown NeSi approaches significantly. However, so far we have used only one parameter setting for all experiments. Optimizing r-NeSi specifically for the fully labeled setting, we achieve test errors of $(17.85±0.01)%$. For details on the parameter tuning, see section B.5.

### 4.3  Handwritten Digit Recognition (MNIST)

The MNIST data set consists of 60,000 training and 10,000 testing data points of $28×28$ gray-scale images of handwritten digits centered by pixel mass. We perform experiments in the semisupervised setting using 10, 100, 600, 1000, and 3000 labels in total (i.e., 1, 10, 60, 100, and 300 labels per class), drawn randomly and class balanced from the 10 classes. We also consider the setting of a fully labeled training set.

#### 4.3.1  Parameter Tuning on MNIST

We here give a short overview of the parameter tuning on the MNIST data set. We again use the tuning procedure described in section 4.1 to optimize all free parameters of NeSi using only 100 labels in total from the training data, keeping a fully blind test set. We allowed a training time of 500 iterations over the whole training set and again restricted the parameters in the grid search such that sufficient convergence was reached within this limit.

Hidden units. In contrast to the 20 Newsgroups data set, for MNIST the validation error generally decreased with an increasing number of hidden units. We therefore used $C=10,000$ for all our experiments with both the feedforward and the recurrent networks, which we set as an upper limit on network size as a good trade-off between performance and required computing time. However, with so many hidden units on a training set of 60,000 data points and with as few as 10 labeled training samples in total, overfitting effects have to be taken into consideration. We discuss these in more detail in sections B.3 and B.4. In general, we encountered an increase in error rates with prolonged training times only for the r-NeSi algorithm in the semisupervised settings when no self-labeling was used. For this case only, we devised and used a stopping criterion based on the likelihood of the training data.

Normalization. The dependence of the validation error on the normalization constant $A$ shows similar behavior as for the 20 Newsgroups data set. Following a coarse screening according to the tuning protocol, the setting of $A=900$ (i.e., $A/D≈1.15$) was chosen.

Learning rates. While a high learning rate can be used to overcome shallow local optima, a lower learning rate will in general find more precise optima, at the cost of a longer training time until convergence. As a trade-off between performance and training time, we chose $∊W=0.2×C/N$ and $∊R=0.2×K/L$ for all experiments on MNIST. Since for networks using self-labeling the number of effectively used labels $L$ approaches $N$ over time, we scale the learning rate $∊R$ with $K/N$ instead of $K/L$ for these systems, that is, $∊R=0.2×K/N$ for r$+$- and ff$+$-NeSi.
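These scalings can be made concrete with the MNIST values from the text; the variable names below are chosen for illustration only.

```python
# Learning-rate settings for MNIST as given in the text (names are illustrative).
C = 10_000   # middle-layer units
K = 10       # top-layer units (classes)
N = 60_000   # training-set size
L = 100      # number of labeled examples

eps_W = 0.2 * C / N       # middle-layer rate: 0.2 x C/N
eps_R = 0.2 * K / L       # top-layer rate without self-labeling: 0.2 x K/L
eps_R_plus = 0.2 * K / N  # with self-labeling, L effectively approaches N

print(eps_W, eps_R, eps_R_plus)
```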

BvSB threshold. With $C=10,000$ and only 50 labels in total in the training set during parameter tuning, there is only a single label per 200 middle-layer fields available to learn their respective classes. In this setting, using self-labeling on unlabeled data as described in section 3.3 decreased the validation error significantly over the whole tested regime of $ϑ∈[0.1,0.2,…,0.9]$. We chose $ϑ=0.6$ as the optimal value.
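As a minimal sketch of this self-labeling decision (function and variable names are our own, not from a reference implementation), the best-versus-second-best criterion assigns an inferred label to an unlabeled point only if the margin between its two largest class posteriors exceeds $ϑ$:

```python
import numpy as np

def bvsb_self_label(posteriors, threshold=0.6):
    """Return indices and inferred labels of points whose best-versus-
    second-best (BvSB) posterior margin exceeds the threshold.

    posteriors: array of shape (N, K) with class posteriors per data point.
    """
    sorted_p = np.sort(posteriors, axis=1)      # ascending within each row
    margin = sorted_p[:, -1] - sorted_p[:, -2]  # best minus second best
    confident = np.where(margin > threshold)[0]
    labels = posteriors[confident].argmax(axis=1)
    return confident, labels
```

Only the confidently self-labeled points would then contribute to the supervised top-layer update.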

Truncation. For t-NeSi, only the additional parameter $C'$ is optimized while keeping the other parameters fixed. Complementary to the reduced computational complexity (see section 3.4), we also observe significantly faster learning in truncated networks. Figure 6 shows the training likelihood of only the first hidden layer of truncated and nontruncated NeSi for 10 independent runs each (which are, however, hardly distinguishable from one another at this scale, as the likelihoods within each setting lie too close together). As can be seen in the outer plot, the likelihood initially increases faster with lower $C'$. However, when the posterior is truncated too much, the likelihood converges to significantly lower values (inlaid plot). Notably, we can also observe that the optimal setting $C'=15$, found via optimization on the validation set, achieves a higher likelihood than all other shown settings, which could allow for parameter tuning based solely on the (unsupervised) likelihood, that is, without any additional labels.
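The effect of truncation on the middle-layer posterior can be sketched as follows (a simplified illustration with our own naming; the actual update equations are given in section 3.4): only the $C'$ largest responsibilities are kept and renormalized, so all other hidden units receive exactly zero learning signal for that data point.

```python
import numpy as np

def truncate_posterior(s, c_prime):
    """Keep the c_prime largest entries of a posterior vector s
    (shape (C,), entries summing to 1) and renormalize."""
    top = np.argpartition(s, -c_prime)[-c_prime:]  # indices of largest entries
    t = np.zeros_like(s)
    t[top] = s[top]
    return t / t.sum()
```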

Figure 6:

Likelihood of first hidden layer for different degrees of truncation. Note the different scalings of the $x$- and $y$-axis in the inset plot.


#### 4.3.2  Results on MNIST

Table 4 shows the results of the NeSi algorithms on the MNIST benchmark. As the NeSi model has no prior knowledge about spatial relations in the data, the given results are invariant to pixel permutation. As can be observed, the basic recurrent network (r-NeSi) results in significantly lower classification errors than the basic feedforward network (ff-NeSi) in the fully labeled setting, as well as settings with 600 labels or fewer. In between those extrema, we find a regime where the feedforward network not only catches up to the recurrent one but even performs slightly better. In the highly overcomplete setting that we use for MNIST, we now also see a significant gain in performance for the semisupervised settings with the additional self-labeling (ff$+$-NeSi and r$+$-NeSi). With these additional inferred labels, the feedforward network surpasses the recurrent version also in the settings with very few labels, down to a single label per class. For this last setting, however, we had to increase the training time to 2000 iterations to ensure convergence, since learning in the top layer with a single label per class per iteration is very slow when not adjusting the learning rate. The best-performing network is the truncated version of the ff$+$-NeSi network. As shown in Figure 6, truncation of the middle layer leads to convergence to optima with on average higher (middle-layer) likelihoods. Especially in settings of very few labels, such improved clustering significantly helps to maintain a low test error in the higher-level classification task.

Table 4:
Test Error on Permutation-Invariant MNIST for Different Semisupervised Settings Using the Feedforward and Recurrent Neural Simpletrons in Their Basic Form (ff-NeSi and r-NeSi) and with Self-Labeling (ff$+$-NeSi and r$+$-NeSi), as Well as the Truncated ff$+$-NeSi Network (t-NeSi).
| Number of Labels | ff-NeSi | r-NeSi | ff$+$-NeSi | r$+$-NeSi | t-NeSi |
| --- | --- | --- | --- | --- | --- |
| 10 | 55.46 $±$ 0.57 $(*)$ | 29.61 $±$ 0.57 $(*)$ | 10.91 $±$ 0.86 $(*)$ | 18.68 $±$ 0.89 $(*)$ | **7.22 $±$ 0.53** $(*)$ |
| 20 | 38.88 $±$ 0.52 $(*)$ | 21.21 $±$ 0.34 $(*)$ | 7.23 $±$ 0.35 $(*)$ | 12.46 $±$ 0.73 $(*)$ | **6.21 $±$ 0.38** $(*)$ |
| 100 | 19.08 $±$ 0.26 | 12.43 $±$ 0.15 | 4.96 $±$ 0.08 | 4.93 $±$ 0.05 | **4.23 $±$ 0.07** |
| 600 | 7.27 $±$ 0.05 | 6.94 $±$ 0.05 | 4.08 $±$ 0.02 | 4.34 $±$ 0.01 | **3.65 $±$ 0.01** |
| 1000 | 5.88 $±$ 0.03 | 6.07 $±$ 0.03 | 4.00 $±$ 0.01 | 4.26 $±$ 0.01 | **3.63 $±$ 0.01** |
| 3000 | 4.39 $±$ 0.02 | 4.68 $±$ 0.02 | 3.85 $±$ 0.01 | 4.05 $±$ 0.01 | **3.52 $±$ 0.01** |
| 60,000 | 3.27 $±$ 0.01 | **2.94 $±$ 0.01** | 3.27 $±$ 0.01 | **2.94 $±$ 0.01** | **2.94 $±$ 0.01** |

Notes: We differentiate here between settings with different numbers of labels available during training. For results marked with “(*),” the free parameters were optimized using more labels than available in the given setting. We used the same parameter setting for all experiments shown here, which was tuned using 100 labels. The results are given as the mean and standard error (SEM) over 100 independent repetitions, with randomly drawn class-balanced labels. In the fully labeled case, there are no unlabeled data points to use self-labeling on. Therefore, the results of ff- and ff$+$-NeSi are identical there, as well as those of r- and r$+$-NeSi. Numbers in bold are the best performing (in terms of lowest mean error) of the compared systems for each label setting.

We also performed experiments using semisupervised kNN as a baseline for simple discriminative clustering algorithms. We used a two-stage procedure to make use of unlabeled data for kNN. First, only labeled training examples were used to classify all unlabeled examples (similar to our self-labeling approach but without an uncertainty threshold); then the test data were classified using labeled and self-labeled training data. We optimize four free parameters for kNN: the number of neighbors $n$, the weight function (equal weighting or distance dependent weighting), the algorithm (choice of ball tree, $kd$-tree, brute force, or automated pick of the three), and the power parameter $p$ for the Minkowski metric (where, e.g., $p=1$ recovers the taxicab metric, $p=2$ corresponds to the Euclidean metric, and so forth). For the parameter optimization we used the same tuning procedure and validation set as for the neural simpletrons. We found optimal parameter settings at $n=11$, uniform weighting, automated algorithm pick, and $p=4$.
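The two-stage procedure can be sketched as follows; we use a brute-force NumPy implementation with the optimal setting reported above ($n=11$, uniform weighting, Minkowski $p=4$), so the function names and the brute-force distance computation are our own simplifications, not the exact implementation used in the experiments.

```python
import numpy as np

def knn_predict(X_train, y_train, X, n_neighbors=11, p=4):
    """Brute-force kNN with uniform weights and Minkowski-p metric."""
    # Pairwise Minkowski distances, shape (len(X), len(X_train)).
    d = (np.abs(X[:, None, :] - X_train[None, :, :]) ** p).sum(-1) ** (1.0 / p)
    nearest = np.argsort(d, axis=1)[:, :n_neighbors]
    votes = y_train[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])  # majority vote

def semisupervised_knn(X_lab, y_lab, X_unlab, X_test, **kw):
    """Stage 1: self-label all unlabeled points from the labeled ones.
    Stage 2: classify the test set using labeled plus self-labeled data."""
    y_self = knn_predict(X_lab, y_lab, X_unlab, **kw)
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, y_self])
    return knn_predict(X_all, y_all, X_test, **kw)
```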

Figure 7 shows a comparison to kNN and to standard and recent state-of-the-art approaches for 100 labels and more. In this comparison (for lack of more comparable findings), all other algorithms either use a validation set with substantially more labels than were available during training or (explicitly) use the test set for parameter tuning. If new sets of random labels were chosen between tuning iterations (for the training or validation set), even more labels than we account for in Figures 7 and 8 were actually seen by the algorithm to produce the final results. Also, some of the shown results (the TSVM, AGR, AtlasRBF, and the Em-networks) were achieved in the transductive setting, where the (unlabeled) test data are included in the training process. The NeSi approaches are, to our knowledge, so far the closest to our goal of a competitive algorithm in the limit of as few labels as possible. We explicitly avoided any training or tuning on additional labeled data or on the test set, which also prevents the risk of overfitting to the test data. The more complex a system is, the more labels are generally necessary to find parameter settings that do not overfit a small validation set and generalize poorly. When test data are used during parameter tuning, the danger of such overfitting is even more severe, as overfitting effects can be mistaken for good generalizability. Therefore, in Figure 7, we group the models by the number of additional labeled data points used in the validation set for parameter tuning and also show the number of free parameters of each algorithm, as far as we were able to estimate it from the corresponding papers. These numbers have to be taken with caution, as not all parameters can be treated equally: for some tunable parameters, a default value may already give good results most of the time, while others may have to be highly optimized for each new task. They should therefore be read as a rough index of model complexity. Regarding classification performance, the NeSi networks achieve competitive results, surpassing even deep belief networks (DBN-rNCA) and other recent approaches (such as the Embed-networks, AGR, and AtlasRBF). In light of the reduced model complexity and the number of effectively used labels, we can furthermore compare to the few very recent algorithms with a lower error rate (M1$+$M2, VAT, and the Ladder networks).

Figure 7:

Comparison of different algorithms on MNIST data with few labels. The top figure shows results for systems using 100, 600, 1000, and 3000 labeled data points for training. The algorithms are described in detail in the corresponding papers: $1$Salakhutdinov and Hinton (2007), $2$Liu et al. (2010), $3$Weston et al. (2012), $4$Pitelis et al. (2014), $5$Kingma et al. (2014), $6$Rasmus et al. (2015), $7$Miyato et al. (2016). We further compare to semisupervised k-nearest neighbors (kNN) as a baseline for simple discriminative clustering algorithms, which we optimized with the same tuning procedure and validation set as the NeSi networks. All algorithms except ours use 1000 or 10,000 additional data labels (from the training or test set) for parameter tuning. The bottom figure gives the number of tunable parameters (as estimated in Table 19 in appendix D) and, where known, learned parameters of the algorithms (note the different scales).


Figure 8:

Classification performance of different algorithms compared against varying proportions of labeled training data. The corresponding papers are listed in Figure 7. Three additional algorithms are shown only in the left-hand-side plot: MTC (Rifai, Dauphin, Vincent, Bengio, & Muller, 2011), ScatterCNN (Bruna & Mallat, 2013), and BayesCNN (Gal, Islam, & Ghahramani, 2017). For MTC and ScatterCNN, the size of the validation set is not reported. For BayesCNN, a validation set of 100 additional labels is reported for tuning of the weight decay; however, the architecture of the CNN and all remaining parameters were taken from a version already optimized on fully supervised MNIST with an unknown validation set size. The left-hand-side plot shows the achieved test errors with respect to the number of labeled data seen by the compared algorithms during training. The right-hand-side plot illustrates, for the same experiments, the total number of labeled data seen by each algorithm over the whole tuning and training procedure. For better readability, we show only the recurrent NeSi networks in the left-hand-side plot; results for the feedforward networks can be directly transferred from the right-hand side. The plots can be read similarly to ROC curves: the more a curve approaches the upper-left corner, the better the performance of a system for decreasing amounts of available labeled data.


Figure 8 shows the performance of the models with respect to the number of labels used during training (left-hand side) and with respect to the total number of labels used for the complete tuning and training procedure (right-hand side). For the NeSi algorithms, these plots are identical, as we use at most as many labels in the tuning phase as in the training phase for the shown results of 100 labels and more. No other model has yet been shown to operate in the regime in which the NeSi networks operate. For all other algorithms, these plots can be regarded as the two extreme cases; their actual performance in our chosen setting would probably lie somewhere in between (if no overfitting to the test set occurred).

One competing model that comes closest to our limit setting of as few labels as possible is an approach that combines 10 generative adversarial networks (GANs) of five layers each (Salimans et al., 2016). With as few as 20 labels for training (2 labels per class), the classification error of a single GAN was reported as $(16.77±4.52)%$, and that of the full ensemble of 10 GANs as $(11.34±4.45)%$. Comparison with the systems of Figure 8 is difficult, however, as no information about the number of additional labels in a validation set is reported. If we assume that the ensemble of 10 GANs requires at least 100 additional labels for tuning (a conservative estimate, considering that all other approaches use at least 1000 labels; see Figure 7), we can compare the GANs to the performance of t-NeSi in the limit of few labels. The classification error of $(11.34±4.45)%$ achieved by 10 GANs tuned with presumably at least 100 labels and trained with 20 labels then compares to an error of $(6.21±0.38)%$ for t-NeSi, which was tuned with exactly 100 labels and also trained with 20 labels. In settings where more labels are available during training (again under the assumption of only few additional labels for parameter tuning), the GANs will surpass the NeSi networks and perform comparably to the Ladder networks.

### 4.4  Large-Scale Handwriting Recognition (NIST SD19)

Modern algorithms, especially in the field of semisupervised learning, should be able to handle and benefit from the ever-increasing amounts of available data (big data). A task comparable to MNIST, but with many more data points and much higher input dimensionality, is given by the NIST Special Database 19. It contains over 800,000 binary $128×128$ images from 3600 different writers (with around half of the data being handwritten digits and the other half lower- and uppercase letters). We perform experiments on both digit recognition (10 classes) and case-sensitive letter recognition (52 classes).

We first applied the NeSi networks to the unpreprocessed NIST SD19 digit data with $D=16,384$ input pixels. The data are of much higher dimensionality than MNIST, and the patterns are not centered by pixel mass, which makes for a significantly more challenging task, as much more uninformative variation is kept within the data. Hence, for a mixture model, learning these variations would require many more hidden units to achieve similar performance. When keeping the same parameter setting as for MNIST (where we only increased $A$ to 25,000, giving $A/D≈1.5$, to account for the increased input dimensionality), the best performance on digit data in the fully labeled case was achieved by the r-NeSi network, with an error rate of 9.5%.

For better performance and easier comparison, we preprocessed the data similarly to MNIST (compare Cireşan, Meier, & Schmidhuber, 2012): for each image, we calculate a square bounding box, resize to $20×20$, zero-pad to $28×28$, and center by pixel mass. Finally, we invert the image, such that patterns have high pixel values instead of the background, as is the case for MNIST. For simplicity, and because of the high similarity of the tasks, we then use the same setting for our free model parameters as for MNIST without further retuning. The experiments are done using 1, 10, 60, 100, 300, or all labels per class. We allowed the same number of iterations as for MNIST to give sufficient training time for convergence. However, with roughly five times more training data than for MNIST but the same total number of labels, we now have a five times lower average activation in the top layer until self-labeling starts. In the semisupervised settings, we therefore scale the learning rate of the top layer by a factor of five compared to MNIST, to $∊R=1×K/N$, for comparable convergence times. Figure 9 shows examples of weights learned by the ff$+$-NeSi network with 10 labels per class. In Table 5, we report the mean and standard error over 10 experiments on both digit and letter data. For the NeSi networks, the results are given for the permutation-invariant task. To the best of our knowledge, this is the first system to report results for NIST SD19 in the semisupervised setting.
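The preprocessing chain above can be sketched in NumPy as follows; this is a simplified illustration (nearest-neighbor resizing instead of proper interpolation, inversion applied first for convenience, and our own function name), not the exact pipeline of Cireşan et al. (2012):

```python
import numpy as np

def preprocess_nist(img):
    """MNIST-style preprocessing sketch for one binary NIST SD19 image
    (high-valued background, dark strokes). We invert first for convenience."""
    img = 1.0 - img                                # patterns -> high pixel values
    ys, xs = np.nonzero(img)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    side = max(y1 - y0, x1 - x0)                   # square bounding box
    box = np.zeros((side, side))
    box[:y1 - y0, :x1 - x0] = img[y0:y1, x0:x1]
    idx = np.arange(20) * side // 20               # nearest-neighbor resize to 20x20
    out = np.zeros((28, 28))
    out[4:24, 4:24] = box[np.ix_(idx, idx)]        # zero-pad to 28x28
    cy, cx = (np.indices(out.shape) * out).sum(axis=(1, 2)) / out.sum()
    shift = (int(round(13.5 - cy)), int(round(13.5 - cx)))
    return np.roll(out, shift, axis=(0, 1))        # center by pixel mass
```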

Figure 9:

Visualization of learned weights of ff$+$-NeSi when trained in the semisupervised setting for NIST handwritten letters data using 520 labels (10 per class).


Table 5:
Test Error on NIST SD19 Data Set on the Task of Digit and Letter Recognition for Different Total Numbers of Labeled Data.
Digits (10 classes):

| Number of Labels/Class | 1 | 2 | 10 | 60 | 100 | 300 | Fully Labeled |
| --- | --- | --- | --- | --- | --- | --- | --- |
| #labels total | 10 | 20 | 100 | 600 | 1000 | 3000 | 344,307 |
| ff$+$-NeSi | 7.56 $±$ 1.79 | 6.15 $±$ 0.14 | 6.20 $±$ 0.16 | 6.02 $±$ 0.08 | 6.02 $±$ 0.12 | 5.70 $±$ 0.03 | 5.11 $±$ 0.01 |
| r$+$-NeSi | 9.84 $±$ 2.40 | 8.50 $±$ 2.09 | 6.14 $±$ 0.23 | 5.83 $±$ 0.14 | 5.94 $±$ 0.12 | 5.72 $±$ 0.10 | 4.52 $±$ 0.01 |
| t-NeSi | **5.71 $±$ 0.42** | **5.23 $±$ 0.15** | **5.26 $±$ 0.23** | **4.84 $±$ 0.02** | **4.86 $±$ 0.03** | **4.83 $±$ 0.02** | 4.50 $±$ 0.01 |
| 35c-MCDNN | | | | | | | **0.77** |

Letters (52 classes):

| Number of Labels/Class | 1 | 2 | 10 | 60 | 100 | 300 | Fully Labeled |
| --- | --- | --- | --- | --- | --- | --- | --- |
| #labels total | 52 | 104 | 520 | 3120 | 5200 | 15,600 | 387,361 |
| ff$+$-NeSi | 55.70 $±$ 0.62 | 51.32 $±$ 0.79 | 46.22 $±$ 0.43 | 44.24 $±$ 0.23 | 43.69 $±$ 0.21 | 42.96 $±$ 0.28 | 34.66 $±$ 0.05 |
| r$+$-NeSi | 64.97 $±$ 0.85 | 60.32 $±$ 0.91 | 54.08 $±$ 0.38 | 43.73 $±$ 0.15 | **41.57 $±$ 0.13** | **37.95 $±$ 0.12** | 31.93 $±$ 0.06 |
| t-NeSi | **52.14 $±$ 1.07** | **48.46 $±$ 0.92** | **45.62 $±$ 0.43** | **41.87 $±$ 0.32** | 41.75 $±$ 0.36 | 41.13 $±$ 0.30 | 33.34 $±$ 0.04 |
| 35c-MCDNN | | | | | | | **21.01** |

Notes: The results for NeSi are permutation invariant and given as the mean and standard error (SEM) over 10 independent repetitions, with randomly drawn, class-balanced labels. Free-parameter values as were used for MNIST. Numbers in bold are the best performing (in terms of lowest mean error) of the compared systems for each label setting.

As for MNIST, the performance of our three-layer network in the fully labeled setting is not competitive with state-of-the-art fully supervised algorithms (like the 35c-MCDNN, a committee of 35 deep convolutional neural networks; Cireşan et al., 2012). Note, however, that our results apply to the permutation-invariant setting and do not take prior knowledge about two-dimensional image data into account (as convolutional networks do). More importantly, for the settings with few labels, we see only a relatively mild increase in test error when we strongly decrease the total number of used labels. Even with just 10 labels per class, most patterns are correctly classified for the challenging task of case-sensitive letter classification (chance is below 2%). Comparison of the digit classification setting with MNIST furthermore suggests that not the relative but the absolute number of labels per class is important for learning in our networks (compare, for example, Rasmus et al., 2015, note 4).

In general, digit classification on NIST SD19 seems to be a more challenging task than MNIST (which can also be observed in the results of Cireşan et al., 2012). However, the test error in our case increased more slowly than for MNIST with decreasing numbers of labels; in the extreme case of a single label per class, it even surpassed the MNIST results. When using, as for MNIST, only 60,000 training examples for NIST, the test error for the single-label setting on digit data increased from $(7.56±1.79)%$ to $(9.10±0.92)%$ for ff$+$-NeSi, nicely showing the benefit of additional unlabeled data points for learning in NeSi networks. In fact, rare outliers are the main reason for the increase in test error in the single-label case, where two or more classes were learned completely switched; for example, all 3's were learned as 8's and vice versa. This can happen when the single randomly chosen labeled data points of two similar classes are too ambiguous and therefore lie close together at the border between two clusters. Additional unlabeled data points lead to better-defined clusters, where this problem occurs less frequently. Since in the recurrent network the label information is also fed back to the middle layer, this network is more sensitive to label information. On one hand, this helps when more label information is available. On the other hand, it also more often results in a stronger accumulation of errors in the self-labeling procedure, as wrong labels are less frequently corrected. The best result in this setting of very few labels is again achieved by the truncated feedforward network: with better-defined clusters in the middle layer, the problem of class confusion also becomes less frequent (also compare the detailed results in appendix C).

With more training data available than for MNIST, we also tried out bigger networks of 20,000 hidden units for digit data, but saw only slight improvements on the test error. This points to a limit of learnable subclasses (a.k.a. writing styles) within the data, where the modeling of more than $C=10,000$ subclasses improves performance very little, but the increased data in NIST help to better define those given subclasses.

## 5  Discussion

In this study, we explored classifier training on data sets with few labels. We put special emphasis on adhering to this restriction not only in the training phase but throughout the complete tuning, training, and testing procedure. Our tool was a novel neural network with learning rules based on a maximum likelihood objective. Starting from hierarchical Poisson mixtures, the derived three-layer directed data model takes on a form similar to learning in standard DNNs. The parameters of the network can be optimized with a very limited number of labels, and training in the same setting achieved competitive results, yielding the first network shown to operate using no more than 10 labels per class in total on the investigated data sets.

### 5.1  Relation to Standard and Recent Deep Learning

Neural simpletrons are, on one hand, similar to standard DNNs: they learn online (i.e., per data point or per mini-batch), they are efficiently scalable, and their activation and learning rules are local and consist of very elementary mathematical expressions (see Table 1). On the other hand, the NeSi networks exhibit features that are a hallmark of deep directed generative models, such as learning from unlabeled data and integration of bottom-up and top-down information for optimal inference. Comparing the learning and neural interaction equations of DNNs and the NeSi networks directly, equation T1.5 for top-down integration and the learning rules, equations T1.7 and T1.8, represent the crucial differences. The former allows the NeSi networks (the r- and r$+$-NeSi versions) to integrate top-down and bottom-up information for inference, which contrasts with the pure feedforward processing of standard DNNs. The latter show that NeSi learning is local and neurally plausible (Hebbian) while approximating likelihood optimization, which differs from the less local backpropagation used for discriminative learning in standard DNNs. In the NeSi networks, recurrent bottom-up/top-down integration was especially useful when many labels were available, particularly in the complete setting (see section B.5) or for the task of case-sensitive letter recognition (see Table 5), which represented one of the largest-scale applications considered here. When additional inferred labels were acquired through self-labeling, the (truncated) feedforward system was best at maintaining a low test error, even down to the limit of a single training label per class. For fully labeled data, the NeSi systems were not observed to be competitive (e.g., for MNIST). Discriminative approaches dominate in this regime, as it seems difficult for such a minimalistic system to compete with discriminative learning once sufficiently many labeled data points are available.
Furthermore, the generative NeSi approach relies on the possibility of learning meaningful template representations for Poisson-like distributed data (as shown, for example, in Figures 11, 5, and 9); such template representations make the networks very interpretable. However, for large image databases showing 2D images of 3D objects, for example, learning such templates based on pixel intensities seems very challenging. Approaches applied to such data therefore commonly use (handcrafted or learned) features that typically transform images into suitable metric spaces. Such data would motivate studies of networks similar to ours but assuming gaussian observables; similar approaches have shown competitive performance (Van den Oord & Schrauwen, 2014). Alternatively, 2D images of 3D objects may be transformed into nonnegative data spaces to make them suitable for our model with Poisson observables. Any such approach would, however, require the introduction of an additional feature layer, with potentially additional free parameters.

Besides the approaches studied here, many other systems are able to make use of top-down and bottom-up integration for learning and inference. Top-down information is provided in an indirect way if a system introduces new labels itself by using its own inference mechanism. Similar to the ff$+$- and r$+$-NeSi networks, this self-labeling idea has been followed repeatedly before (for a recent overview, see Triguero et al., 2015). For the NeSi systems, such feedback worked especially well, which may indicate that self-labeling is particularly well suited for deep directed models in general. Systems that make a more direct use of bottom-up and top-down information include approaches based on undirected graphical models. The most prominent examples, especially in the context of deep learning, are deep restricted Boltzmann machines (RBMs). While RBMs are successfully used in many contexts (e.g., Hinton et al., 2006; Goodfellow, Courville, & Bengio, 2013; Neftci, Pedroni, Joshi, Al-Shedivat, & Cauwenberghs, 2015), the performance of RBMs alone, without additional learning methods, does not seem to be competitive with recent results on semisupervised learning. The best-performing RBM-related systems we compared to here are the HDRBM (Larochelle & Bengio, 2008) for 20 Newsgroups and the DBN-rNCA system (Salakhutdinov & Hinton, 2007) for MNIST. Both approaches use additional mechanisms for semisupervised classification, which can be taken as evidence for standard RBM approaches being more limited when labeled data are sparse. In this semisupervised setting, both ff-NeSi and r-NeSi perform better than the DBN-rNCA approach for MNIST (see Figures 7 and 8) and better than the HDRBM for 20 Newsgroups (see Table 3). When optimized for the fully labeled setting, NeSi even improves considerably over the HDRBM in the fully labeled 20 Newsgroups task.
Recent RBM versions, enhanced and combined with discriminative deep networks (Goodfellow et al., 2013), outperform NeSi networks on fully labeled MNIST; however, the competitiveness of such approaches in semisupervised settings has not been shown so far.

To reduce network complexity and improve classification performance, we showed results with a newly introduced selection criterion for network truncation (Forster & Lücke, 2017). Reducing the number of active neurons in a network for enhanced performance is also common for standard deep networks and became popular with approaches like “dropout” (Srivastava, Hinton, Krizhevsky, Sutskever, & Salakhutdinov, 2014). However, the truncation of hidden variables used here is notably different as it uses a systematic data-driven selection of very few neurons (here, only $0.15%$) to maximize the free energy, whereas dropout typically uses a random selection of half of the neurons to reduce coadaptation.
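To illustrate this difference, the following minimal numpy sketch contrasts a data-driven truncation (keeping only the few units with the highest pre-activations, then normalizing over this subset) with a random dropout mask. All function names and toy values are our own and only illustrate the contrast; this is not the paper's implementation.

```python
import numpy as np

def truncate_activations(I, n_keep):
    """Data-driven truncation (sketch): keep only the n_keep units with
    the highest pre-activations I_c and renormalize over this subset;
    all remaining units are set to zero."""
    idx = np.argsort(I)[-n_keep:]           # indices of the strongest units
    s = np.zeros_like(I)
    e = np.exp(I[idx] - I[idx].max())       # numerically stable softmax
    s[idx] = e / e.sum()
    return s

def dropout_mask(n_units, p_drop=0.5, seed=0):
    """Dropout, for contrast: a *random* half of the units is silenced."""
    rng = np.random.default_rng(seed)
    return (rng.random(n_units) >= p_drop).astype(float)

I = np.array([0.1, 2.0, -1.0, 3.5, 0.7])
s = truncate_activations(I, n_keep=2)       # only units 1 and 3 survive
```

Whereas the dropout mask is resampled randomly, the truncation always selects the same data-dependent subset of units for a given input.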

Regarding the learning and inference equations themselves, the compactness of the equations defining the NeSi algorithms and their formulation as minimalistic neural networks represent a major difference to pure generative approaches (such as Saul et al., 1996; Larochelle & Murray, 2011; Gan et al., 2015) or combinations of DNNs and graphical models (e.g., Kingma et al., 2014). Regarding empirical comparisons, typical directed generative models are not compared on typical DNN tasks but use other evaluation criteria. Prominent or recent examples such as deep sigmoid belief networks (SBNs; see, for example, Saul et al., 1996; Gan et al., 2015) have, for instance, not been shown to be competitive with standard discriminative deep networks on semisupervised classification tasks so far. In general, a main challenge is the need to introduce approximation schemes. The accuracy of approximations for large networks, and the complexity of the networks themselves, still seem to prevent scalability or competitive performance on tasks as discussed here. In principle, however, deep directed generative models such as deep SBNs or other deep directed multiple-cause approaches are more expressive than deep mixture models. One may thus also interpret our results as highlighting the general potential of deep directed generative models for tasks such as classification.

### 5.2  Empirical Performance, Model Complexity, and Data with Few Labels

Our main empirical results for the NeSi systems were obtained using the 20 Newsgroups, the MNIST, and the NIST SD19 data sets (with MNIST simply being the data set for which most empirical data for semisupervised learning are available). Tables 3 to 5 and Figures 7 and 8 summarize the results and provide comparison to those of other approaches. The r-NeSi system is the best-performing system for the semisupervised 20 Newsgroups data set (see Table 3), but the data set is much more popular as a fully supervised benchmark (in the semisupervised setting, comparison is only possible to the HDRBM). The semisupervised MNIST benchmark is therefore more instructive for comparison.

Considering Figure 8 (right-hand side), the NeSi algorithms still perform well for a budget of 1000, 600, or just 100 labels. For 600 labels, t-NeSi has a classification error well below $4%$, and all NeSi approaches with self-labeling have classification errors below $5%$ down to 100 labels. So far, it has not been shown that classifiers can be trained with similarly low numbers of labels; the reason is that all comparable approaches use at least 1000 labels to optimize the free parameters of their respective systems. If these additional labels are not considered, the NeSi approaches t-NeSi and r$+$-NeSi are outperformed by three recent systems in the limit of few labels: M1$+$M2 (Kingma et al., 2014), VAT (Miyato et al., 2016), and the ladder network (Rasmus et al., 2015). All three systems use a combination of different approaches: M1$+$M2 (Kingma et al., 2014) combines generative learning and discriminative backpropagation learning; the results for VAT (Miyato et al., 2016) are obtained by combining a DNN using backpropagation with a smoothness constraint derived from the data distribution; and the ladder network (Rasmus et al., 2015) applies a per-layer denoising objective on top of standard discriminative models such as MLPs and CNNs. The many free parameters of M1$+$M2, VAT, and ladder networks seem to require relatively large labeled validation sets (see Figure 7). M1$+$M2 and ladder networks used 10,000 additional labels for the tuning of these parameters, while VAT used 1000 additional labels. It can be argued that free parameters could be tuned in other ways (e.g., using other, related data sets). However, it remains to be shown how well any of the recently suggested approaches would perform in such a case. The use of up to 10,000 labels may indicate that large labeled validation sets are important for some approaches to obtain high performance. Other recent work uses ensembles of generative adversarial networks (GANs; Salimans et al., 2016).
As discussed in section 4.2, comparison to the GAN approach is made difficult because the labels required for the tuning of free parameters are not reported. If we only consider labels for training, the NeSi networks are the first to report results in the absolute limit case of only a single label per class on the MNIST data set. In this limit, the t-NeSi approach achieves lower classification errors than the best results reported by Salimans et al. (2016): an error of $(7.22±0.53)%$ for t-NeSi trained with one label per class or of $(6.21±0.38)%$ when trained with two labels per class, compared to $(11.34±4.45)%$ for an ensemble of 10 GANs trained with two labels per class. How many labels the GAN ensembles require in total (for training and tuning) remains unknown.

Another way to view the comparisons in Figures 7 and 8 is to interpret the results as highlighting a performance versus model complexity trade-off. If we consider the learning and tuning protocols that were used for the different systems to achieve the reported performance, large differences in the number of tunable parameters, the size of validation sets, and the complexity of the systems can be noticed. While some systems need to tune only a few parameters, others (especially hybrid systems) require tuning of quite many (see Figure 7). Parameter tuning can be considered a second optimization loop requiring labels in addition to those of the training set. It may be argued that not considering these additional labels can favor large systems with many tunable parameters, as is typically the case when parameterized models are fitted to data. To (partly) normalize for model complexity, performance comparison with regard to the total number of required labels could therefore serve as a kind of empirical Occam's razor. If the total number of labels is considered in the case of MNIST, the comparison of system performances changes as illustrated in Figure 8. Considering the right-hand side of Figure 8, the VAT system (1000 additional labels) could, for instance, be considered to perform better than the ladder network. However, while the number of tunable parameters for the different systems and the size of the used validation sets are clearly correlated (see Figure 7), it remains unclear how many additional labels would be required by the different systems. The two plots of Figure 8 could therefore be considered as two limit cases for comparison.

Regarding the comparison of the networks themselves, M1$+$M2 is the approach most similar to the NeSi networks, as both use generative models as integral parts. Both approaches can also be taken as evidence that two hidden layers of generative latents already result in competitive performance. A difference is, however, the strong reliance of M1$+$M2 on deep neural networks to parameterize the dependencies between observed and hidden variables as well as the dependencies among hidden variables, which are optimized using DNN gradient approaches (the same applies to the DNNs used for the applied variational approximation). Inference and learning in M1$+$M2 are therefore significantly more intricate and require multiple deep networks. The generative model itself is also very different (e.g., motivated by easy differentiability based on continuous latents) and is not directly used for inference in M1$+$M2. For neural simpletrons, the generative and the neural network weights are identical and are directly used for inference.

Compared to the hybrid M1$+$M2, VAT, and ladder networks, the NeSi networks studied here are nonhybrid networks. As AGR, Atlas-RBF, and EmbeddCNN are also hybrids (Liu et al., 2010; Pitelis et al., 2014; Weston et al., 2012), the NeSi networks can be considered the best-performing nonhybrid approaches, even without self-labeling and truncated training and even if we consider exclusively the labels used for training (see Figure 8, left-hand side). With self-labeling and truncation mechanisms, r$+$-NeSi, ff$+$-NeSi, and t-NeSi are able to achieve even better performance, especially in the limit of few labels as investigated here. Self-labeling and truncation introduce one additional free parameter each, but both parameters are tunable on the same minimal validation set as the other free parameters of the NeSi networks. Furthermore, both additional mechanisms are obtained from the same single central likelihood objective, equation 2.5, from which the NeSi networks were derived.

Finally, using the NIST SD19 data set, we demonstrate the applicability of the approach to data sets with more data (up to 800,000 data points), larger input dimensionality (up to 16,384 input pixels), and more classes (up to 52 classes for case-sensitive letter recognition). NIST SD19 is known to be much more challenging than MNIST (compare, e.g., Cireşan et al., 2012). The NeSi approaches scale to all of the much larger settings investigated in Table 5, and the results show that good classification performance can be maintained with few labels. The networks can successfully leverage the larger number of unlabeled data—for example, for the setting of one training label per class (10-class NIST digit classification). For the challenging 52-class NIST setting, t-NeSi maintains above $50%$ correct classification for down to 10 labels per class in total. The NeSi classification in this setting remains fully interpretable with subclasses shown in Figure 9. In general, we have provided here the first results for semisupervised learning for NIST SD19.

### 5.3  Future Work and Outlook

As the NeSi networks share many properties with standard deep neural networks, further enhancements such as network pruning, annealing, or dropout could be investigated to further increase performance or efficiency. Also, settings with additional information in the form of correct/false classification feedback instead of labels represent an interesting future research direction. For example, Holca-Lamarre, Lücke, and Obermayer (2017) have shown, in a study with a neuroscientific focus, that classification performance can improve using a global reinforcement signal. A further reduction of label dependency could be achieved by using active learning (Cohn, Ghahramani, & Jordan, 1996) to systematically select required labels based on the uncertainty in posterior distributions. By choosing the labels to learn from better than at random, such an approach could further reduce the number of labels needed to achieve similar or better performance, as shown, for example, with the BayesianCNN (see Figure 8, left-hand side) by Gal et al. (2017). Such an active learning approach could complement the self-labeling used here: a user-provided label could, for example, be requested when decision certainty is low, while a self-provided label could be used when decision certainty is high. Any new technique for improved learning may make the algorithms more complex and may introduce new free parameters, however. For the goal studied here, such future systems will have to maintain the ability to be tunable and trainable with as few labels as possible. The same applies to any future versions of our network with more than three layers or with different layer variants.
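The proposed combination of active learning and self-labeling could be sketched as follows, with posterior entropy as a hypothetical certainty measure; all names and values here are our own illustration and are not part of the NeSi systems.

```python
import numpy as np

def pick_queries(posteriors, n_query):
    """Hypothetical active-learning step: request user labels for the
    data points with the most uncertain posteriors (highest entropy)
    and self-label the most certain ones."""
    P = np.clip(posteriors, 1e-12, 1.0)
    H = -(P * np.log(P)).sum(axis=1)          # posterior entropy per point
    order = np.argsort(H)                     # certain -> uncertain
    return order[-n_query:], order[:n_query]  # (ask user, self-label)

P = np.array([[0.98, 0.01, 0.01],             # very certain
              [0.34, 0.33, 0.33],             # very uncertain
              [0.70, 0.20, 0.10]])            # in between
ask, self_label = pick_queries(P, 1)
```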

The development of further variants of simpletron layers would allow for higher flexibility to construct simpletron networks that are most suitable for the given data. Gaussian mixture models would be an interesting and promising candidate for metric data in a Euclidean space. And incorporation of prior knowledge about spatial relations in generative convolutional variants (e.g., Dai et al., 2013; Gal & Ghahramani, 2016; Patel et al., 2016) would be more suitable for image data. However, derivations of such layers and guarantees on the (approximate) equivalence between EM and neural fixed points similar to section 3.1 are not necessarily possible or as straightforward as for the hierarchical Poisson mixture model and require careful further research.

Also, the combination with discriminative learning approaches is a promising extension. Ideally, such a combination would maintain a monolithic architecture and limited complexity. Other studies have already shown that deep discriminative models can be related to directed generative models in grounded mathematical ways (see Patel et al., 2016, for a recent example). Similarly, discriminative counterparts may be derivable for the NeSi systems.

Further potential research directions include combinations with hyperparameter optimization approaches (e.g., Thornton et al., 2013; Bergstra et al., 2013; Hutter et al., 2015) in order to increase autonomy and to further exploit the very low number of free parameters. Finally, the probabilistic nature of the NeSi networks would allow problems such as label noise to be addressed in straightforward ways, while their generative model relation would allow for the investigation of tasks other than classification.

## Notes

1

This is sometimes referred to as one-hot coding.

2

We use a Python 2.7 implementation of the NeSi algorithms, which is optimized using Theano to execute on NVIDIA TITAN X and Tesla GPUs. Details are in section B.1. The source code and scripts for repeating the experiments can be found at https://github.com/dennisforster/NeSi.

## Appendix A:  Derivation Details

Although the resulting NeSi neural network models are defined by a very compact and simple set of equations, shown in Table 1, the derivation of these equations is not necessarily trivial. We therefore give further insight into some derivation steps here to allow for a better understanding of the model at hand. In section A.1, we give details on the derivation of the EM update rules for the underlying generative model. In section A.2, we show the derivation steps necessary to attain the approximate equivalence of neural online learning with EM batch learning at convergence, which is the basis of our neural network derivation.

### A.1  EM Update Steps

#### A.1.1  E-Step

The posterior $p(k|c,l,\Theta)$ can be easily obtained by applying Bayes' rule for the labeled and unlabeled case. For $p(c|\vec{y},l,\Theta)$, however, some additional steps are necessary to attain the compact form shown in equation 2.11.

We start again with Bayes' rule and use the sum and product rule of probability to regain the conditionals, equations 2.1 to 2.3, of the generative model:
$p(c|\vec{y},l,\Theta) = \frac{p(\vec{y}|c,W)\,\sum_k p(c|k,R)\,p(k|l)}{\sum_{c'} p(\vec{y}|c',W)\,\sum_k p(c'|k,R)\,p(k|l)}.$
(A.1)
When we now insert the corresponding distributions, equations 2.2 and 2.3, into equation A.1, the benefit of assuming Poisson noise for $p(\vec{y}|c,W)$ becomes apparent. First, the factorial given by the $\Gamma$-function directly drops out. Second, by using the weight constraint, equation 2.3, the product of exponentials $\prod_d e^{-W_{cd}} = e^{-\sum_d W_{cd}} = e^{-A}$ also cancels with the denominator:
$\ldots = \frac{\prod_d (W_{cd})^{y_d}\,\Gamma(y_d+1)^{-1}\, e^{-W_{cd}}\, \sum_k R_{kc}\, u_k}{\sum_{c'} \prod_d (W_{c'd})^{y_d}\,\Gamma(y_d+1)^{-1}\, e^{-W_{c'd}}\, \sum_k R_{kc'}\, u_k} = \frac{\prod_d (W_{cd})^{y_d}\, \sum_k R_{kc}\, u_k}{\sum_{c'} \prod_d (W_{c'd})^{y_d}\, \sum_k R_{kc'}\, u_k}, \quad \text{with}$
(A.2)
$u_k = \begin{cases} p(k|l) = \delta_{kl} & \text{for labeled data} \\ p(k) = \frac{1}{K} & \text{for unlabeled data.} \end{cases}$
(A.3)
Here, we used $u_k$ as a shorthand notation to directly cover both the labeled and unlabeled case. We can now rewrite this result as a softmax function with weighted sums over the bottom-up and top-down inputs $y_d$ and $u_k$ as its argument:
$\ldots = \frac{\exp\big(\sum_d y_d \log(W_{cd}) + \log(\sum_k R_{kc}\, u_k)\big)}{\sum_{c'} \exp\big(\sum_d y_d \log(W_{c'd}) + \log(\sum_k R_{kc'}\, u_k)\big)} = \frac{\exp(I_c)}{\sum_{c'} \exp(I_{c'})}, \quad \text{with}$
(A.4)
$I_c = \sum_d \log(W_{cd})\, y_d + \log\Big(\sum_k u_k R_{kc}\Big).$
(A.5)
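As a minimal numerical sketch of equations A.4 and A.5, the posterior over the middle-layer units can be computed as a softmax over the bottom-up log-weight sums plus the top-down term. Array shapes, constants, and the toy data below are our own assumptions, not the paper's implementation.

```python
import numpy as np

def posterior_c(y, W, R, label=None):
    """p(c | y, l) as in eq. A.4: softmax over I_c of eq. A.5.
    y: (D,) nonnegative input; W: (C, D) with sum_d W_cd = A;
    R: (K, C) with sum_c R_kc = 1; label: class index or None."""
    K = R.shape[0]
    u = np.full(K, 1.0 / K)                 # unlabeled: u_k = 1/K
    if label is not None:
        u = np.zeros(K)
        u[label] = 1.0                      # labeled: u_k = delta_{kl}
    I = y @ np.log(W).T + np.log(u @ R)     # I_c of eq. A.5
    e = np.exp(I - I.max())                 # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(1)
D, C, K = 6, 4, 2
W = rng.random((C, D)) + 0.1
W *= 10.0 / W.sum(axis=1, keepdims=True)    # enforce sum_d W_cd = A = 10
R = rng.random((K, C))
R /= R.sum(axis=1, keepdims=True)           # enforce sum_c R_kc = 1
y = rng.poisson(2.0, size=D).astype(float)
s = posterior_c(y, W, R, label=1)           # posterior over the C units
```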

#### A.1.2  M-Step

To maximize the free energy with respect to the parameters $W_{cd}$ and $R_{kc}$, we use the method of Lagrange multipliers for constrained optimization:
$\frac{\partial F}{\partial W_{cd}} + \frac{\partial}{\partial W_{cd}} \sum_{c'} \lambda_{c'} \Big(\sum_{d'} W_{c'd'} - A\Big) \stackrel{!}{=} 0,$
(A.6)
$\frac{\partial F}{\partial R_{kc}} + \frac{\partial}{\partial R_{kc}} \sum_{k'} \lambda_{k'} \Big(\sum_{c'} R_{k'c'} - 1\Big) \stackrel{!}{=} 0.$
(A.7)
Starting with the first term of equation A.6 for $W_{cd}$, we insert the free energy, equation 2.8, and evaluate the partial derivative:
$\frac{\partial}{\partial W_{cd}} F(\Theta^{\text{old}},\Theta) = \sum_{n,c',k} p(c',k|\vec{y}^{(n)},l^{(n)},\Theta^{\text{old}}) \sum_{d'} \frac{\partial}{\partial W_{cd}}\Big(y_{d'}^{(n)} \log W_{c'd'} - W_{c'd'}\Big) = \sum_n p(c|\vec{y}^{(n)},l^{(n)},\Theta^{\text{old}})\Big(y_d^{(n)} \frac{1}{W_{cd}} - 1\Big).$
(A.8)
The second term of equation A.6, incorporating the Lagrange multipliers, results in
$\frac{\partial}{\partial W_{cd}} \sum_{c'} \lambda_{c'}\Big(\sum_{d'} W_{c'd'} - A\Big) = \sum_{c'} \lambda_{c'} \sum_{d'} \delta_{cc'}\,\delta_{dd'} = \lambda_c.$
(A.9)
Putting both terms back into equation A.6 and multiplying by $W_{cd}$ yields
$\sum_n p(c|\vec{y}^{(n)},l^{(n)},\Theta^{\text{old}})\,\big(y_d^{(n)} - W_{cd}\big) + \lambda_c W_{cd} \stackrel{!}{=} 0.$
(A.10)
To evaluate the Lagrange multipliers $\lambda_c$, we make use of the constraint, equation 2.3, by taking the sum over $d$:
$\Rightarrow \sum_n p(c|\vec{y}^{(n)},l^{(n)},\Theta^{\text{old}})\Big(\sum_d y_d^{(n)} - \sum_d W_{cd}\Big) + \lambda_c \sum_d W_{cd} = 0 \;\Rightarrow\; \lambda_c = \sum_n p(c|\vec{y}^{(n)},l^{(n)},\Theta^{\text{old}}) - \frac{1}{A}\sum_n p(c|\vec{y}^{(n)},l^{(n)},\Theta^{\text{old}})\sum_d y_d^{(n)}.$
(A.11)
Inserting $\lambda_c$ back into equation A.10 and canceling opposing terms finally yields the update rule for $W_{cd}$:
$\sum_n p(c|\vec{y}^{(n)},l^{(n)},\Theta^{\text{old}})\, y_d^{(n)} - W_{cd}\,\frac{1}{A}\sum_n p(c|\vec{y}^{(n)},l^{(n)},\Theta^{\text{old}}) \sum_{d'} y_{d'}^{(n)} \stackrel{!}{=} 0$
(A.12)
$\Rightarrow W_{cd} = A\,\frac{\sum_n p(c|\vec{y}^{(n)},l^{(n)},\Theta^{\text{old}})\, y_d^{(n)}}{\sum_{d'} \sum_n p(c|\vec{y}^{(n)},l^{(n)},\Theta^{\text{old}})\, y_{d'}^{(n)}}.$
(A.13)
The derivation of the $R_{kc}$ updates follows the same procedure. Evaluating the two terms in equation A.7 and multiplying by $R_{kc}$ gives
$\sum_n p(c,k|\vec{y}^{(n)},l^{(n)},\Theta^{\text{old}}) + \lambda_k R_{kc} \stackrel{!}{=} 0.$
(A.14)
Using the constraint, equation 2.2, for $R_{kc}$, the Lagrange multipliers evaluate to
$\lambda_k = -\sum_{n,c} p(c,k|\vec{y}^{(n)},l^{(n)},\Theta^{\text{old}}).$
(A.15)
Inserting these back into equation A.14, we arrive at the update rule for $R_{kc}$:
$\Rightarrow R_{kc} = \frac{\sum_n p(k|c,l^{(n)},\Theta^{\text{old}})\, p(c|\vec{y}^{(n)},l^{(n)},\Theta^{\text{old}})}{\sum_{c'} \sum_n p(k|c',l^{(n)},\Theta^{\text{old}})\, p(c'|\vec{y}^{(n)},l^{(n)},\Theta^{\text{old}})}.$
(A.16)
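For illustration, the two batch M-step updates, equations A.13 and A.16, can be written in a few numpy lines; the posterior arrays are assumed to be given, and all shapes and names are our own choices for this sketch.

```python
import numpy as np

def m_step(p_c, p_k_given_c, Y, A=10.0):
    """Batch M-step of eqs. A.13 and A.16 (sketch).
    p_c:         (N, C) posteriors p(c | y^(n), l^(n))
    p_k_given_c: (N, K, C) posteriors p(k | c, l^(n))
    Y:           (N, D) nonnegative data."""
    num_W = p_c.T @ Y                       # (C, D): sum_n p(c|..) y_d^(n)
    W = A * num_W / num_W.sum(axis=1, keepdims=True)
    # sum_n p(k|c,..) p(c|..) for every pair (k, c):
    num_R = np.einsum('nkc,nc->kc', p_k_given_c, p_c)
    R = num_R / num_R.sum(axis=1, keepdims=True)
    return W, R

rng = np.random.default_rng(0)
N, D, C, K = 20, 5, 3, 2
p_c = rng.random((N, C)); p_c /= p_c.sum(axis=1, keepdims=True)
p_kc = rng.random((N, K, C)); p_kc /= p_kc.sum(axis=1, keepdims=True)
Y = rng.poisson(3.0, (N, D)).astype(float)
W, R = m_step(p_c, p_kc, Y)
```

Note that the normalizations in the last two lines of `m_step` enforce exactly the constraints $\sum_d W_{cd} = A$ and $\sum_c R_{kc} = 1$ used in the derivation.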

### A.2  Approximate Equivalence of Neural Online Learning at Convergence

In more detail, to derive equations 3.4, we first consider the dynamic behavior of the summed weights $\overline{W}_c=\sum_d W_{cd}$ and $\overline{R}_k=\sum_c R_{kc}$. By taking sums over $d$ and $c$ for equations 3.1 and 3.2, respectively, we obtain
$\Delta \overline{W}_c = \epsilon_W\, s_c\,(A - \overline{W}_c), \qquad \Delta \overline{R}_k = \epsilon_R\, t_k\,(B - \overline{R}_k).$
(A.17)
As we assume $s_c, t_k \geq 0$, we find that for small learning rates $\epsilon_W, \epsilon_R$, the states $\overline{W}_c = A$ and $\overline{R}_k = B$ are stable (and the only) fixed points of the dynamics for $\overline{W}_c$ and $\overline{R}_k$. This applies for all $k$ and $c$ and for any $s_c$ and $t_k$ that are nonnegative and continuous with regard to their arguments.
The above result uses an approach developed by Keck et al. (2012), which we apply here to a hierarchical system with two hidden layers instead of one and with label information taken into account. By assuming normalized weights based on equations A.17, we can approximate the effect of iteratively applying equations 3.1 and 3.2 as
$W_{cd}^{(n+1)} = A\,\frac{W_{cd}^{(n)} + \epsilon_W\, s_c(\vec{y}^{(n)},\vec{u}^{(n)},\Theta^{(n)})\, y_d^{(n)}}{\sum_{d'}\Big(W_{cd'}^{(n)} + \epsilon_W\, s_c(\vec{y}^{(n)},\vec{u}^{(n)},\Theta^{(n)})\, y_{d'}^{(n)}\Big)}$
(A.18)
and
$R_{kc}^{(n+1)} = B\,\frac{R_{kc}^{(n)} + \epsilon_R\, t_k(\vec{s}^{(n)},\vec{u}^{(n)},R^{(n)})\, s_c(\vec{y}^{(n)},\vec{u}^{(n)},\Theta^{(n)})}{\sum_{c'}\Big(R_{kc'}^{(n)} + \epsilon_R\, t_k(\vec{s}^{(n)},\vec{u}^{(n)},R^{(n)})\, s_{c'}(\vec{y}^{(n)},\vec{u}^{(n)},\Theta^{(n)})\Big)},$
(A.19)
where $W^{(n)}$ and $R^{(n)}$ denote the weights at the $n$th iteration of learning, where $\Theta^{(n)} = (W^{(n)}, R^{(n)})$, and where $\vec{s}^{(n)} = \vec{s}(\vec{y}^{(n)},\vec{u}^{(n)},\Theta^{(n)})$ to abbreviate notation. Both equations can be further simplified. Using the abbreviations $F_{cd}^{(n)} = s_c(\vec{y}^{(n)},\vec{u}^{(n)},\Theta^{(n)})\, y_d^{(n)}$ and $G_{kc}^{(n)} = t_k(\vec{s}^{(n)},\vec{u}^{(n)},R^{(n)})\, s_c(\vec{y}^{(n)},\vec{u}^{(n)},\Theta^{(n)})$, we first rewrite equations A.18 and A.19 as
$W_{cd}^{(n+1)} = A\,\frac{W_{cd}^{(n)} + \epsilon_W F_{cd}^{(n)}}{\sum_{d'}\big(W_{cd'}^{(n)} + \epsilon_W F_{cd'}^{(n)}\big)} \quad\text{and}\quad R_{kc}^{(n+1)} = B\,\frac{R_{kc}^{(n)} + \epsilon_R G_{kc}^{(n)}}{\sum_{c'}\big(R_{kc'}^{(n)} + \epsilon_R G_{kc'}^{(n)}\big)}.$
(A.20)
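A single online step of equation A.20 then amounts to a Hebbian increment followed by divisive renormalization. The following sketch (with our own names and toy sizes) shows the first-layer case and illustrates that the constraint $\sum_d W_{cd} = A$ is preserved after every step.

```python
import numpy as np

def online_step_W(W, y, s, eps=0.01, A=10.0):
    """One online update of the first-layer weights as in eq. A.20
    (sketch): Hebbian increment followed by divisive renormalization,
    which keeps sum_d W_cd = A exact after every step."""
    W_new = W + eps * np.outer(s, y)        # W_cd + eps * s_c * y_d
    return A * W_new / W_new.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
C, D = 3, 4
W = rng.random((C, D)) + 0.1
W *= 10.0 / W.sum(axis=1, keepdims=True)    # start on the constraint surface
y = rng.poisson(2.0, D).astype(float)
s = rng.random(C); s /= s.sum()             # some first-layer activation
W2 = online_step_W(W, y, s)
```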
Let us suppose that learning has converged after about $T$ iterations. If we now add another $N$ iterations and repeatedly apply the learning steps, closed-form expressions for the weights $W_{cd}^{(T+N)}$ and $R_{kc}^{(T+N)}$ are given by
$W_{cd}^{(T+N)} = \frac{W_{cd}^{(T)} + \epsilon_W \sum_{n=1}^{N} F_{cd}^{(T+N-n)} \prod_{n'=n+1}^{N}\big(1 + \frac{\epsilon_W}{A}\sum_{d'} F_{cd'}^{(T+N-n')}\big)}{\prod_{n'=1}^{N}\big(1 + \frac{\epsilon_W}{A}\sum_{d'} F_{cd'}^{(T+N-n')}\big)}$
(A.21)
and
$R_{kc}^{(T+N)} = \frac{R_{kc}^{(T)} + \epsilon_R \sum_{n=1}^{N} G_{kc}^{(T+N-n)} \prod_{n'=n+1}^{N}\big(1 + \frac{\epsilon_R}{B}\sum_{c'} G_{kc'}^{(T+N-n')}\big)}{\prod_{n'=1}^{N}\big(1 + \frac{\epsilon_R}{B}\sum_{c'} G_{kc'}^{(T+N-n')}\big)}.$
(A.22)

The large products in the numerator and denominator of equations A.21 and A.22 can be regarded as polynomials of order $N$ in $\epsilon_W$ and $\epsilon_R$, respectively. Even for small $\epsilon_W$ and $\epsilon_R$, it is difficult, however, to argue that higher-order terms of $\epsilon_W$ and $\epsilon_R$ can be neglected because of the combinatorial growth of the prefactors given by the large products.

We therefore consider the approximations derived for the nonhierarchical model in Keck et al. (2012), which were applied to an equation of the same structure as equations A.21 and A.22. On closer inspection of the terms $F_{cd}^{(T+N-n)}$ and $G_{kc}^{(T+N-n)}$, we find that we can apply these approximations also in the hierarchical case. For completeness, we reiterate the main intermediate steps of these approximations below.

Taking equation A.21 as an example, we simplify its right-hand side. The approximations all assume a small but finite learning rate $\epsilon_W$ and a large number of inputs $N$. Equation A.21 is then approximated by
$W_{cd}^{(T+N)} \approx \frac{W_{cd}^{(T)} + \epsilon_W \sum_{n=1}^{N} \exp\big(\frac{\epsilon_W}{A}(N-n)\sum_{d'}\hat{F}_{cd'}^{(n)}\big)\, F_{cd}^{(T+N-n)}}{\exp\big(\frac{\epsilon_W}{A}\, N \sum_{d'}\hat{F}_{cd'}^{(0)}\big)}$
(A.23)
$\approx \exp\Big(-\frac{\epsilon_W}{A}\, N \sum_{d'}\hat{F}_{cd'}^{(0)}\Big)\, W_{cd}^{(T)} + \epsilon_W\,\hat{F}_{cd}^{(0)} \sum_{n=1}^{N}\exp\Big(-\frac{\epsilon_W}{A}\, n \sum_{d'}\hat{F}_{cd'}^{(0)}\Big)$
(A.24)
$\approx \hat{F}_{cd}^{(0)}\,\frac{\epsilon_W \exp\big(-\frac{\epsilon_W}{A}\sum_{d'}\hat{F}_{cd'}^{(0)}\big)}{1 - \exp\big(-\frac{\epsilon_W}{A}\sum_{d'}\hat{F}_{cd'}^{(0)}\big)} = A\,\frac{\hat{F}_{cd}^{(0)}}{\sum_{d'}\hat{F}_{cd'}^{(0)}} = A\,\frac{\sum_{n=1}^{N} F_{cd}^{(T+N-n)}}{\sum_{d'}\sum_{n=1}^{N} F_{cd'}^{(T+N-n)}},$
(A.25)
where $\hat{F}_{cd}^{(n)} = \frac{1}{N-n}\sum_{n'=n+1}^{N} F_{cd}^{(T+N-n')}$ (note that $\hat{F}_{cd}^{(0)}$ is the mean of $F_{cd}^{(n)}$ over the $N$ iterations starting at iteration $T$).
For the first step, equation A.23, we rewrote the products in equation A.21 and used a Taylor expansion (for details, see the supplement of Keck et al., 2012):
$\prod_{n'=n+1}^{N}\Big(1 + \frac{\epsilon_W}{A}\sum_{d'} F_{cd'}^{(T+N-n')}\Big) \approx \exp\Big(\frac{\epsilon_W}{A}(N-n)\sum_{d'}\hat{F}_{cd'}^{(n)}\Big).$
(A.26)

For the second step, equation A.24, we approximated the sum over $n$ in equation A.23 by observing that the terms with large $n$ are negligible and by approximating sums of $F_{cd}^{(T+N-n)}$ over $n$ by the mean $\hat{F}_{cd}^{(0)}$. For the last steps, equation A.25, we used the geometric series and approximated for large $N$ (for details on these last two approximations, see the supplement of Keck et al., 2012). Furthermore, we used the fact that for small $\epsilon_W$, $\frac{\epsilon_W \exp(-\epsilon_W B)}{1 - \exp(-\epsilon_W B)} \approx B^{-1}$ (which can be seen, for example, by applying l'Hôpital's rule).
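The small-learning-rate limit used in the last step is easy to verify numerically; the following snippet (our own check, not part of the paper) confirms that $\epsilon\, e^{-\epsilon B} / (1 - e^{-\epsilon B})$ approaches $1/B$ as $\epsilon \to 0$.

```python
import numpy as np

def lhs(eps, B):
    """eps * exp(-eps*B) / (1 - exp(-eps*B)), which -> 1/B for eps -> 0."""
    return eps * np.exp(-eps * B) / (1.0 - np.exp(-eps * B))

B = 5.0
# the approximation error shrinks roughly linearly with eps:
errors = [abs(lhs(eps, B) - 1.0 / B) for eps in (1e-1, 1e-2, 1e-3)]
```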

By inserting the definition of $F_{cd}^{(n)}$ into equation A.25, we finally find
$W_{cd}^{(T+N)} \approx A\,\frac{\sum_{n=1}^{N} s_c(\vec{y}^{(n)},\vec{u}^{(n)},\Theta^{(T+n)})\, y_d^{(n)}}{\sum_{d'}\sum_{n=1}^{N} s_c(\vec{y}^{(n)},\vec{u}^{(n)},\Theta^{(T+n)})\, y_{d'}^{(n)}}.$
(A.27)
Analogously, we find for $R_{kc}$,
$R_{kc}^{(T+N)} \approx B\,\frac{\sum_{n=1}^{N} t_k(\vec{s}^{(n)},\vec{u}^{(n)},R^{(T+n)})\, s_c^{(n)}}{\sum_{c'}\sum_{n=1}^{N} t_k(\vec{s}^{(n)},\vec{u}^{(n)},R^{(T+n)})\, s_{c'}^{(n)}},$
(A.28)
where we again used $\vec{s}^{(n)} = \vec{s}(\vec{y}^{(n)},\vec{u}^{(n)},\Theta^{(T+n)})$ for better readability in the last equation. If we now assume convergence, we can replace $W_{cd}^{(T+N)}$ and $W_{cd}^{(T+n)}$ by $W_{cd}$, and $R_{kc}^{(T+N)}$ and $R_{kc}^{(T+n)}$ by $R_{kc}$, to recover equations 3.4 in section 3 with converged weights $W_{cd}$ and $R_{kc}$.

Note that each approximation is individually very accurate for small $\epsilon_W$ and large $N$. Equations 3.4 can thus be expected to be satisfied with high accuracy in this case, and numerical experiments based on comparisons with EM batch-mode learning verified this high precision.

## Appendix B:  Computational Details

### B.1  Parallelization on GPUs and CPUs

The online update rules of the neural network (see Table 1) are ideally suited for parallelization using GPUs, as they break down to elementary vector or matrix multiplications. We observed GPU executions with Theano to result in training-time speed-ups of over two orders of magnitude compared to single-CPU execution (NVIDIA GeForce GTX TITAN Black GPUs versus AMD Opteron 6134 CPUs).

Furthermore, we can use the concept of mini-batch training for CPU parallelization or to optimize GPU memory usage. There, the learning effect of a small number $\nu$ of consecutive updates in equations 3.1 and 3.2 is approximated by one parallelized update over $\nu$ independent updates:
$\Delta_\nu W_{cd}^{(n)} := \epsilon_W \sum_{i=0}^{\nu-1}\Big(s_c^{(n+i)}\, y_d^{(n+i)} - s_c^{(n+i)}\, W_{cd}^{(n)}\Big), \qquad W_{cd}^{(n+\nu)} \approx W_{cd}^{(n)} + \Delta_\nu W_{cd}^{(n)},$
(B.1)
$\Delta_\nu R_{kc}^{(n)} := \epsilon_R \sum_{i=0}^{\nu-1}\Big(t_k^{(n+i)}\, s_c^{(n+i)} - t_k^{(n+i)}\, R_{kc}^{(n)}\Big), \qquad R_{kc}^{(n+\nu)} \approx R_{kc}^{(n)} + \Delta_\nu R_{kc}^{(n)}.$
(B.2)

The maximal deviation from single-step updates caused by this approximation can be shown to be of order $\mathcal{O}((\epsilon\nu)^2)$. Since this effect is negligible for $\epsilon\nu \ll 1$, as we also experimentally confirmed, we consider the mini-batch size $\nu$ solely as a parallelization parameter and not as a free parameter that could be tuned to optimize anything other than training speed.
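A minimal numpy sketch of the mini-batch update, equation B.1, for the first-layer weights follows; the names and toy sizes are our own. The test at the end of the block's development (the explicit loop with a fixed $W^{(n)}$) is exactly what the single parallelized update computes.

```python
import numpy as np

def minibatch_update_W(W, Y, S, eps=0.001):
    """Parallelized mini-batch update of eq. B.1 (sketch): the nu
    single-step updates are summed into one update computed with a
    single matrix product, using the same W^(n) for all nu terms.
    Y: (nu, D) inputs; S: (nu, C) first-layer activations."""
    delta = eps * (S.T @ Y - S.sum(axis=0)[:, None] * W)
    return W + delta

rng = np.random.default_rng(0)
nu, C, D = 8, 3, 4
W = rng.random((C, D))
Y = rng.poisson(2.0, (nu, D)).astype(float)
S = rng.random((nu, C)); S /= S.sum(axis=1, keepdims=True)
W2 = minibatch_update_W(W, Y, S, eps=0.001)
```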

### B.2  Weight Initialization

For the complete setting ($C=K$), where there is a good amount of labeled data per hidden unit even when labeled data are sparse, but where the risk of running into early local optima with poorly separated classes is high, we initialize the weights of the first hidden layer in a modified version of Keck et al. (2012): we compute the mean $m_{kd}$ and standard deviation $\sigma_{kd}$ of the labeled training data for each class $k$ and set $W_{kd} = m_{kd} + U(0, 2\sigma_{kd})$, where $U(x_{\text{dn}}, x_{\text{up}})$ denotes the uniform distribution in the range $(x_{\text{dn}}, x_{\text{up}})$.

For the overcomplete setting ($C>K$), where there are far fewer labeled data points than hidden units in the semisupervised setting and class separation is no imminent problem, we initialize the weights using all data, disregarding the label information. With the mean $m_d$ and standard deviation $\sigma_d$ over all training data points, we set $W_{cd} = m_d + U(0, 2\sigma_d)$.
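The two initialization schemes can be sketched as follows; function and variable names are our own, and this is an illustrative numpy version, not the Theano implementation.

```python
import numpy as np

def init_W(X, labels=None, C=None, seed=0):
    """First-layer weight initialization of section B.2 (sketch).
    Complete setting (labels given): class-wise mean plus uniform noise.
    Overcomplete setting (labels=None): global mean plus noise for all
    C hidden units, disregarding label information."""
    rng = np.random.default_rng(seed)
    if labels is not None:                  # complete setting (C = K)
        return np.stack([
            X[labels == k].mean(axis=0)
            + rng.uniform(0, 2 * X[labels == k].std(axis=0))
            for k in np.unique(labels)])
    m, s = X.mean(axis=0), X.std(axis=0)    # overcomplete setting (C > K)
    return m + rng.uniform(0, 2 * s, size=(C, X.shape[1]))

rng = np.random.default_rng(1)
X = rng.poisson(3.0, (50, 6)).astype(float)
labels = rng.integers(0, 2, 50)
W_complete = init_W(X, labels=labels)       # one unit per class
W_over = init_W(X, C=10)                    # 10 units, no labels used
```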

The weights of the second hidden layer are initialized as $R_{kc} = 1/C$. The only exceptions to this rule are the additional experiments on the 20 Newsgroups data set in section B.5 for the fully labeled setting. As noted in the text, in this setting, we were able to make better use of the recurrent connections of the r-NeSi network and the fully labeled data set by initializing the weights of the second hidden layer as $R_{kc} = \delta_{kc}$.

### B.3  A Likelihood Criterion for Early Stopping

Training of the first layer in the feedforward network is not influenced by the state of the second layer and is therefore independent of the number of provided labels. This is no longer the case for the recurrent network (r-NeSi): a low number of labels can lead to overfitting effects in r-NeSi when the number of hidden units in the first hidden layer is substantially larger than the number of labeled data points. However, when the inferred labels are used for training in the r$+$-NeSi network, such overfitting effects vanish again.

Since learning in our network corresponds to maximum likelihood learning in a hierarchical generative model, a natural criterion for early stopping of r-NeSi can be based on monitoring the log likelihood, which is given by equation 2.5 (with the generative weights replaced by the learned weights $(W,R)$ of the network). As soon as the scarce labeled data start overfitting the first-layer units as a result of the top-down influence in $I_c$ (compare equation T1.5), the log likelihood computed over the whole training data is observed to decrease. This decline in the data likelihood can be used as a stopping criterion to avoid overfitting without requiring additional labels.

Figure 10 shows an example of the evolution of the average log likelihood per data point during training compared to the test error. For experiments over a variety of network sizes, we found strong negative correlations of $\langle\text{PPMCC}\rangle = -0.85 \pm 0.1$. To smooth out random fluctuations in the likelihood, we compute the centered moving average over 20 iterations and stop as soon as this value drops below its maximum value by more than the centered moving standard deviation. The test error in Figure 10 is computed only for illustration purposes; in our experiments, we solely used the moving average of the likelihood to detect the drop event and stop learning. In our control experiments on MNIST, we found that the best test error generally occurred some iterations after the peak in the likelihood (compare Figure 10), which, however, we have not exploited for our reported results, for simplicity.
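The stopping rule can be sketched in a few lines. The trailing (rather than centered) moving-average window and the synthetic likelihood trace below are our own illustrative simplifications.

```python
import numpy as np

def should_stop(loglik_history, window=20):
    """Early-stopping sketch (section B.3): stop once the moving average
    of the per-data-point log likelihood drops below its running maximum
    by more than the moving standard deviation of the last window."""
    if len(loglik_history) < window:
        return False
    recent = np.asarray(loglik_history[-window:])
    avg, std = recent.mean(), recent.std()
    # moving average over all full windows seen so far:
    mav = np.convolve(loglik_history, np.ones(window) / window, mode='valid')
    return avg < mav.max() - std

# synthetic trace: the likelihood rises during learning, then declines
trace = list(np.concatenate([np.linspace(-500.0, -400.0, 60),
                             np.linspace(-400.0, -430.0, 40)]))
```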

Figure 10:

Evolution of test error (solid) and log likelihood (dashed) in r-NeSi during training on MNIST. Both show a strong negative correlation. The vertical line denotes the stopping point.


### B.4  Overfitting Control for NeSi

With the tuning protocol shown in section 4.1, we ensure that our networks will not overfit to the test set such that our reported results accurately represent the generalization error. However, overfitting to the training set can still occur and may decrease the overall performance of the networks.

With networks of 10,000 hidden units on MNIST, which learn on only 60,000 training samples, some of the hidden units adapt to represent more rarely seen patterns, while others adapt to represent patterns that are more frequent in the training data. Furthermore, the network learns the frequency at which patterns occur as the distribution $p(c|R) = \frac{1}{K}\sum_k R_{kc}$. Figure 11 displays a random selection of 100 out of the 10,000 fields after training using the r$+$-NeSi algorithm.

Figure 11:

A subset of converged weights learned by r$+$-NeSi in the setting of 100 labels. Shown are 100 of the 10,000 learned weights $W_{c,:}$ with their learned class belonging $R_{:,c}$ as columns next to the fields (starting with class 0 at the top to class 9 at the bottom of each column). Blue fields have $p(c|R)\cdot N < 0.5$. Those are "forgotten fields" of the network whose connections are too weak for further specialization. The red fields have $0.5 \leq p(c|R)\cdot N < 1.5$. Those are fields that are highly specialized to a single pattern in the training set.


Fields colored blue in Figure 11 have a very low probability, $p(c|R)\cdot N<0.5$, with $p(c|R)$ close to zero for most of them. These fields have ceased to specialize further to their respective pattern classes because sufficiently many other fields have already optimized for a class. They are effectively discarded by the network itself, as the low values in $R_{kc}$ further suppress the activation of these fields in the recurrent network. With longer training times, $p(c|R)$ of these fields converges to zero, which practically prunes the network to the remaining size. The red fields in Figure 11 have a probability of $0.5\le p(c|R)\cdot N<1.5$ of being activated, which corresponds to approximately one data point in the training set that activates the field. Such weights are often adapted to a single training data point with a very uncommon writing style (like the crooked 7 in the fourth column, ninth row) or a preprocessing artifact (like the cropped 3 in the second column, seventh row).

We controlled for the effect of rarely active fields (blue and red in Figure 11), especially as some of them are clearly overfitted to the training set. For that, we compared an original network of 10,000 fields (i.e., 10,000 middle-layer neurons) with a network from which all fields with activity $p(c|R)\cdot N<1.5$ were removed (around 15% of the 10,000 fields). We observed no significant change in test error between the original and the pruned network. The reason is that the pruned fields are rarely activated at test time because of their low similarity to test data and strong suppression by the network itself (due to the low activation rates learned during training).
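The pruning comparison described above can be sketched compactly. The following is a minimal NumPy sketch under our notation, where $p(c|R)=\frac{1}{K}\sum_{k}R_{kc}$ and $N$ is the number of training samples; the function name and array layout are illustrative, not the actual implementation.

```python
import numpy as np

def prune_rare_fields(W, R, N, threshold=1.5):
    """Remove middle-layer fields whose expected activation count
    p(c|R) * N falls below `threshold` (the text uses 1.5).

    W: (C, D) weight matrix, one field per row.
    R: (K, C) top-layer connections; p(c|R) = (1/K) * sum_k R[k, c].
    N: number of training data points.
    """
    p_c = R.mean(axis=0)           # p(c|R) for each field c
    keep = p_c * N >= threshold    # keep fields seen for >= ~1.5 data points
    return W[keep], R[:, keep]
```

Running this on the trained network removes the "forgotten" and single-pattern fields while leaving the frequently activated fields untouched.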

### B.5  Optimization in the Fully Labeled Setting for 20 Newsgroups

In the fully labeled setting on the 20 Newsgroups data set, the recurrence of r-NeSi yields a larger benefit. Changing the initialization procedure from $R_{kc}=1/C$ to $R_{kc}=\delta_{kc}$ helps to avoid shallow local optima and reaches a test error of $(17.85\pm0.01)\%$. This initialization fixes the class $k$ of subclass $c$ to a single specific class by setting all connections between the first and second hidden layers to other classes to a hard zero. Training with such a weight initialization is, however, useful only when very large numbers of labeled data are available. The top-down label information is then a necessary mechanism to ensure that the middle-layer units learn the appropriate representation of their respective fixed class (e.g., that a middle-layer unit fixed to class alt.atheism mainly, or exclusively, learns from data belonging to that class). So instead of first learning representations in the middle layer purely from the data and then learning the classes with respect to these representations from the labels, as the (greedy) ff-NeSi does, the r-NeSi algorithm can also conversely shape its middle-layer representations in relation to their probability of belonging to the class of the presented data point.

To decide between this initialization procedure and our standard one in the fully labeled setting, we used the fully labeled training set during parameter tuning (again with a half/half split into training and validation sets). With this initialization's better avoidance of shallow optima, lower learning rates $\epsilon_W$ became more beneficial ($\epsilon_R$ drops out as a free parameter, as the top layer remains fixed). A coarse manual grid search in this setting resulted in optimal parameter values of $A=90{,}000$ ($A/D\approx1.47$) and $\epsilon_W=0.02$ (which we chose as the lowest search value to restrict computational time), while keeping $C=20$. These results also show that optimizing parameters for each individual label setting and changing the initialization procedure based on label availability could lead to better parameter settings and stronger performance in the other settings as well.
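The two initializations can be written down compactly. The following is an illustrative NumPy sketch; `init_R` and its arguments are our naming, and the $\delta_{kc}$ branch assumes $C=K$, as in the fully labeled 20 Newsgroups setting with $C=20$.

```python
import numpy as np

def init_R(K, C, fix_classes=False):
    """Initialize top-layer connections R with shape (K, C).

    Standard setting: R[k, c] = 1/C, leaving the class assignment of
    each subclass c to be learned from the data.
    Fully labeled alternative: R[k, c] = delta_{kc}, hard-assigning
    subclass c to class c (connections to all other classes are zero);
    this simple sketch assumes C == K.
    """
    if fix_classes:
        assert K == C, "delta initialization assumes one subclass per class"
        return np.eye(K)
    return np.full((K, C), 1.0 / C)
```

Because the zero entries stay zero under the multiplicative learning updates, the delta initialization permanently ties each middle-layer unit to its class, which is why it only pays off with many labels.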

## Appendix C:  Detailed Training Results

We performed 100 independent training runs for the results obtained on 20 Newsgroups and MNIST in sections 4.2 and 4.3 and 10 independent training runs for the NIST data set in section 4.4, with each of the given networks, for each label setting, and with new randomly chosen, class-balanced labels for each run. Tables 6 to 18 give a detailed summary of the statistics of the obtained results. They show the mean test error alongside the standard error of the mean (SEM), the standard deviation (in percentage points), and the minimal and maximal test error over the given number of runs. For the networks with self-labeling of unlabeled data (ff$+$- and r$+$-NeSi), we show only the semisupervised settings, as they are identical to their respective standard versions in the fully labeled case.
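For reference, the reported statistics can be computed from the per-run test errors as follows (a minimal sketch; `run_statistics` is our naming). The SEM is the sample standard deviation divided by the square root of the number of runs, consistent with the tables, e.g., $6.82/\sqrt{100}\approx0.68$ in Table 6.

```python
import numpy as np

def run_statistics(test_errors):
    """Summarize test errors (in percent) over independent runs:
    mean, standard error of the mean (SEM), standard deviation,
    minimum, and maximum, as reported in Tables 6 to 18."""
    x = np.asarray(test_errors, dtype=float)
    sd = x.std(ddof=1)               # sample standard deviation
    return {
        "mean": x.mean(),
        "sem": sd / np.sqrt(x.size),  # SEM = SD / sqrt(number of runs)
        "sd": sd,
        "min": x.min(),
        "max": x.max(),
    }
```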

Table 6: ff-NeSi on 20 Newsgroups.

| Number of Labels | Mean Test Error | SD | Minimum | Maximum |
| --- | --- | --- | --- | --- |
| 20 | 70.64 $\pm$ 0.68 | 6.82 | 55.35 | 88.59 |
| 40 | 55.67 $\pm$ 0.54 | 5.44 | 37.53 | 68.13 |
| 200 | 30.59 $\pm$ 0.22 | 2.22 | 26.97 | 37.57 |
| 800 | 28.26 $\pm$ 0.10 | 1.00 | 26.68 | 31.59 |
| 2000 | 27.87 $\pm$ 0.07 | 0.74 | 25.85 | 30.01 |
| 11,269 | 28.08 $\pm$ 0.08 | 0.78 | 26.29 | 30.25 |
Table 7: r-NeSi on 20 Newsgroups.

| Number of Labels | Mean Test Error | SD | Minimum | Maximum |
| --- | --- | --- | --- | --- |
| 20 | 68.68 $\pm$ 0.77 | 7.72 | 49.98 | 85.48 |
| 40 | 54.24 $\pm$ 0.66 | 6.59 | 37.00 | 66.76 |
| 200 | 29.28 $\pm$ 0.21 | 2.09 | 25.90 | 39.60 |
| 800 | 27.20 $\pm$ 0.07 | 0.70 | 25.85 | 29.41 |
| 2000 | 27.15 $\pm$ 0.07 | 0.65 | 25.77 | 29.13 |
| 11,269 | 27.28 $\pm$ 0.07 | 0.73 | 26.08 | 29.82 |
Table 8: ff-NeSi on MNIST.

| Number of Labels | Mean Test Error | SD | Minimum | Maximum |
| --- | --- | --- | --- | --- |
| 10 | 55.46 $\pm$ 0.57 | 5.72 | 42.49 | 69.62 |
| 20 | 38.88 $\pm$ 0.52 | 5.19 | 27.86 | 49.62 |
| 100 | 19.08 $\pm$ 0.26 | 2.61 | 13.31 | 24.93 |
| 600 | 7.27 $\pm$ 0.05 | 0.49 | 6.01 | 8.76 |
| 1000 | 5.88 $\pm$ 0.03 | 0.31 | 5.19 | 6.97 |
| 3000 | 4.39 $\pm$ 0.02 | 0.15 | 4.01 | 4.89 |
| 60,000 | 3.27 $\pm$ 0.01 | 0.08 | 3.08 | 3.46 |
Table 9: r-NeSi on MNIST.

| Number of Labels | Mean Test Error | SD | Minimum | Maximum |
| --- | --- | --- | --- | --- |
| 10 | 29.61 $\pm$ 0.57 | 5.71 | 20.05 | 46.05 |
| 20 | 21.21 $\pm$ 0.34 | 3.37 | 13.80 | 31.81 |
| 100 | 12.43 $\pm$ 0.15 | 1.53 | 9.29 | 16.25 |
| 600 | 6.94 $\pm$ 0.05 | 0.49 | 5.72 | 8.44 |
| 1000 | 6.07 $\pm$ 0.03 | 0.28 | 5.24 | 6.78 |
| 3000 | 4.68 $\pm$ 0.02 | 0.19 | 4.22 | 5.29 |
| 60,000 | 2.94 $\pm$ 0.01 | 0.08 | 2.75 | 3.14 |
Table 10: ff$+$-NeSi on MNIST.

| Number of Labels | Mean Test Error | SD | Minimum | Maximum |
| --- | --- | --- | --- | --- |
| 10 | 10.91 $\pm$ 0.86 | 8.64 | 3.96 | 53.15 |
| 20 | 7.23 $\pm$ 0.35 | 3.45 | 4.17 | 24.82 |
| 100 | 4.96 $\pm$ 0.08 | 0.82 | 3.84 | 9.13 |
| 600 | 4.08 $\pm$ 0.02 | 0.17 | 3.68 | 4.73 |
| 1000 | 4.00 $\pm$ 0.01 | 0.12 | 3.76 | 4.38 |
| 3000 | 3.85 $\pm$ 0.01 | 0.11 | 3.64 | 4.14 |
Table 11: r$+$-NeSi on MNIST.

| Number of Labels | Mean Test Error | SD | Minimum | Maximum |
| --- | --- | --- | --- | --- |
| 10 | 18.68 $\pm$ 0.89 | 8.90 | 5.06 | 51.88 |
| 20 | 12.46 $\pm$ 0.73 | 7.31 | 4.89 | 39.70 |
| 100 | 4.93 $\pm$ 0.05 | 0.49 | 4.26 | 7.32 |
| 600 | 4.34 $\pm$ 0.01 | 0.15 | 3.87 | 4.78 |
| 1000 | 4.26 $\pm$ 0.01 | 0.12 | 3.97 | 4.62 |
| 3000 | 4.05 $\pm$ 0.01 | 0.10 | 3.84 | 4.29 |
Table 12: t-NeSi on MNIST.

| Number of Labels | Mean Test Error | SD | Minimum | Maximum |
| --- | --- | --- | --- | --- |
| 10 | 7.22 $\pm$ 0.53 | 5.33 | 3.60 | 26.71 |
| 20 | 6.21 $\pm$ 0.38 | 3.84 | 3.72 | 26.49 |
| 100 | 4.23 $\pm$ 0.07 | 0.68 | 3.58 | 6.88 |
| 600 | 3.65 $\pm$ 0.01 | 0.12 | 3.37 | 3.97 |
| 1000 | 3.63 $\pm$ 0.01 | 0.11 | 3.25 | 4.02 |
| 3000 | 3.52 $\pm$ 0.01 | 0.11 | 3.23 | 3.82 |
| 60,000 | 2.94 $\pm$ 0.01 | 0.08 | 2.72 | 3.12 |
Table 13: ff$+$-NeSi on NIST Digits.

| Number of Labels | Mean Test Error | SD | Minimum | Maximum |
| --- | --- | --- | --- | --- |
| 10 | 7.56 $\pm$ 1.76 | 5.67 | 5.52 | 23.46 |
| 20 | 6.15 $\pm$ 0.14 | 0.44 | 5.49 | 6.73 |
| 100 | 6.20 $\pm$ 0.16 | 0.51 | 5.49 | 7.08 |
| 600 | 6.02 $\pm$ 0.08 | 0.25 | 5.72 | 6.51 |
| 1000 | 6.02 $\pm$ 0.12 | 0.38 | 5.63 | 6.99 |
| 3000 | 5.70 $\pm$ 0.03 | 0.10 | 5.56 | 5.89 |
| 344,307 | 5.11 $\pm$ 0.01 | 0.03 | 5.06 | 5.16 |
Table 14: r$+$-NeSi on NIST Digits.

| Number of Labels | Mean Test Error | SD | Minimum | Maximum |
| --- | --- | --- | --- | --- |
| 10 | 9.84 $\pm$ 2.41 | 7.61 | 5.64 | 34.95 |
| 20 | 8.50 $\pm$ 2.09 | 6.62 | 5.46 | 27.29 |
| 100 | 6.14 $\pm$ 0.23 | 0.72 | 5.52 | 7.84 |
| 600 | 5.83 $\pm$ 0.14 | 0.45 | 5.43 | 6.50 |
| 1000 | 5.94 $\pm$ 0.12 | 0.39 | 5.46 | 6.49 |
| 3000 | 5.72 $\pm$ 0.10 | 0.33 | 5.52 | 6.63 |
| 344,307 | 4.52 $\pm$ 0.01 | 0.04 | 4.44 | 4.56 |
Table 15: t-NeSi on NIST Digits.

| Number of Labels | Mean Test Error | SD | Minimum | Maximum |
| --- | --- | --- | --- | --- |
| 10 | 5.71 $\pm$ 0.42 | 1.32 | 4.77 | 8.72 |
| 20 | 5.23 $\pm$ 0.15 | 0.49 | 4.75 | 5.88 |
| 100 | 5.26 $\pm$ 0.23 | 0.72 | 4.79 | 6.95 |
| 600 | 4.84 $\pm$ 0.02 | 0.07 | 4.76 | 4.93 |
| 1000 | 4.86 $\pm$ 0.03 | 0.09 | 4.69 | 5.01 |
| 3000 | 4.83 $\pm$ 0.02 | 0.08 | 4.64 | 4.93 |
| 344,307 | 4.50 $\pm$ 0.01 | 0.02 | 4.46 | 4.54 |
Table 16: ff$+$-NeSi on NIST Letters.

| Number of Labels | Mean Test Error | SD | Minimum | Maximum |
| --- | --- | --- | --- | --- |
| 52 | 55.70 $\pm$ 0.62 | 1.96 | 52.88 | 58.75 |
| 104 | 51.32 $\pm$ 0.79 | 2.49 | 48.21 | 55.96 |
| 520 | 46.22 $\pm$ 0.43 | 1.37 | 43.91 | 48.47 |
| 3120 | 44.24 $\pm$ 0.23 | 0.74 | 43.23 | 45.49 |
| 5200 | 43.69 $\pm$ 0.21 | 0.65 | 42.53 | 44.40 |
| 15,600 | 42.96 $\pm$ 0.28 | 0.88 | 41.55 | 44.38 |
| 387,361 | 34.66 $\pm$ 0.05 | 0.15 | 34.45 | 34.86 |
Table 17: r$+$-NeSi on NIST Letters.

| Number of Labels | Mean Test Error | SD | Minimum | Maximum |
| --- | --- | --- | --- | --- |
| 52 | 64.97 $\pm$ 0.85 | 2.70 | 60.88 | 69.71 |
| 104 | 60.32 $\pm$ 0.91 | 2.86 | 57.74 | 65.74 |
| 520 | 54.08 $\pm$ 0.38 | 1.21 | 51.71 | 55.89 |
| 3120 | 43.73 $\pm$ 0.15 | 0.47 | 42.99 | 44.62 |
| 5200 | 41.57 $\pm$ 0.13 | 0.42 | 40.90 | 42.21 |
| 15,600 | 37.95 $\pm$ 0.12 | 0.38 | 37.25 | 38.56 |
| 387,361 | 31.93 $\pm$ 0.06 | 0.18 | 31.63 | 32.17 |
Table 18: t-NeSi on NIST Letters.

| Number of Labels | Mean Test Error | SD | Minimum | Maximum |
| --- | --- | --- | --- | --- |
| 52 | 52.14 $\pm$ 1.07 | 3.39 | 45.26 | 56.70 |
| 104 | 48.46 $\pm$ 0.92 | 2.90 | 44.49 | 53.52 |
| 520 | 45.62 $\pm$ 0.43 | 1.37 | 42.87 | 47.83 |
| 3120 | 41.87 $\pm$ 0.32 | 1.03 | 39.70 | 43.48 |
| 5200 | 41.75 $\pm$ 0.36 | 1.12 | 39.77 | 43.18 |
| 15,600 | 41.13 $\pm$ 0.30 | 0.96 | 39.75 | 42.67 |
| 387,361 | 33.34 $\pm$ 0.04 | 0.14 | 33.12 | 33.63 |

## Appendix D:  Tunable Parameters of the Compared Algorithms

We list in Table 19 the tunable parameters of each method we compare against in Figures 7 and 8. For some of the methods, this estimate gives only a lower bound on the number of tunable parameters, as some parameters may have multiple instances, for example, one per added layer in the network. If a parameter was kept constant across all layers, we counted it as a single parameter, whereas parameters with differing values in different layers were counted as multiple parameters. An example is the constant number of hidden units in NN versus the differing numbers in the layers of the CNN. We also counted parameters that were not (explicitly) optimized in the corresponding papers themselves but were taken from other papers (e.g., parameters of the ADAM algorithm), or for which the reason for the specific choice is not given (as for specific network architectures).

Table 19: Tunable Hyperparameters of the Algorithms Compared in Figure 7.

| Method | Model Description and Tunable Hyperparameters | Count |
| --- | --- | --- |
| SVM | Standard supervised support vector machine. Soft margin parameter $C$ | 1 |
| TSVM | Semisupervised transductive support vector machine. Soft margin parameter $C$, data-similarity kernel parameter $\lambda$ | 2 |
| NN | Supervised neural network using stochastic gradient descent. Number of hidden layers (here: 2), number of hidden units (here: same per layer), learning rate(s) | $3+$ |
| AGR | AnchorGraph: semisupervised large graph with anchor-based label prediction using k-means cluster centers as anchors. Number of anchors $m$, number of nearest anchors $s$, regularization parameter $\gamma$, dimensionality reduction (for acceleration) | 3–4 |
| kNN | Semisupervised k-nearest neighbors. Number of neighbors $k$, weight function, algorithm, power parameter $p$ | 4 |
| NeSi (ours) | Neural network approximation of hierarchical Poisson mixtures. Number of middle-layer units $C$, input normalization constant $A$, learning rates $\epsilon_W$ and $\epsilon_R$, BvSB threshold $\vartheta$ (only for r$+$-, ff$+$-, and t-NeSi), $C'$ (truncation, only for t-NeSi) | 4–6 |
| AtlasRBF | Manifold learning of atlas-based kernels for SVMs. Chart penalty $\lambda$, softening parameter $\gamma$, RBF kernel parameter $\sigma$, number of neighbors $k$, local manifold dimensionality | 5 |
| Em$_{all}$NN | Neural network with nonlinear embedding using unlabeled data pairs. NN hyperparameters (see above; here: 10 layers), layers to embed (here: all), embedding parameter $\lambda$, distance parameter $m$ | 6 |
| CNN | Standard supervised convolutional neural network. Number of CNN layers (here: 6), patch size, pooling window size (2nd layer), neighborhood radius (4th layer), units of the 1st, 3rd, 5th, and 6th layers, learning rate | $\geq 9$ |
| M1$+$M2 | Generative model (2 hidden layers) parameterized with deep neural networks. M1: number of hidden layers (here: 2), number of hidden units per layer, number of samples from posterior; M2: number of hidden layers (here: 1), number of hidden units, $\alpha$; RMSProp: learning rate, first and second momenta | $\geq 10$ |
| DBN-rNCA | Deep belief network with regularized nonlinear neighbourhood components analysis; 4 stacks of RBMs, unrolled and fine-tuned as deep autoencoders. Number of layers (here: 4), number of hidden units per layer; RBM learning rate, momentum, weight decay, RBM epochs, NCA epochs, trade-off parameter $\lambda$ | $\geq 11$ |
| EmCNN | CNN with nonlinear embedding using unlabeled data pairs. CNN hyperparameters (see above), layers to embed (here: 5th layer), embedding parameter $\lambda$, distance parameter $m$ | $\geq 12$ |
| VAT | Virtual adversarial training: standard deep networks with local distributional smoothness (LDS) constraint. Number of layers (here: 2–4), number of hidden units per layer, LDS weighting $\lambda$, magnitude of virtual adversarial perturbation $\epsilon$, iteration times of power method $I_p$; ADAM (Kingma & Ba, 2014): learning rate $\alpha$, $\epsilon_{\mathrm{ADAM}}$, exponential decay rates $\beta_1$ and $\beta_2$; batch normalization (Ioffe & Szegedy, 2015): mini-batch size for labeled and mixed set | $\geq 12$ |
| Ladder | Per-layer denoising objective on standard deep networks (here: CNNs). Number of hidden layers (here: 5), number of hidden units per layer, noise level $n^{(l)}$, denoising cost multipliers $\lambda^{(l)}$ for each layer; ADAM (Kingma & Ba, 2014): learning rate $\alpha$, $\epsilon_{\mathrm{ADAM}}$, iterations until annealing phase, linear decay rate; batch normalization (Ioffe & Szegedy, 2015): mini-batch size | $\geq 18$ |

## Acknowledgments

We acknowledge funding by the German Research Foundation (DFG) in the Priority Program 1527 (Autonomous Learning), grant LU 1196/5-1, and within the Cluster of Excellence Hearing4all (EXC 1077/1). Furthermore, we acknowledge the use of the HPC cluster CARL of Oldenburg University, funded through INST 184/157-1 FUGG, the use of the GPU cluster GOLD, and support by the NVIDIA Corporation for a GPU card donation.

## References

Abbott
,
L. F.
, &
Nelson
,
S. B.
(
2000
).
Synaptic plasticity: Taming the beast
.
Nature Neuroscience
,
3
,
1178
1183
.
Bengio
,
Y.
,
Courville
,
A.
, &
Vincent
,
P.
(
2013
).
Representation learning: A review and new perspectives
.
IEEE Transactions on Pattern Analysis and Machine Intelligence
,
35
(
8
),
1798
1828
.
Bergstra
,
J.
,
Yamins
,
D.
, &
Cox
,
D.
(
2013
).
Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures
. In
Proceedings of the International Conference on Machine Learning
(pp.
115
123
).
Blum
,
M.
,
Floyd
,
R. W.
,
Pratt
,
V.
,
Rivest
,
R. L.
, &
Tarjan
,
R. E.
(
1973
).
Time bounds for selection
.
Journal of Computer and System Sciences
,
7
(
4
),
448
461
.
Bornschein
,
J.
, &
Bengio
,
Y.
(
2015
).
Reweighted wake-sleep
. In
Proceedings of the International Conference on Learning Representations
.
Amsterdam
:
Elsevier
.
Bruna
,
J.
, &
Mallat
,
S.
(
2013
).
Invariant scattering convolution networks
.
IEEE Transactions on Pattern Analysis and Machine Intelligence
,
35
(
8
),
1872
1886
.
Cemgil
,
A. T.
(
2009
).
Bayesian inference for nonnegative matrix factorisation models
.
Computational Intelligence and Neuroscience
,
2009
,
785152
.
Cheng
,
D.
,
Kannan
,
R.
,
Vempala
,
S.
, &
Wang
,
G.
(
2006
).
A divide-and-merge methodology for clustering
.
ACM Transactions on Database Systems
,
31
(
4
),
1499
1525
.
Cireşan
,
D.
,
Meier
,
U.
, &
Schmidhuber
,
J.
(
2012
).
Multi-column deep neural networks for image classification
. In
Proceedings of the Conference on Computer Vision and Pattern Recognition
(pp.
3642
3649
).
Piscataway, NJ
:
IEEE
.
Cohn
,
D. A.
,
Ghahramani
,
Z.
, &
Jordan
,
M. I.
(
1996
).
Active learning with statistical models
.
Journal of Artificial Intelligence Research
,
4
,
129
145
.
Collobert
,
R.
,
Sinz
,
F.
,
Weston
,
J.
, &
Bottou
,
L.
(
2006
).
Large scale transductive SVMs
.
Journal of Machine Learning Research
,
7
,
1687
1712
.
Cortes
,
C.
, &
Vapnik
,
V.
(
1995
).
Support-vector networks
.
Machine Learning
,
20
(
3
),
273
297
.
Dai
,
Z.
,
Exarchakis
,
G.
, &
Lücke
,
J.
(
2013
). What are the invariant occlusive components of image patches? A probabilistic generative approach. In
C. J. C.
Burges
,
L.
Bottou
,
M.
Welling
,
Z.
Ghahramani
, &
K. Q.
Weinberger
(Eds.),
Advances in neural information processing systems
,
26
(pp.
243
251
).
Red Hook, NY
:
Curran
.
Dai
,
Z.
, &
Lücke
,
J.
(
2014
).
Autonomous document cleaning: A generative approach to reconstruct strongly corrupted scanned texts
.
IEEE Transactions on Pattern Analysis and Machine Intelligence
,
36
(
10
),
1950
1962
.
Dayan
,
P.
, &
Abbott
,
L. F.
(
2001
).
Theoretical neuroscience
.
Cambridge, MA
:
MIT Press
.
Duda
,
R. O.
,
Hart
,
P. E.
, &
Stork
,
D. G.
(
2001
).
Pattern classification
(2nd ed.).
New York
:
Wiley-Interscience
.
Forster
,
D.
, &
Lücke
,
J.
(
2017
).
Truncated variational EM for semi-supervised neural simpletrons
. In
Proceedings of the International Joint Conference on Neural Networks
(pp.
3769
3776
).
Piscataway, NJ
:
IEEE
.
Forster
,
D.
,
Sheikh
,
A.-S.
, &
Lücke
,
J.
(
2015
).
Neural simpletrons: Minimalistic directed generative networks for learning with few labels
.
arXiv:1506.08448
.
Gal
,
Y.
, &
Ghahramani
,
Z.
(
2016
).
Bayesian convolutional neural networks with Bernoulli approximate variational inference
.
Gal
,
Y.
,
Islam
,
R.
, &
Ghahramani
,
Z.
(
2017
).
Deep Bayesian active learning with image data
. In
Proceedings of the International Conference on Machine Learning
(pp.
1183
1192
).
Gan
,
Z.
,
Henao
,
R.
,
Carlson
,
D.
, &
Carin
,
L.
(
2015
).
Learning deep sigmoid belief networks with data augmentation
. In
Proceedings of the International Conference on Artificial Intelligence and Statistics
(pp.
268
276
).
Goodfellow
,
I.
,
,
J.
,
Mirza
,
M.
,
Xu
,
B.
,
Warde-Farley
,
D.
,
Ozair
,
S.
, …
Bengio
,
Y.
(
2014
Z.
Ghahramani
,
M.
Welling
,
C.
Cortes
,
N. D.
Lawrence
, &
K. Q.
Weinberger
(Eds.),
Advances in neural information processing systems
,
27
(pp.
2672
2680
).
Red Hook, NY
:
Curran
.
Goodfellow
,
I. J.
,
Courville
,
A.
, &
Bengio
,
Y.
(
2013
).
Joint training deep Boltzmann machines for classification
.
arXiv:1301.3568
.
Grother
,
P. J.
(
1995
).
NIST special database 19 handprinted forms and characters database
.
National Institute of Standards and Technology
.
He
,
K.
,
Zhang
,
X.
,
Ren
,
S.
, &
Sun
,
J.
(
2016
).
Deep residual learning for image recognition
. In
Proceedings of the Conference on Computer Vision and Pattern Recognition
(pp.
770
778
).
Piscataway, NJ
:
IEEE
.
Henniges
,
M.
,
Turner
,
R. E.
,
Sahani
,
M.
,
Eggert
,
J.
, &
Lücke
,
J.
(
2014
).
Efficient occlusive components analysis
.
Journal of Machine Learning Research
,
15
,
2689
2722
.
Hinton
,
G.
,
Deng
,
L.
,
Yu
,
D.
,
Dahl
,
G. E.
,
Mohamed
,
A.-r.
,
Jaitly
,
N.
, …
Kingsburg
,
B.
(
2012
).
Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups
.
IEEE Signal Processing Magazine
,
29
(
6
),
82
97
.
Hinton
,
G. E.
,
Osindero
,
S.
, &
Teh
,
Y.-W.
(
2006
).
A fast learning algorithm for deep belief nets
.
Neural Computation
,
18
(
7
),
1527
1554
.
Holca-Lamarre
,
R.
,
Lücke
,
J.
, &
Obermayer
,
K.
(
2017
).
Models of acetylcholine and dopamine signals differentially improve neural representations
.
Frontiers in Computational Neuroscience
,
11
,
54
.
Huang
,
G.
,
Sun
,
Y.
,
Liu
,
Z.
,
Sedra
,
D.
, &
Weinberger
,
K. Q.
(
2016
).
Deep networks with stochastic depth
. In
Proceedings of the European Conference on Computer Vision
(pp.
646
661
).
Berlin
:
Springer
.
Hughes
,
M. C.
, &
Sudderth
,
E. B.
(
2016
).
Fast learning of clusters and topics via sparse posteriors
.
arXiv:1609.07521
.
Hutter
,
F.
,
Lücke
,
J.
, &
Schmidt-Thieme
,
L.
(
2015
).
Beyond manual tuning of hyperparameters
.
Künstliche Intelligenz
,
29
(
4
),
329
337
.
Ioffe
,
S.
, &
Szegedy
,
C.
(
2015
).
Batch normalization: Accelerating deep network training by reducing internal covariate shift
. In
Proceedings of the International Conference on Machine Learning
(pp.
448
456
).
Ivakhnenko
,
A. G.
, &
Lapa
,
V. G.
(
1965
).
Cybernetic predicting devices
(
Technical report
).
West Lafayette, IN
:
Purdue University School of Electrical Engineering
.
Jordan
,
M. I.
, &
Jacobs
,
R. A.
(
1994
).
Hierarchical mixtures of experts and the EM algorithm
.
Neural Computation
,
6
(
2
),
181
214
.
Joshi
,
A. J.
,
Porikli
,
F.
, &
Papanikolopoulos
,
N.
(
2009
).
Multi-class active learning for image classification
. In
Proceedings of the Conference on Computer Vision and Pattern Recognition
(pp.
2372
2379
).
Piscataway, NJ
:
IEEE
.
Keck
,
C.
,
Savin
,
C.
, &
Lücke
,
J.
(
2012
).
Feedforward inhibition and synaptic scaling: Two sides of the same coin
?
PLoS Computational Biology
,
8
,
e1002432
.
Kim
,
D.-K.
,
Der
,
M.
, &
Saul
,
L. K.
(
2014
).
A gaussian latent variable model for large margin classification of labeled and unlabeled data
. In
Proceedings of the International Conference on Artificial Intelligence and Statistics
(pp.
484
492
).
Kingma
,
D.
, &
Ba
,
J.
(
2014
).
Adam: A method for stochastic optimization
. In
Proceedings of the International Conference on Learning Representations (ICLR)
.
Kingma
,
D. P.
,
Mohamed
,
S.
,
Rezende
,
D. J.
, &
Welling
,
M.
(
2014
). Semi-supervised learning with deep generative models. In
Z.
Ghahramani
,
M.
Welling
,
C.
Cortes
,
N. D.
Lawrence
, &
K. Q.
Weinberger
(Eds.),
Advances in neural information processing systems
(pp.
3581
3589
).
Red Hook, NY
:
Curran
.
Lam
,
T. W.
, &
Ting
,
H. F.
(
2000
).
Selecting the k largest elements with parity tests
.
Discrete Applied Mathematics
,
101
(
1
),
187
196
.
Lang
,
K.
(
1995
).
Newsweeder: Learning to filter netnews
. In
Proceedings of the International Conference on Machine Learning
(pp.
331
339
).
Amsterdam
:
Elsevier
.
Larochelle
,
H.
, &
Bengio
,
Y.
(
2008
).
Classification using discriminative restricted Boltzmann machines
. In
Proceedings of the International Conference on Machine Learning
(pp.
536
543
).
New York
:
ACM
.
Larochelle
,
H.
, &
Murray
,
I.
(
2011
).
The neural autoregressive distribution estimator
. In
Proceedings of the International Conference on Artificial Intelligence and Statistics
(pp.
29
37
).
New York
:
ACM
.
LeCun
,
Y.
,
Bottou
,
L.
,
Bengio
,
Y.
, &
Haffner
,
P.
(
1998
).
Gradient-based learning applied to document recognition
.
Proceedings of the IEEE
,
86
(
11
),
2278
2324
.
Lee
,
D.-H.
(
2013
).
Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks
. In
Workshop on Challenges in Representation Learning
(Vol. 3, p. 2)
.
Lee
,
D. D.
, &
Seung
,
H. S.
(
1999
).
Learning the parts of objects by non-negative matrix factorization
.
Nature
,
401
(
6755
),
788
791
.
Liu
,
W.
,
He
,
J.
, &
Chang
,
S.-F.
(
2010
).
Large graph construction for scalable semi-supervised learning
. In
Proceedings of the International Conference on Machine Learning
(pp.
679
686
). Omnipress.
Lücke
,
J.
(
2016
).
Truncated variational expectation maximization
.
arXiv:1610.03113
.
Lücke
,
J.
, &
Eggert
,
J.
(
2010
).
Expectation Truncation and the benefits of preselection in training generative models
.
Journal of Machine Learning Research
,
11
,
2855
2900
.
Lücke
,
J.
, &
Sahani
,
M.
(
2008
).
Maximal causes for non-linear component extraction
.
Journal of Machine Learning Research
,
9
,
1227
1267
.
Miyato
,
T.
,
Maeda
,
S.-I.
,
Koyama
,
M.
,
Nakae
,
K.
, &
Ishii
,
S.
(
2016
).
Distributional smoothing with virtual adversarial training
. In
Proceedings of the International Conference on Learning Representations
.
Neal
,
R. M.
, &
Hinton
,
G. E.
(
1998
). A view of the EM algorithm that justifies incremental, sparse, and other variants. In
M. I.
Jordan
(Ed.),
Learning in graphical models
(pp.
355
368
).
Berlin
:
Springer
.
Neftci
,
E. O.
,
Pedroni
,
B. U.
,
Joshi
,
S.
,
Al-Shedivat
,
M.
, &
Cauwenberghs
,
G.
(
2015
).
Unsupervised learning in synaptic sampling machines
.
arXiv:1511.04484
.
Nessler
,
B.
,
Pfeiffer
,
M.
,
Buesing
,
L.
, &
Maass
,
W.
(
2013
).
Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity
.
PLoS Computational Biology
,
9
(
4
),
e1003037
.
Nessler
,
B.
,
Pfeiffer
,
M.
, &
Maass
,
W.
(
2009
). STDP enables spiking neurons to detect hidden causes of their inputs. In
Y.
Bengio
,
D.
Schuurmans
,
J. D.
Lafferty
,
C. K. I.
Williams
, &
A.
Culotta
(Eds.),
Advances in neural information processing systems
,
22
(pp.
1357
1365
).
Red Hook, NY
:
Curran
.
Patel
,
A. B.
,
Nguyen
,
T.
, &
Baraniuk
,
R. G.
(
2016
). A probabilistic theory of deep learning. In
D. D.
Lee
,
M.
Sugiyama
,
U. V.
Luxburg
,
I.
Guyon
, &
R.
Garnett
(Eds.),
Advances in neural information processing systems
,
29
(pp.
2558
2566
).
Red Hook, NY
:
Curran
.
Pitelis
,
N.
,
Russell
,
C.
, &
Agapito
,
L.
(
2014
). Semi-supervised learning using an unsupervised atlas. In
T.
Calders
,
F.
Espogito
,
E.
Hüllermeier
, &
R.
Meo
(Eds.),
Machine Learning and Knowledge Discovery in Databases
(pp.
565
580
).
Berlin
:
Springer
.
Ranzato, M., & Szummer, M. (2008). Semi-supervised learning of compact document representations with deep networks. In Proceedings of the International Conference on Machine Learning (pp. 792–799). New York: ACM.
Rasmus, A., Berglund, M., Honkala, M., Valpola, H., & Raiko, T. (2015). Semi-supervised learning with ladder networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, & R. Garnett (Eds.), Advances in neural information processing systems, 28 (pp. 3532–3540). Red Hook, NY: Curran.
Rifai, S., Dauphin, Y. N., Vincent, P., Bengio, Y., & Muller, X. (2011). The manifold tangent classifier. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, & K. Q. Weinberger (Eds.), Advances in neural information processing systems, 24 (pp. 2294–2302). Red Hook, NY: Curran.
Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386–408.
Salakhutdinov, R., & Hinton, G. E. (2007). Learning a nonlinear embedding by preserving class neighbourhood structure. In Proceedings of the International Conference on Artificial Intelligence and Statistics (pp. 412–419).
Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., & Chen, X. (2016). Improved techniques for training GANs. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, & R. Garnett (Eds.), Advances in neural information processing systems, 29 (pp. 2226–2234). Red Hook, NY: Curran.
Saul, L. K., Jaakkola, T., & Jordan, M. I. (1996). Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4(1), 61–76.
Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61, 85–117.
Settles, B. (2011). Closing the loop: Fast, interactive semi-supervised annotation with queries on features and instances. In Empirical Methods in Natural Language Processing (EMNLP) (pp. 1467–1478). Stroudsburg, PA: Association for Computational Linguistics.
Sheikh, A.-S., Shelton, J. A., & Lücke, J. (2014). A truncated EM approach for spike-and-slab sparse coding. Journal of Machine Learning Research, 15, 2653–2687.
Shelton, J. A., Gasthaus, J., Dai, Z., Lücke, J., & Gretton, A. (2014). GP-select: Accelerating EM using adaptive subspace preselection. arXiv:1412.3411. Also in Neural Computation, 29(8), 2177–2202 (2017).
Sparck Jones, K. (1972). A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28(1), 11–21.
Srivastava, N., Hinton, G. E., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1), 1929–1958.
Srivastava, N., Salakhutdinov, R. R., & Hinton, G. E. (2013). Modeling documents with deep Boltzmann machines. In Proceedings of the Conference on Uncertainty in Artificial Intelligence. Arlington, VA: AUAI Press.
Thornton, C., Hutter, F., Hoos, H. H., & Leyton-Brown, K. (2013). Auto-WEKA: Combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the ACM International Conference on Knowledge Discovery and Data Mining (pp. 847–855). New York: ACM.
Triguero, I., García, S., & Herrera, F. (2015). Self-labeled techniques for semi-supervised learning: Taxonomy, software and empirical study. Knowledge and Information Systems, 42(2), 245–284.
Van den Oord, A., & Schrauwen, B. (2014). Factoring variations in natural images with deep Gaussian mixture models. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, & K. Q. Weinberger (Eds.), Advances in neural information processing systems, 27 (pp. 3518–3526). Red Hook, NY: Curran.
Vapnik, V. (1998). Statistical learning theory. New York: Wiley.
Wang, S., & Manning, C. D. (2012). Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Meeting of the Association for Computational Linguistics (pp. 90–94). Stroudsburg, PA: ACL.
Weston, J., Ratle, F., Mobahi, H., & Collobert, R. (2012). Deep learning via semi-supervised embedding. In G. Montavon, G. B. Orr, & K.-R. Müller (Eds.), Neural networks: Tricks of the trade (pp. 639–655). Berlin: Springer.
Zhu, X., Ghahramani, Z., & Lafferty, J. (2003). Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of the International Conference on Machine Learning (vol. 3, pp. 912–919).