Abstract

Deep belief networks (DBN) are generative models with many layers of hidden causal variables, recently introduced by Hinton, Osindero, and Teh (2006), along with a greedy layer-wise unsupervised learning algorithm. Building on Le Roux and Bengio (2008) and Sutskever and Hinton (2008), we show that deep but narrow generative networks do not require more parameters than shallow ones to achieve universal approximation. Exploiting the same proof technique, we prove that deep but narrow feedforward neural networks with sigmoidal units can represent any Boolean expression.

1  Introduction

Sigmoidal belief networks are generative probabilistic models with hidden variables organized in layers, with each hidden variable conditionally binomial given the values of variables in the layer above and the conditional probability taking the form of a traditional sigmoidal neuron (Neal, 1992). Upper layers represent more “abstract” concepts that explain the input observation x, whereas lower layers are expected to extract “low-level features” from x.

Deep belief networks (DBN; Hinton, Osindero, & Teh, 2006) are particular sigmoidal belief networks for which a clever and successful learning algorithm has been proposed, using as a building block a restricted Boltzmann machine (RBM; Smolensky, 1986; Freund & Haussler, 1991), representing one layer of the model. Although computing the exact log-likelihood gradient of an RBM is intractable, RBMs have been successfully trained using estimators of the log-likelihood gradient such as the contrastive divergence algorithm (Hinton, 2002; Hinton et al., 2006) and the persistent contrastive divergence algorithm (Tieleman, 2008). The algorithm proposed in Hinton et al. (2006) to train a DBN is a greedy layer-wise training algorithm in which the kth layer is first trained as an RBM modeling the output (samples from the posterior) of layer k − 1. This greedy layer-wise strategy has been found to work for other similar unsupervised models for deep architectures (Bengio, 2009), based on improved auto-encoder variants (Bengio, Lamblin, Popovici, & Larochelle, 2007; Ranzato, Poultney, Chopra, & LeCun, 2007; Vincent, Larochelle, Bengio, & Manzagol, 2008). One of the motivations for deep architectures is that they can sometimes be exponentially more efficient than shallow ones in terms of number of elements needed to represent a function. More precisely, there are functions that can be represented compactly with a neural network of depth k but that would require exponential size (with respect to input size) networks of depth k − 1 (Hastad & Goldmann, 1991; Bengio, 2009). This raises the question: Can we have guarantees of the representational abilities of models with potentially deep architectures such as DBNs? Sutskever and Hinton (2008) showed that a deep but narrow DBN (with only n + 1 units per layer and n binary inputs) can represent any distribution on its input, with no more than 3 × 2^n layers.
Unfortunately, this makes the number of parameters on the order of 3(n + 1)^2 2^n, larger than the number of parameters required by a single-level but very fat DBN (i.e., an RBM), which is on the order of n·2^n. This is the starting point for this letter.
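These orders of magnitude are easy to check numerically. The sketch below simply transcribes the two counts quoted above (weights only, bias terms ignored); the function names are ours:

```python
def sutskever_hinton_params(n):
    # 3 * 2^n layers of n + 1 units each: about 3 * (n + 1)^2 * 2^n weights
    return 3 * (n + 1) ** 2 * 2 ** n

def fat_rbm_params(n):
    # a single RBM with ~2^n hidden units and n visible units: about n * 2^n weights
    return n * 2 ** n

# The deep-and-narrow construction of Sutskever and Hinton (2008) costs
# roughly 3 * (n + 1)^2 / n times more parameters than the single fat RBM.
overhead = {n: sutskever_hinton_params(n) // fat_rbm_params(n) for n in (4, 8, 16)}
```

The overhead grows linearly with n, which is exactly the gap the constructions below close.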

The main results are the following. If n is a power of 2 (n = 2^t), then a DBN composed of 2^n/n + 1 layers of size n is a universal approximator for distributions over {0, 1}^n. This improves on the 3 × 2^n upper bound on the number of layers already shown in Sutskever and Hinton (2008) and makes the number of parameters of a DBN universal approximator no worse than the number of parameters of a single RBM. It remains to be shown whether this is also a lower bound on the number of parameters required to achieve universal approximation. If it were true, it would imply (as we expected) that the large efficiency gains one can potentially obtain with DBNs are not universal but obtainable only for some specific target functions.

Using the same technique, the letter also shows that a deep but narrow feedforward deterministic neural network can represent any function from {0, 1}^n to {0, 1}, a slight improvement over the result proved in Rojas (2003).

2  Deep Belief Nets

In this section we briefly review the deep belief net (DBN) model as proposed in Hinton et al. (2006) and introduce notation for the rest of the letter.

Let h^i represent the vector of hidden variables at layer i. The model is parameterized as follows:
P(x, h^1, …, h^ℓ) = P(h^{ℓ−1}, h^ℓ) (∏_{i=1}^{ℓ−2} P(h^i | h^{i+1})) P(x | h^1),
where all the conditional layers P(h^i | h^{i+1}) are factorized conditional distributions for which the computation of probability and sampling is very easy. In Hinton et al. (2006), one considers the hidden layer h^i a binary random vector with elements h^i_j and
P(h^i | h^{i+1}) = ∏_{j=1}^{n_i} P(h^i_j | h^{i+1}),
2.1
with element h^i_j a stochastic neuron or unit, whose binary activation is 1 with probability
P(h^i_j = 1 | h^{i+1}) = sigm(b^i_j + ∑_k W^i_{jk} h^{i+1}_k),
2.2
where sigm(a) = 1/(1 + exp(−a)) is the usual sigmoidal activation function, the b^i_j are called the biases (for unit j of layer i), and W^i is called the weight matrix for layer i. If we denote h^0 = x, the generative model for the first layer P(x | h^1) also follows equations 2.1 and 2.2. The joint distribution P(h^{ℓ−1}, h^ℓ) of the top two layers is a restricted Boltzmann machine (RBM), described in Hinton et al. (2006). A DBN is thus a particular kind of sigmoidal belief network where the top-level prior comes from an RBM (see Figure 1).
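The generative pass defined by equations 2.1 and 2.2 amounts to ancestral sampling through the directed layers: sample the top layers, then repeatedly sample each layer given the one above until h^0 = x is reached. A minimal sketch with toy random weights (the layer sizes and weight values here are illustrative only, not taken from the text):

```python
import math
import random

def sigm(a):
    return 1.0 / (1.0 + math.exp(-a))

def sample_layer(h, W, b, rng):
    # One directed layer (eqs. 2.1-2.2): each unit j of the lower layer is an
    # independent Bernoulli with mean sigm(b[j] + sum_k W[j][k] * h[k]).
    return [1 if rng.random() < sigm(b[j] + sum(W[j][k] * h[k] for k in range(len(h))))
            else 0
            for j in range(len(b))]

def ancestral_sample(h_top, layers, rng):
    # Propagate a sample of the top layer down to the visible layer h^0 = x.
    h = h_top
    for W, b in layers:
        h = sample_layer(h, W, b, rng)
    return h

rng = random.Random(0)
n = 4
layers = [([[rng.gauss(0, 1) for _ in range(n)] for _ in range(n)],
           [rng.gauss(0, 1) for _ in range(n)]) for _ in range(2)]
x = ancestral_sample([rng.randrange(2) for _ in range(n)], layers, rng)
```

In a full DBN the top vector would itself be a sample from the top-level RBM rather than uniform noise.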
Figure 1:

Graphical representation of a deep belief network. The connection between the top layers is undirected, and the connections between all the lower layers are directed.


3  Gray Code Deep Belief Network

In this section we use one or more Gray code sequences (see section 3.2 for a definition of such sequences) in order to capture arbitrary discrete distributions with a DBN. This is inspired by the work of Sutskever and Hinton (2008) in which, by adding layers, one constructively changes the probability for one of the 2^n configurations so as to produce the desired distribution. To avoid confusion, we use different terms for probabilistic and deterministic changes of bits:

  • Switch refers to a deterministic change from 0 to 1 (or from 1 to 0).

  • Flip refers to a probabilistic change from 0 to 1 (or from 1 to 0).

Therefore, we shall say, “bit k is switched from 0 to 1” and “bit k is flipped from 0 to 1 with probability p.”

3.1  Overview.

Let us assume we are given an arbitrary target distribution p* over binary vectors of size n, which we want to capture with a DBN. The method proposed in Sutskever and Hinton (2008) is the following:

  • Define an arbitrary sequence (a1, a2, …, a_{2^n}) containing all the binary vectors in {0, 1}^n.

  • Let the top-level RBM (between layers h^ℓ and h^{ℓ−1}) assign probability 1 to a1.

  • Using a specific sigmoid belief network composed of three layers, generate a2 with probability 1 − p*(a1) and a1 with probability p*(a1) (specific details on the architecture of such a network are to be found in their paper), yielding the correct probability for a1. We shall refer to this operation as a transfer of probability mass from a1 to a2.

  • The subsequent three-layer sigmoid belief network acts as follows:

    1. If the vector presented to it is a2, transfer 1 − p*(a1) − p*(a2) of its probability mass to a3.

    2. Otherwise, copy the vector unchanged to the layer below.

  • Continue in the same way, transferring at each stage probability mass from ak to ak+1 while leaving the other vectors unchanged.

At the end of this procedure, all the mass has been appropriately distributed over the vectors a1, …, a_{2^n}, and the desired distribution p* is obtained.
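The bookkeeping of this procedure can be simulated abstractly, tracking only how much probability mass sits on each ak at each stage. A short sketch with a hypothetical target distribution p* (the flip probabilities are already folded into the amount of mass moved):

```python
import random

def transfer_chain(p_star):
    # Start with all probability on a_1; at step k, leave p*(a_k) on a_k and
    # transfer the remaining mass on to a_{k+1}, exactly the "transfer of
    # probability mass" described above.
    mass = [0.0] * len(p_star)
    mass[0] = 1.0
    for k in range(len(p_star) - 1):
        mass[k + 1] = mass[k] - p_star[k]
        mass[k] = p_star[k]
    return mass

random.seed(0)
raw = [random.random() for _ in range(8)]   # a hypothetical target p*
p_star = [v / sum(raw) for v in raw]
final = transfer_chain(p_star)
```

After the last step, `final` coincides with `p_star`: each vector has kept exactly its target mass.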

3.2  Using Gray Codes for Sharing.

Following Sutskever and Hinton (2008), all that is required to build a universal approximator is the ability to transfer probability mass from ak to ak+1 while leaving the probability of the other vectors unchanged, and this for all values of k. The goal of the following sections and associated theorems, and one of the contributions of this letter, is to show how, given appropriate sequences (ak), we can implement this sharing in an efficient way, that is, using a one-layer sigmoid belief network of size n instead of a three-layer network of size n + 1.

The sequences we will use are so-called Gray codes, which are sequences such that

  • ∪_k {ak} = {0, 1}^n,

  • ∀k s.t. 2 ⩽ k ⩽ 2^n, ‖ak − ak−1‖_H = 1, where ‖ · ‖_H is the Hamming distance,

where ∪ denotes the union of sets. There are many such codes (Gray, 1953).
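One standard family of such codes is the reflected binary Gray code. A small sketch that generates it recursively and checks both defining properties:

```python
def gray_code(n):
    # Reflected binary Gray code over n bits: prefix a 0 to the code over
    # n - 1 bits, then a 1 to the same code reversed.
    if n == 0:
        return [[]]
    shorter = gray_code(n - 1)
    return [[0] + v for v in shorter] + [[1] + v for v in reversed(shorter)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

code = gray_code(4)
assert len(set(map(tuple, code))) == 2 ** 4        # union is all of {0,1}^4
assert all(hamming(code[k], code[k - 1]) == 1      # one bit changes at a time
           for k in range(1, len(code)))
```

The reflected code is also cyclic (the last and first vectors differ in one bit), although the constructions below do not need that property.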

3.3  Single Sequence.

Here we go through the 2n configurations in a particular order, following a Gray code, such that only one bit is changed at a time.

Let us consider two consecutive layers h and v (i.e., there is some r such that h = hr+1 and v = hr) of size n with Wij the weight linking unit vi to unit hj, bi the bias of unit vi, and w a positive scalar.

We first provide a reminder that for every positive scalar ε (0 < ε < 1), there is a weight vector Wi,: and a real bi such that P(vi = hi | h) ⩾ 1 − ε (that is, the ith bit of h is copied to v with probability at least 1 − ε). Indeed, setting:

  • Wii = 2w

  • Wij = 0 for j ≠ i

  • bi = −w

yields a total input to unit vi of
I(vi, h) = 2w·hi − w = w(2hi − 1).
3.1
Therefore, if w ⩾ sigm^{−1}(1 − ε) = log((1 − ε)/ε), we have P(vi = hi | h) ⩾ 1 − ε.
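A quick numerical check of this copying construction; the threshold w = log((1 − ε)/ε) = sigm^{−1}(1 − ε) used below is the smallest weight achieving a copy probability of 1 − ε:

```python
import math

def sigm(a):
    return 1.0 / (1.0 + math.exp(-a))

def copy_prob(h_i, w):
    # With W_ii = 2w, W_ij = 0 for j != i and b_i = -w, the total input is
    # I(v_i, h) = w * (2 * h_i - 1)  (eq. 3.1), hence:
    p_one = sigm(w * (2 * h_i - 1))            # P(v_i = 1 | h)
    return p_one if h_i == 1 else 1.0 - p_one  # P(v_i = h_i | h)

eps = 1e-3
w = math.log((1 - eps) / eps)   # the boundary value: copy probability 1 - eps
assert abs(copy_prob(0, w) - (1 - eps)) < 1e-9
assert abs(copy_prob(1, w) - (1 - eps)) < 1e-9
```

Any larger w only pushes the copy probability closer to 1.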

Having proven this, we move on to the next, less obvious, theorem in order to control the transfer of probability mass for one input pattern with a single added layer.

Theorem 1.

Let at be an arbitrary binary vector in {0, 1}^n with its last bit equal to 0 and p a scalar. For every positive scalar ε (0 < ε < 1), there is a weight vector Wn,: and a real bn such that:

  • If the binary vector h is not equal to at, the last bit remains unchanged with probability greater than or equal to 1 − ε, that is, P(vn = hn | h ≠ at) ⩾ 1 − ε.

  • If the binary vector h is equal to at, its last bit is switched from 0 to 1 with probability sigm(p).

Proof.

For simplicity and without loss of generality, we will assume that the first k bits of at are equal to 1 and that the remaining n − k are equal to 0, with k < n.

We define the weights and biases as follows:

  • Wnj = w, 1 ⩽ j ⩽ k.

  • Wnj = −w, k + 1 ⩽ j ⩽ n − 1.

  • Wnn = nw.

  • bn = −kw + p.

The total input to vn is
I(vn, h) = w(∑_{j=1}^{k} hj − ∑_{j=k+1}^{n−1} hj + n·hn − k) + p.
3.2
If h = at, then
I(vn, at) = w(k − 0 + 0 − k) + p = p, so that P(vn = 1 | h = at) = sigm(p).
3.3

Otherwise, there are two possibilities:

  1. hn = 0, in which case I(vn, h) ⩽ −w + p.

  2. hn = 1, in which case I(vn, h) ⩾ w + p.

Again, if w is greater than |p| + sigm^{−1}(1 − ε) and h is different from at, then we have P(vn = hn | h ≠ at) ⩾ 1 − ε.

Summing up, the transformation performed by these parameters is:

  • If the vector h is different from at, leave the last bit unchanged with probability greater than 1 − ε.

  • If the vector h is equal to at, flip its last bit from 0 to 1 with probability sigm(p).

It is easy to change the parameters so that the flip would be from 1 to 0. We would simply need to set:

  • Wnj = −w, 1 ⩽ j ⩽ k.

  • Wnj = w, k + 1 ⩽ j ⩽ n − 1.

  • Wnn = nw.

  • bn = (k − n)w + p.

This could, of course, be done with any bit and not just the last one.

We have proven that for any layer, it is possible to keep all the bits but one unchanged (with a probability arbitrarily close to 1) and to change the remaining one (with some probability) only when h matches a certain vector. Consequently, if we find a sequence of vectors (ak) such that the difference between ai and ai+1 is only one bit, following the proof of Sutskever and Hinton (2008), we will have built a universal approximator. A Gray code is such a sequence.
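The weights of theorem 1 can be verified exhaustively for a small n. In the sketch below, the weight scale w = |p| + log((1 − ε)/ε) is our own choice of a sufficiently large value, not prescribed by the text:

```python
import math
from itertools import product

def sigm(a):
    return 1.0 / (1.0 + math.exp(-a))

def flip_prob(h, k, n, w, p):
    # Theorem 1 weights for a_t = 1^k 0^(n-k):
    # W_nj = w (j <= k), W_nj = -w (k < j <= n-1), W_nn = n*w, b_n = -k*w + p.
    I = w * (sum(h[:k]) - sum(h[k:n - 1]) + n * h[n - 1] - k) + p  # eq. 3.2
    return sigm(I)  # P(v_n = 1 | h)

n, k, p, eps = 4, 2, 0.3, 1e-3
a_t = (1, 1, 0, 0)
w = abs(p) + math.log((1 - eps) / eps)   # our choice: large enough for eps
for h in product((0, 1), repeat=n):
    q = flip_prob(h, k, n, w, p)
    if h == a_t:
        assert abs(q - sigm(p)) < 1e-9            # flips with probability sigm(p)
    else:
        keep = q if h[n - 1] == 1 else 1.0 - q    # probability the last bit is kept
        assert keep >= 1 - eps - 1e-9
```

Only the single vector a_t has its last bit flipped; every other configuration keeps it with probability at least 1 − ε.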

3.4  Multiple Sequences.

The previous method still requires 2^n + 1 layers of size n, which brings the total number of parameters to n^2 · (2^n + 1), approximately n^2 times more than the number of degrees of freedom of the distribution and n times more than the number of parameters required to model the distribution with a single RBM (see Le Roux & Bengio, 2008, for a proof).

The reader may have guessed where this factor n can be gained: instead of changing only one bit per layer (of size n), we should be able to change many of them, on the order of n. It would therefore be useful to build layers able to move from the kth vector to the (k + 1)th vector of n different Gray codes rather than just one. If we manage to do so, at every layer, n new vectors will have the correct probability mass, making it necessary to have only 2^n/n + 1 layers.

To achieve that goal, we will need to build a slightly more complicated layer. Let us again consider a sequence of two consecutive layers h and v of size n with Wij the weight linking unit vi to unit hj, bi the bias of unit vi and w a positive scalar.

Theorem 1 showed that a sequence of two consecutive layers could keep all the vectors but one unchanged (with probability arbitrarily close to 1) while flipping one bit of the last possible vector with some arbitrary probability. We will now show that what theorem 1 achieved with one vector can be achieved with two, provided the Hamming distance between these two vectors is exactly one.

Theorem 2.

Let at be an arbitrary binary vector in {0, 1}^n, with the last bit equal to 0, and ct the vector obtained when switching the first bit of at. Let p0 and p1 be two scalars and ε a positive scalar (0 < ε < 1). Then there is a weight vector Wn,: and a scalar bn such that:

  • If the vector h is not equal to at or to ct, the last bit remains unchanged with probability greater than 1 − ε, that is, P(vn = hn | h) ⩾ 1 − ε.

  • If the vector h is equal to at, its last bit is flipped from 0 to 1 with probability sigm(p0).

  • If the vector h is equal to ct, its last bit is flipped from 0 to 1 with probability sigm(p1).

Proof.

Again, for simplicity and without loss of generality, we will assume that the first k bits of at are equal to 1 and that the remaining n − k are equal to 0, with k < n.

We now define the weights and biases as follows:

  • Wn1 = p0 − p1.

  • Wnj = w, 2 ⩽ j ⩽ k.

  • Wnj = −w, k + 1 ⩽ j ⩽ n − 1.

  • Wnn = nw.

  • bn = −(k − 1)w + p1.

The total input to vn is
I(vn, h) = (p0 − p1)h1 + w(∑_{j=2}^{k} hj − ∑_{j=k+1}^{n−1} hj + n·hn − (k − 1)) + p1.
3.4
If h is equal to at, then
I(vn, at) = (p0 − p1) + (k − 1)w − (k − 1)w + p1 = p0, so that P(vn = 1 | h = at) = sigm(p0).
3.5
If h is equal to ct, then
I(vn, ct) = (k − 1)w − (k − 1)w + p1 = p1, so that P(vn = 1 | h = ct) = sigm(p1).
3.6

Otherwise there are two possibilities:

  1. hn = 0, in which case I(vn, h) ⩽ −w + max(p0, p1).

  2. hn = 1, in which case I(vn, h) ⩾ 2w + min(p0, p1).

If w is equal to max(|p0|, |p1|) + sigm^{−1}(1 − ε), we have
sigm(−w + max(p0, p1)) ⩽ ε   and   sigm(2w + min(p0, p1)) ⩾ 1 − ε.
Therefore, if h is different from at and from ct, then P(vn = hnh) ⩾ 1 − ε.
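As for theorem 1, the construction can be checked exhaustively for a small n; the weight scale w = max(|p0|, |p1|) + log((1 − ε)/ε) below is our choice of a sufficiently large value:

```python
import math
from itertools import product

def sigm(a):
    return 1.0 / (1.0 + math.exp(-a))

def total_input(h, k, n, w, p0, p1):
    # Theorem 2 weights for a_t = 1^k 0^(n-k) and c_t = a_t with bit 1 switched:
    # W_n1 = p0 - p1, W_nj = w (2 <= j <= k), W_nj = -w (k < j <= n-1),
    # W_nn = n*w, b_n = -(k-1)*w + p1   (eq. 3.4).
    return ((p0 - p1) * h[0]
            + w * (sum(h[1:k]) - sum(h[k:n - 1]) + n * h[n - 1] - (k - 1))
            + p1)

n, k, p0, p1, eps = 4, 2, 0.7, -0.2, 1e-3
a_t, c_t = (1, 1, 0, 0), (0, 1, 0, 0)
w = max(abs(p0), abs(p1)) + math.log((1 - eps) / eps)
for h in product((0, 1), repeat=n):
    q = sigm(total_input(h, k, n, w, p0, p1))  # P(v_n = 1 | h)
    if h == a_t:
        assert abs(q - sigm(p0)) < 1e-9        # eq. 3.5
    elif h == c_t:
        assert abs(q - sigm(p1)) < 1e-9        # eq. 3.6
    else:
        keep = q if h[n - 1] == 1 else 1.0 - q
        assert keep >= 1 - eps - 1e-9
```

The two vectors at and ct, which differ only in their first bit, get two independent flip probabilities from a single layer.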

Figure 2 shows such a layer.

Figure 2:

Representation of the layers used in theorem 2.


We will now consider n Gray code sequences in parallel, allowing ourselves to transfer probability mass to n new vectors at each layer. We will focus on sequences of n bits where n is a power of 2, that is, n = 2^t.

Theorem 3.

Let n = 2^t. There exist n sequences of vectors of n bits, Si, 0 ⩽ i ⩽ n − 1, composed of vectors Si,k, 1 ⩽ k ⩽ 2^{n−t}, satisfying the following conditions:

  1. {S0, …, Sn−1} is a partition of the set of all vectors of n bits.

  2. For every i in {0, …, n − 1} and every k in {1, …, 2^{n−t} − 1}, the Hamming distance between Si,k and Si,k+1 is 1.

  3. For every (i, j) in {0, …, n − 1}^2 such that i ≠ j, and for every k in {1, …, 2^{n−t} − 1}, the bit switched between Si,k and Si,k+1 and the bit switched between Sj,k and Sj,k+1 are different, unless the Hamming distance between Si,k and Sj,k is 1.

Proof.
We will prove this theorem by construction. Let G_{n−t} be a Gray code over n − t bits and G^i_{n−t} the same Gray code where every vector has been shifted by i bits to the right (the i right-most bits being moved to the beginning of the vector), for instance:
G_2 = (00, 01, 11, 10),   G^1_2 = (00, 10, 11, 01).
The first t bits of every vector in the sequence Si will be the binary representation of i over t bits. For 0 ⩽ i ⩽ n/2 − 1, the last n − t bits of Si will be G^i_{n−t}. For n/2 ⩽ i ⩽ n − 1, the last n − t bits of Si will be G^{i−n/2}_{n−t}. We emphasize that one should not confuse the index of the sequence (which runs from 0 to n − 1) with the shift of the Gray code (which runs from 0 to n/2 − 1). Therefore, no sequence is shifted by more than n/2 − 1 bits to the right. Here are the four sequences for t = 2:
S0 = (0000, 0001, 0011, 0010)
S1 = (0100, 0110, 0111, 0101)
S2 = (1000, 1001, 1011, 1010)
S3 = (1100, 1110, 1111, 1101)
One can see that condition 1 is satisfied. Indeed, let x be a vector over n bits. Let i be the value represented by its first t bits. Since 0 ⩽ i ⩽ n − 1 (because n = 2^t), the first t bits of x match the first t bits of every vector in Si. Then, no matter what its remaining n − t bits are, they will appear exactly once in the shifted Gray code associated with Si, since such a code lists all the vectors of n − t bits. Therefore, x will appear exactly once in Si and will not appear in the other sequences.

Condition 2 is trivially satisfied by construction since, within each sequence, the first t bits do not change and the last n − t bits form a Gray code.

Since a Gray code changes only one bit at a time, for every k in {1, …, 2^{n−t} − 1}, the bit changed between the kth vector and the (k + 1)th vector of G^i_{n−t} and the bit changed between the kth vector and the (k + 1)th vector of G^j_{n−t} are different, unless the two codes have the same shift. Since the shifts run from 0 to n/2 − 1 and n/2 − 1 < n − t for t ⩾ 1, two sequences use the same shifted code only for pairs of sequences (Si, Si+n/2), 0 ⩽ i ⩽ n/2 − 1. Such sequences share the same Gray code on the last n − t bits, and their first t bits differ only in one position (the first one). Therefore, condition 3 is also satisfied.
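The construction in this proof is short enough to state as code. The sketch below builds the n = 4 sequences and checks all three conditions of theorem 3 (the right-rotation convention for the shift follows the example above):

```python
def gray_code(m):
    # Reflected binary Gray code over m bits.
    if m == 0:
        return [[]]
    g = gray_code(m - 1)
    return [[0] + v for v in g] + [[1] + v for v in reversed(g)]

def rotate(v, i):
    # Shift a vector i bits to the right, the i right-most bits wrapping around.
    return v[-i:] + v[:-i] if i else v

def sequences(t):
    # The n = 2^t sequences of theorem 3: a fixed t-bit prefix (the binary
    # representation of i) followed by the Gray code over n - t bits, shifted
    # by i mod n/2 positions.
    n = 2 ** t
    g = gray_code(n - t)
    return [[[int(b) for b in format(i, '0{}b'.format(t))] + rotate(v, i % (n // 2))
             for v in g] for i in range(n)]

S = sequences(2)   # n = 4
n = 4
all_vecs = {tuple(v) for seq in S for v in seq}
assert len(all_vecs) == 2 ** n                      # condition 1: a partition
for seq in S:                                       # condition 2: Gray steps
    assert all(sum(a != b for a, b in zip(seq[k], seq[k + 1])) == 1
               for k in range(len(seq) - 1))
for i in range(n):                                  # condition 3
    for j in range(i + 1, n):
        for k in range(len(S[i]) - 1):
            bit_i = next(b for b in range(n) if S[i][k][b] != S[i][k + 1][b])
            bit_j = next(b for b in range(n) if S[j][k][b] != S[j][k + 1][b])
            ham = sum(a != b for a, b in zip(S[i][k], S[j][k]))
            assert bit_i != bit_j or ham == 1
```

The same checks pass for larger t (e.g., t = 3, n = 8), at the cost of enumerating 2^n vectors.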

Before proving the last theorem, we introduce the following lemma, which gives us the correct probabilities to assign to each vector at each layer:

Lemma 1.

Let p* be an arbitrary distribution over vectors of n bits, where n is again a power of two (n = 2^t). A DBN with 2^{n−t} + 1 layers of size n such that:

  1. For each i, 0 ⩽ i ⩽ n − 1, the top RBM (between layers h^{2^{n−t}} and h^{2^{n−t}−1}) assigns probability ∑_k p*(Si,k) to Si,1, where the Si,k are the same as in theorem 3.

  2. For each i, 0 ⩽ i ⩽ n − 1, and each k, 1 ⩽ k ⩽ 2^{n−t} − 1, writing m = 2^{n−t} − k, we have:
     P(h^{m−1} = Si,k+1 | h^m = Si,k) = (∑_{j=k+1}^{2^{n−t}} p*(Si,j)) / (∑_{j=k}^{2^{n−t}} p*(Si,j))
     P(h^{m−1} = Si,k | h^m = Si,k) = p*(Si,k) / (∑_{j=k}^{2^{n−t}} p*(Si,j))

  3. For each k, 1 ⩽ k ⩽ 2^{n−t} − 1, writing m = 2^{n−t} − k, we have:
     P(h^{m−1} = v | h^m = v) = 1 for every vector v not equal to any Si,k, 0 ⩽ i ⩽ n − 1

has p* as its marginal distribution over h^0.

Proof.

Let x be an arbitrary vector over n bits. According to theorem 3, there is a pair (i, k) such that x = Si,k. This DBN is such that, for all i and all k, if h^m = Si,k, then either h^{m−1} = Si,k or h^{m−1} = Si,k+1. Therefore, to have h^0 = Si,k, all the hidden layers must contain a vector belonging to the ith sequence. In fact, there is only one sequence of hidden states that can lead to h^0 = Si,k:

  • h^{2^{n−t}−m} = Si,m for 1 ⩽ m ⩽ k,

  • h^m = Si,k for 0 ⩽ m ⩽ 2^{n−t} − k.

The marginal probability of h0 = Si,k is therefore the probability of such a sequence, which is equal to
P(h^0 = Si,k) = P(h^{2^{n−t}−1} = Si,1) × ∏_{m=1}^{k−1} P(h^{2^{n−t}−m−1} = Si,m+1 | h^{2^{n−t}−m} = Si,m) × P(h^{2^{n−t}−k−1} = Si,k | h^{2^{n−t}−k} = Si,k)
3.10
= (∑_{j=1}^{2^{n−t}} p*(Si,j)) × ∏_{m=1}^{k−1} [(∑_{j=m+1}^{2^{n−t}} p*(Si,j)) / (∑_{j=m}^{2^{n−t}} p*(Si,j))] × p*(Si,k) / (∑_{j=k}^{2^{n−t}} p*(Si,j))
3.11
= p*(Si,k)
3.12
The last result stems from the cancellation of consecutive terms in the product. This concludes the proof.
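The telescoping argument can be checked numerically. The sketch below evaluates the product of lemma 1's conditional probabilities, restricted for simplicity to a single sequence carrying all the mass (a hypothetical p* over 8 vectors):

```python
import random

def marginal(p_seq, k):
    # Eqs. 3.10-3.12 restricted to one sequence: top-RBM mass on S_{i,1},
    # times the chain of transfer terms, times the final "stay" term
    # (the subsequent copies happen with probability 1).
    tail = lambda m: sum(p_seq[m - 1:])      # sum of p*(S_{i,j}) for j >= m
    prob = tail(1)                           # P(h = S_{i,1}) under the top RBM
    for m in range(1, k):
        prob *= tail(m + 1) / tail(m)        # transfer S_{i,m} -> S_{i,m+1}
    return prob * p_seq[k - 1] / tail(k)     # stay on S_{i,k}

random.seed(1)
raw = [random.random() for _ in range(8)]    # hypothetical p* values on one sequence
p_seq = [v / sum(raw) for v in raw]
for k in range(1, len(p_seq) + 1):
    assert abs(marginal(p_seq, k) - p_seq[k - 1]) < 1e-12
```

Every consecutive pair of tail sums cancels, leaving exactly p*(Si,k), as in equation 3.12.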

This brings us to the last theorem:

Theorem 4.

If n = 2^t, a DBN composed of 2^{n−t} + 1 = 2^n/n + 1 layers of size n is a universal approximator of distributions over vectors of size n.

Proof.

Using lemma 1, we now show that it is possible to construct such a DBN. First, Le Roux and Bengio (2008) showed that an RBM with n hidden units can model any distribution that assigns a nonzero probability to at most n vectors. Property 1 of lemma 1 can therefore be achieved.

All the subsequent layers are as follows:

  • At each layer, the first t bits of h^{k+1} are copied to the first t bits of h^k with probability arbitrarily close to 1. This is possible, as proven in section 3.3.

  • At each layer, n/2 of the remaining n − t bits are potentially changed to move from one vector in a Gray code sequence to the next with the correct probability (as defined in lemma 1). Each of these n/2 bits will change only if the vector on h^{k+1} matches one of two possibilities (see theorem 3), which is possible (see theorem 2).

  • The remaining n/2 − t bits are copied from h^{k+1} to h^k with probability arbitrarily close to 1.

Such layers are arbitrarily close to fulfilling the requirements of the second property of lemma 1.

4  Universal Discriminative Model

In this section we exploit the proof technique developed above in order to prove universal approximation properties for deep but narrow feedforward neural network binary classifiers with binary inputs, that is, every function from Hn = {0, 1}^n to {0, 1} can be modeled by a feedforward neural network composed of 2^{n−1} + 1 layers of size n.

The universal approximation property of deep and narrow feedforward neural networks has already been proven in Rojas (2003), which shows that one may solve any two-class classification problem using nested convex polytopes, each of these being modeled by a stack of perceptrons. The final result is that one can model any function from Hn to {0, 1} using a feedforward neural network, provided that:

  • Each layer receives one bit of information from the layer below.

  • Each layer is connected to the input.

If we were to build an equivalent network where each layer is connected only to the layer below, we would need n + 1 hidden units per layer (n for the input and one for the extra bit of information provided by the layer below in that architecture). This section proves the same property with hidden layers of size n.

As in section 3.3, we first consider a sequence of two consecutive layers h and v of size n with Wij the weight linking unit vi to unit hj and bi the bias of unit vi. The model is a sigmoid belief network directed from h to v.

Theorem 5 (arbitrary projection).
Let W0,: be a vector of R^n, b0 a scalar, ε a positive scalar (0 < ε < 1), and
S = {h ∈ Hn | W0,:^T h + b0 > 0},
4.1
where Hn is the binary hypercube in dimension n.
Then, for all i, 1 ⩽ i ⩽ n, there exists a weight vector Wi,: and a scalar bi such that
P(vi = 1 | h) ⩾ 1 − ε if h ∈ S,   and   P(vi = hi | h) ⩾ 1 − ε if h ∉ S.

Proof.
Let us define
t1 = min_{h ∈ S} (W0,:^T h + b0)   and   t2 = max_{h ∈ Hn} |W0,:^T h + b0|.
Since S is a finite set, t1 is strictly positive. Since S is included in Hn, t2 is finite. Let w be a positive scalar. Defining the weights and bias as follows,

  • Wij = w·W0,j for j ≠ i,

  • Wii = w·W0,i + w(t1 + t2),

  • bi = w·b0 − w·t1/2,

we have
I(vi, h) = w(W0,:^T h + b0) + w(t1 + t2)hi − w·t1/2.
Therefore,
I(vi, h) ⩾ w·t1/2 + w(t1 + t2)hi if h ∈ S,   and   I(vi, h) ⩽ −w·t1/2 + w(t1 + t2)hi otherwise.
The last inequality stems from the fact that if h is not in S, then W0,:^T h + b0 ⩽ 0.

Since w(t1 + t2)hi ⩾ 0, the quantity w·t1/2 + w(t1 + t2)hi is always strictly positive no matter what the value of hi is. Thus, if h is in S, the argument of the sigmoid is always strictly positive.

If h is not in S and hi = 0, then I(vi, h) ⩽ −w·t1/2. If h is not in S and hi = 1, then I(vi, h) ⩾ w(t1 + t2) − w·t2 − w·t1/2 = w·t1/2.

When w tends to +∞, these probabilities tend to 1. Therefore, for all ε such that 0 < ε < 1, there exists a scalar C such that if w > C, these probabilities are larger than 1 − ε.
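A numerical check of this "arbitrary projection" layer on a small hypercube. The hyperplane (W0, b0) below is an arbitrary illustrative choice, and the weights follow one concrete parameterization consistent with the proof sketch (self-connection strengthened by w(t1 + t2), bias offset by w·t1/2):

```python
import math
from itertools import product

def sigm(a):
    return 1.0 / (1.0 + math.exp(-a))

n = 4
W0 = [1.0, -2.0, 0.5, 1.5]   # an arbitrary hyperplane (hypothetical values)
b0 = -0.25
H = [tuple(h) for h in product((0, 1), repeat=n)]
act = lambda h: sum(wj * hj for wj, hj in zip(W0, h)) + b0
S = [h for h in H if act(h) > 0]

t1 = min(act(h) for h in S)        # strictly positive: S is finite
t2 = max(abs(act(h)) for h in H)   # finite: Hn is finite
i, w, eps = 1, 100.0, 1e-3
# Row i of the next layer: the scaled hyperplane plus a strong self-connection.
I = lambda h: w * act(h) + w * (t1 + t2) * h[i] - w * t1 / 2

for h in H:
    p_one = sigm(I(h))                # P(v_i = 1 | h)
    if h in S:
        assert p_one >= 1 - eps       # bit i is set to 1 on S
    else:
        keep = p_one if h[i] == 1 else 1.0 - p_one
        assert keep >= 1 - eps        # bit i is copied outside S
```

Here w = 100 is simply chosen large enough that every margin w·t1/2 drives the sigmoid within ε of 0 or 1.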

It is trivial to adapt theorem 5 so that
P(vi = 0 | h) ⩾ 1 − ε if h ∈ S,   and   P(vi = hi | h) ⩾ 1 − ε if h ∉ S.

Therefore, using this strategy for 1 ⩽ i ⩽ n, we can apply the following transformation at every layer:

  • Define a vector W0,: and a bias b0.

  • Define S = {h ∈ Hn | W0,:^T h + b0 > 0}.

  • Choose an h0 in Hn.

  • For every h in S, map h to h0.

  • For every h not in S, map h to itself.

In the proof below, until the last stage, we shall use sets S that contain only one vector h. This allows us to prove the following theorem:

Theorem 6 (universal discriminators).

A neural network with 2^{n−1} + 1 layers of n units with the sigmoid as transfer function can model any nonconstant function f from Hn to {0, 1} arbitrarily well.

Proof.

Let N0 be the number of vectors h such that f(h) = 0 and N1 be the number of vectors h such that f(h) = 1. We therefore have:

  • N0 + N1 = 2^n,

  • min(N0, N1) ⩽ 2^{n−1}.

Let us assume that N0 ⩽ N1 (and, consequently, N0 ⩽ 2^{n−1}). Let h0 be a vector to be mapped to 0. At every layer, we will pick an arbitrary binary vector h such that h ≠ h0 and f(h) = 0 and map it to h0, leaving the other vectors unchanged. This is possible using theorem 5. Once all the vectors to be mapped to 0 have been mapped to h0 (which requires at most 2^{n−1} layers, including the input layer), the hyperplane separating h0 from all the other vectors of Hn performs the correct mapping.
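The whole procedure can be simulated deterministically, with each probabilistic layer replaced by its limit behavior. The Boolean function f below is a hypothetical example; the code relocates the minority class one vector per layer and then applies the final separating hyperplane:

```python
from itertools import product

n = 3
# A hypothetical target Boolean function, used purely for illustration.
f = lambda h: (h[0] ^ h[1]) & (1 - h[2])
H = [tuple(h) for h in product((0, 1), repeat=n)]

# Pick the minority class: at most 2^(n-1) vectors need to be relocated.
minority = min(([h for h in H if f(h) == v] for v in (0, 1)), key=len)
label = f(minority[0])
assert len(minority) <= 2 ** (n - 1)
h0 = minority[0]

# One layer per remaining minority vector maps it to h0 and leaves every
# other vector unchanged (theorem 5 with S = {h}); their composition is:
def apply_layers(h):
    return h0 if f(h) == label else h

# Final stage: a single hyperplane separating h0 from the rest of Hn.
w = [2 * b - 1 for b in h0]   # +1 where h0 has a 1, -1 where it has a 0
b = 0.5 - sum(h0)
on_h0_side = lambda h: sum(wi * hi for wi, hi in zip(w, h)) + b > 0

for h in H:
    predicted = label if on_h0_side(apply_layers(h)) else 1 - label
    assert predicted == f(h)
```

The closing hyperplane is the standard one isolating a single vertex of the hypercube: it scores h0 at +0.5 and every other vertex at −0.5 or below.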

5  Conclusion

Despite the surge of interest in deep networks in recent years, little is known about their theoretical power. We have introduced a proof technique based on Gray codes that improves on a previous theorem (Sutskever & Hinton, 2008) regarding the representational power of deep but narrow sigmoidal belief networks (such as DBNs; Hinton et al., 2006). Instead of 3 × 2^n layers of size n, the bound presented here involves 2^n/n + 1 layers of size n (i.e., n^2 + n·2^n + 2^n + n parameters). We do not know if this is the lowest achievable number of layers. Noticing that only half of the units at each layer may change, we believe this bound could be improved by a factor of 2 or less. One important thing to notice is that this is, perhaps unsurprisingly, of the same order of magnitude as the maximum number of parameters required in an RBM to model any distribution. This leads to much more complex and yet more interesting questions. For a given number of parameters, which architecture can best represent distributions of interest? Is the representational power of DBNs more concentrated around real-world distributions when one has access to only a limited number of parameters (i.e., a limited number of training examples)?

Finally, exploiting the same proof technique, we also showed that deep but narrow deterministic networks (with no more than 2^{n−1} + 1 layers of size n) can represent any binary classifier on n-dimensional binary vectors.

References

Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2, 1–127.

Bengio, Y., Lamblin, P., Popovici, D., & Larochelle, H. (2007). Greedy layer-wise training of deep networks. In B. Schölkopf, J. Platt, & T. Hoffman (Eds.), Advances in neural information processing systems, 19 (pp. 153–160). Cambridge, MA: MIT Press.

Freund, Y., & Haussler, D. (1991). Unsupervised learning of distributions of binary vectors using 2-layer networks. In J. Moody, S. J. Hanson, & R. Lippmann (Eds.), Advances in neural information processing systems, 4. San Francisco: Morgan Kaufmann.

Gray, F. (1953). Pulse code communication. U.S. Patent 2,632,058.

Hastad, J., & Goldmann, M. (1991). On the power of small-depth threshold circuits. Computational Complexity, 1, 113–129.

Hinton, G. E. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14, 1771–1800.

Hinton, G. E., Osindero, S., & Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554.

Le Roux, N., & Bengio, Y. (2008). Representational power of restricted Boltzmann machines and deep belief networks. Neural Computation, 20(6), 1631–1649.

Neal, R. (1992). Connectionist learning of belief networks. Artificial Intelligence, 56, 71–113.

Ranzato, M., Poultney, C., Chopra, S., & LeCun, Y. (2007). Efficient learning of sparse representations with an energy-based model. In B. Schölkopf, J. Platt, & T. Hoffman (Eds.), Advances in neural information processing systems, 19. Cambridge, MA: MIT Press.

Rojas, R. (2003). Networks of width one are universal classifiers. In International Joint Conference on Neural Networks (Vol. 4, pp. 3124–3127). New York: Elsevier.

Smolensky, P. (1986). Information processing in dynamical systems: Foundations of harmony theory. In D. E. Rumelhart & J. L. McClelland (Eds.), Parallel distributed processing (Vol. 1, pp. 194–281). Cambridge, MA: MIT Press.

Sutskever, I., & Hinton, G. E. (2008). Deep, narrow sigmoid belief networks are universal approximators. Neural Computation, 20(11), 2629–2636.

Tieleman, T. (2008). Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the International Conference on Machine Learning (Vol. 25). Madison, WI: Omnipress.

Vincent, P., Larochelle, H., Bengio, Y., & Manzagol, P.-A. (2008). Extracting and composing robust features with denoising autoencoders. In Proceedings of the Twenty-Fifth International Conference on Machine Learning (ICML'2008). Madison, WI: Omnipress.