



### Kei Uchizawa

1–3 of 3 results


Journal Articles

Publisher: Journals Gateway

*Neural Computation* (2024) 36 (8): 1541–1567.

Published: 19 July 2024


Abstract

We present an investigation of threshold circuits and other discretized neural networks in terms of four computational resources: size (the number of gates), depth (the number of layers), weight (weight resolution), and energy, where energy is a complexity measure inspired by sparse coding and is defined as the maximum number of gates outputting nonzero values, taken over all input assignments. As our main result, we prove that if a threshold circuit $C$ of size $s$, depth $d$, energy $e$, and weight $w$ computes a Boolean function $f$ (i.e., a classification task) of $n$ variables, then $\log(\mathrm{rk}(f)) \le ed(\log s + \log w + \log n)$ regardless of the algorithm employed by $C$ to compute $f$, where $\mathrm{rk}(f)$ is a parameter determined solely by $f$, defined as the maximum rank of a communication matrix of $f$ taken over all possible partitions of the $n$ input variables. For example, for the Boolean function $\mathrm{CD}_n(\xi) = \bigvee_{i=1}^{n/2} \xi_i \wedge \xi_{n/2+i}$, we can prove that $n/2 \le ed(\log s + \log w + \log n)$ holds for any circuit $C$ computing $\mathrm{CD}_n$. While the left-hand side is linear in $n$, the right-hand side is bounded by the product of the logarithmic factors of $s$, $w$, $n$ and the linear factors of $d$, $e$. If we view the logarithmic terms as having a negligible impact on the bound, our result implies a trade-off between depth and energy: $n/2$ needs to be smaller than the product of $e$ and $d$. For other neural network models, such as discretized ReLU circuits and discretized sigmoid circuits, we also prove that a similar trade-off holds. Thus, our results indicate that increasing depth linearly enhances the capability of neural networks to acquire sparse representations when there are hardware constraints on the number of neurons and weight resolution.

**Includes:** Supplementary data
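The rank parameter $\mathrm{rk}(\mathrm{CD}_n)$ in the bound above can be checked numerically for small $n$. The sketch below (illustrative, assuming NumPy; not code from the paper) builds the communication matrix of $\mathrm{CD}_n$ for the natural partition, where Alice holds the first $n/2$ variables and Bob the rest, and computes its real rank, which comes out to $2^{n/2} - 1$:

```python
import numpy as np
from itertools import product

# CD_n(xi) = OR_{i=1..n/2} (xi_i AND xi_{n/2+i}).  Give Alice the first
# half of the inputs and Bob the second half; M[x][y] = CD_n(x, y).
n = 8
m = n // 2
halves = list(product([0, 1], repeat=m))
M = np.array([[int(any(a & b for a, b in zip(x, y))) for y in halves]
              for x in halves])

rank = np.linalg.matrix_rank(M)
print(rank)           # 2^m - 1 = 15 for n = 8 (the all-zero row kills one rank)
print(np.log2(rank))  # ~ n/2, the left-hand side of the trade-off
```

The matrix is the complement of the set-disjointness matrix, whose full rank $2^{n/2}$ is what drives the linear-in-$n$ lower bound quoted in the abstract.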


*Neural Computation* (2021) 33 (4): 1037–1062.

Published: 26 March 2021

Abstract

Spatial Monte Carlo integration (SMCI) is an extension of standard Monte Carlo integration that can approximate expectations on Markov random fields with high accuracy. SMCI was applied to pairwise Boltzmann machine (PBM) learning, achieving superior results over some existing methods. The approximation level of SMCI can be altered, and a higher-order approximation of SMCI was proved to be statistically more accurate than a lower-order one. However, SMCI as proposed in previous studies suffers from a limitation that prevents the application of higher-order methods to dense systems. This study makes two contributions. First, a generalization of SMCI, called generalized SMCI (GSMCI), is proposed, which relaxes the above limitation; moreover, a statistical accuracy bound for GSMCI is proved. Second, a new PBM learning method is proposed that combines SMCI with persistent contrastive divergence. The proposed learning method significantly improves learning accuracy.
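The first-order idea behind SMCI — replace a raw sample of a spin with its exact conditional expectation given its neighbors (a Rao-Blackwellized estimate) — can be sketched on a toy ±1 pairwise Boltzmann machine. This is a hypothetical illustration assuming NumPy, not the authors' method or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy PBM over +-1 spins: E(x) = -x^T W x / 2 - b^T x, zero diagonal.
d = 6
W = rng.normal(scale=0.3, size=(d, d))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
b = rng.normal(scale=0.1, size=d)

def gibbs_samples(n_samples, burn=500):
    """Single-site Gibbs sampler for the toy PBM."""
    x = rng.choice([-1, 1], size=d)
    out = []
    for t in range(burn + n_samples):
        for i in range(d):
            p = 1.0 / (1.0 + np.exp(-2.0 * (W[i] @ x + b[i])))  # P(x_i=+1 | rest)
            x[i] = 1 if rng.random() < p else -1
        if t >= burn:
            out.append(x.copy())
    return np.array(out)

S = gibbs_samples(2000)

# Naive MC estimate of E[x_0] vs. a first-order SMCI-style estimate, which
# averages the exact conditional mean tanh(W[0] @ x + b[0]) over the samples.
naive = S[:, 0].mean()
smci1 = np.tanh(S @ W[0] + b[0]).mean()
print(naive, smci1)
```

Both estimators target the same expectation; the conditioned version typically has lower variance, which is the statistical-accuracy ordering the abstract refers to.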


*Neural Computation* (2006) 18 (12): 2994–3008.

Published: 01 December 2006

Abstract

Circuits composed of threshold gates (McCulloch-Pitts neurons, or perceptrons) are simplified models of neural circuits with the advantage that they are theoretically more tractable than their biological counterparts. However, when such threshold circuits are designed to perform a specific computational task, they usually differ in one important respect from computations in the brain: they require very high activity. On average, every second threshold gate fires (outputs a 1) during a computation. By contrast, the activity of neurons in the brain is much sparser, with only about 1% of neurons firing. This mismatch between threshold and neuronal circuits is due to the particular complexity measures (circuit size and circuit depth) that have been minimized in previous threshold circuit constructions. In this letter, we investigate a new complexity measure for threshold circuits, energy complexity, whose minimization yields computations with sparse activity. We prove that all computations by threshold circuits of polynomial size with entropy O(log n) can be restructured so that their energy complexity is reduced to a level near the entropy of circuit states. This entropy of circuit states is a novel circuit complexity measure, which is of interest not only in the context of threshold circuits but for circuit complexity in general. As an example of how this measure can be applied, we show that any polynomial-size threshold circuit with entropy O(log n) can be simulated by a polynomial-size threshold circuit of depth 3. Our results demonstrate that the structure of circuits that results from minimizing their energy complexity is quite different from the structure that results from minimizing previously considered complexity measures, and is potentially closer to the structure of neural circuits in the nervous system. In particular, different pathways are activated in these circuits for different classes of inputs. This letter shows that such circuits with sparse activity have a surprisingly large computational power.
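Energy complexity, as defined in this line of work, is easy to compute exhaustively for tiny circuits. The sketch below (an illustrative toy, not a construction from the letter) evaluates a three-gate threshold circuit for XOR on all inputs and takes the maximum number of firing gates:

```python
from itertools import product

def gate(weights, bias, inputs):
    """Threshold gate: fires (outputs 1) iff the weighted sum meets the bias."""
    return int(sum(w * v for w, v in zip(weights, inputs)) >= bias)

def circuit(x1, x2):
    """Depth-2 threshold circuit computing XOR; returns all gate outputs."""
    g_or  = gate([1, 1], 1, [x1, x2])          # OR(x1, x2)
    g_and = gate([1, 1], 2, [x1, x2])          # AND(x1, x2)
    g_out = gate([1, -2], 1, [g_or, g_and])    # fires iff OR=1 and AND=0, i.e. XOR
    return [g_or, g_and, g_out]

# Energy = max over all input assignments of the number of gates outputting 1.
energy = max(sum(circuit(*x)) for x in product([0, 1], repeat=2))
print(energy)  # 2: on input (0,1) or (1,0), the OR gate and the output gate fire
```

Even this toy shows the measure is input-dependent: on (0,0) no gate fires at all, while the worst-case input activates two of the three gates.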