

1-8 of 8 results for "Jiří Šíma"


Journal Articles

Publisher: Journals Gateway

*Neural Computation* (2014) 26 (5): 953–973.

Published: 01 May 2014


Abstract

Recently a new so-called energy complexity measure has been introduced and studied for feedforward perceptron networks. This measure is inspired by the fact that biological neurons require more energy to transmit a spike than not to fire, and the activity of neurons in the brain is quite sparse, with only about 1% of neurons firing. In this letter, we investigate the energy complexity of recurrent networks, which counts the number of active neurons at any time instant of a computation. We prove that any deterministic finite automaton with m states can be simulated by a neural network of optimal size s = Θ(√m) with a time overhead of τ = O(s/e) per one input bit, using energy O(e), for any e such that e = Ω(log s) and e = O(s), which shows the time-energy trade-off in recurrent networks. In addition, for a time overhead τ satisfying τ^τ = o(s/log s), we obtain a lower bound of s^(c/τ) on the energy of such a simulation, for some constant c > 0 and for infinitely many s.
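The energy measure itself is easy to make concrete. Below is a toy sketch (my own illustration with a made-up 3-state automaton, not the paper's optimal-size construction): a DFA simulated by threshold units with one-hot state coding. Such a network has m state neurons rather than the optimal size, but exactly one of them fires per step, so its energy per step is 1.

```python
import numpy as np

# Toy illustration of energy complexity (not the paper's construction):
# a DFA simulated with one-hot state neurons. The "energy" of a step is
# the number of active neurons, here always exactly 1.
# Hypothetical 3-state DFA over {0, 1}: delta[state][symbol] -> next state.
delta = {0: {0: 1, 1: 0}, 1: {0: 2, 1: 1}, 2: {0: 0, 1: 2}}

def simulate(word, start=0):
    m = len(delta)
    state = np.zeros(m, dtype=int)
    state[start] = 1                          # one-hot encoding of the DFA state
    energy = []                               # active-neuron count at each step
    for sym in word:
        W = np.zeros((m, m), dtype=int)       # neuron j fires iff delta[prev][sym] == j
        for i in range(m):
            W[delta[i][sym], i] = 1
        state = (W @ state >= 1).astype(int)  # one parallel threshold update
        energy.append(int(state.sum()))
    return int(state.argmax()), energy

final, energy = simulate([0, 1, 0, 0])
print(final, energy)                          # final state and per-step energy
```

The paper's point is that this naive simulation sits at one extreme of the trade-off: size can be compressed toward the optimum at the cost of more active neurons or more time per input bit.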


*Neural Computation* (2009) 21 (2): 583–617.

Published: 01 February 2009


Abstract

The important task of generating the minimum number of sequential triangle strips (tristrips) for a given triangulated surface model is motivated by applications in computer graphics. This hard combinatorial optimization problem is reduced to the minimum energy problem in Hopfield nets by a linear-size construction. In particular, the classes of equivalent optimal stripifications are mapped one to one to the minimum energy states reached by a Hopfield network during sequential computation starting at the zero initial state. Thus, the underlying Hopfield network powered by simulated annealing (i.e., a Boltzmann machine), which is implemented in the program HTGEN, can be used for computing semioptimal stripifications. Practical experiments confirm that one can obtain much better results with HTGEN than with FTSG, a leading conventional stripification program (a reference method not based on neural nets), although the running time of simulated annealing grows rapidly near the global optimum. Nevertheless, HTGEN exhibits empirical linear time complexity when the parameters of simulated annealing (i.e., the initial temperature and the stopping criterion) are fixed, and thus provides semioptimal offline solutions, even for huge models of hundreds of thousands of triangles, within a reasonable time.
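The minimum energy problem that HTGEN attacks can be sketched in miniature. This is a generic Metropolis simulated-annealing loop on a tiny Hopfield energy with made-up weights; HTGEN's actual energy function encodes the stripification instance and is far larger.

```python
import math
import random

# Minimal simulated-annealing sketch for the minimum energy problem in a
# Hopfield net. Weights below are illustrative, not from the paper.
W = [[0, 2, 2], [2, 0, 2], [2, 2, 0]]   # symmetric weights, zero diagonal
theta = [1, 1, 1]                       # thresholds

def energy(x):
    # E(x) = -1/2 * sum_ij W[i][j] x_i x_j + sum_i theta[i] x_i, x in {0,1}^3
    quad = sum(W[i][j] * x[i] * x[j] for i in range(3) for j in range(3))
    return -0.5 * quad + sum(theta[i] * x[i] for i in range(3))

def anneal(steps=2000, t0=2.0, cooling=0.999, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(3)]
    t = t0
    for _ in range(steps):
        i = rng.randrange(3)
        y = x[:]
        y[i] ^= 1                                    # propose a single-bit flip
        d = energy(y) - energy(x)
        if d <= 0 or rng.random() < math.exp(-d / t):
            x = y                                    # Metropolis acceptance rule
        t *= cooling                                 # geometric cooling schedule
    return x, energy(x)
```

For these weights the unique global minimum is x = (1, 1, 1) with E = -3, which brute force over the 2^3 states confirms; the abstract's observation that running time grows rapidly near the global optimum refers to how slowly such chains close the final gap on large instances.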


*Neural Computation* (2005) 17 (12): 2635–2647.

Published: 01 December 2005

Abstract

We study the computational complexity of training a single spiking neuron N with binary coded inputs and output that, in addition to adaptive weights and a threshold, has adjustable synaptic delays. A synchronization technique is introduced so that the results concerning the nonlearnability of spiking neurons with binary delays generalize to arbitrary real-valued delays. In particular, the consistency problem for N with programmable weights, a threshold, and delays, as well as its approximation version, are proven to be NP-complete. It follows that spiking neurons with arbitrary synaptic delays are not properly PAC-learnable and do not allow robust learning unless RP = NP. In addition, the representation problem for N, the question of whether an n-variable Boolean function given in DNF (or as a disjunction of O(n) threshold gates) can be computed by a spiking neuron, is shown to be coNP-hard.


*Neural Computation* (2003) 15 (12): 2727–2778.

Published: 01 December 2003

Abstract

We survey and summarize the literature on the computational aspects of neural network models by presenting a detailed taxonomy of the various models according to their complexity-theoretic characteristics. The criteria of classification include the architecture of the network (feedforward versus recurrent), time model (discrete versus continuous), state type (binary versus analog), weight constraints (symmetric versus asymmetric), network size (finite nets versus infinite families), and computation type (deterministic versus probabilistic), among others. The underlying results concerning the computational power and complexity issues of perceptron, radial basis function, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature. In our survey, we focus mainly on digital computation, whose inputs and outputs are binary in nature, although their values are quite often encoded as analog neuron states. We omit the important learning issues.


*Neural Computation* (2003) 15 (3): 693–733.

Published: 01 March 2003

Abstract

We establish a fundamental result in the theory of computation by continuous-time dynamical systems by showing that systems corresponding to so-called continuous-time symmetric Hopfield nets are capable of general computation. As is well known, such networks have very constrained, Lyapunov-function-controlled dynamics. Nevertheless, we show that they are universal and efficient computational devices, in the sense that any convergent synchronous fully parallel computation by a recurrent network of n discrete-time binary neurons, with in general asymmetric coupling weights, can be simulated by a symmetric continuous-time Hopfield net containing only 18n + 7 units employing the saturated-linear activation function. Moreover, if the asymmetric network has maximum integer weight size w_max and converges in discrete time t*, then the corresponding Hopfield net can be designed to operate in continuous time Θ(t*/ε) for any ε > 0 such that w_max 2^(12n) ≤ ε 2^(1/ε). In terms of standard discrete computation models, our result implies that any polynomially space-bounded Turing machine can be simulated by a family of polynomial-size continuous-time symmetric Hopfield nets.


*Neural Computation* (2002) 14 (11): 2709–2728.

Published: 01 November 2002

Abstract

We first present a brief survey of hardness results for training feedforward neural networks. These results are then completed by the proof that the simplest architecture, containing only a single neuron that applies a sigmoidal activation function σ: ℝ → [α, β], satisfying certain natural axioms (e.g., the standard (logistic) sigmoid or the saturated-linear function), to the weighted sum of n inputs, is hard to train. In particular, the problem of finding the weights of such a unit that minimize the quadratic training error within (β − α)², or its average (over a training set) within 5(β − α)²/(12n), of its infimum proves to be NP-hard. Hence, the well-known backpropagation learning algorithm appears not to be efficient even for a single neuron, which has negative consequences in constructive learning.
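The objective in question is easy to write down, which is what makes the hardness result striking. The sketch below states the average quadratic error of a single logistic unit; the training data are a hypothetical example (XOR), not from the paper.

```python
import numpy as np

# The training objective whose approximate minimization the abstract shows to
# be NP-hard: the average quadratic error of a single sigmoidal unit.
def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))     # sigma: R -> [0, 1], so beta - alpha = 1

def avg_quadratic_error(w, X, y):
    # (1/|T|) * sum_k (sigma(w . x_k) - y_k)^2 over the training set T
    return float(np.mean((logistic(X @ w) - y) ** 2))

# Hypothetical data for illustration: XOR targets.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

print(avg_quadratic_error(np.zeros(2), X, y))   # all outputs 0.5 -> error 0.25
```

The hardness statement says that even getting within a fixed additive term of the infimum of this smooth, innocuous-looking function is NP-hard once n grows, bias-free of any particular data set.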


*Neural Computation* (2000) 12 (12): 2965–2989.

Published: 01 December 2000

Abstract

We investigate the computational properties of finite binary- and analog-state discrete-time symmetric Hopfield nets. For binary networks, we obtain a simulation of convergent asymmetric networks by symmetric networks with only a linear increase in network size and computation time. Then we analyze the convergence time of Hopfield nets in terms of the length of their bit representations. Here we construct an analog symmetric network whose convergence time exceeds the convergence time of any binary Hopfield net with the same representation length. Further, we prove that the MIN ENERGY problem for analog Hopfield nets is NP-hard and provide a polynomial time approximation algorithm for this problem in the case of binary nets. Finally, we show that symmetric analog nets with an external clock are computationally Turing universal.
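The binary model discussed above admits a compact sketch (weights chosen arbitrarily for illustration, not from the paper): with symmetric coupling weights and zero diagonal, sequential updates never increase the standard Hopfield energy, so the net converges to a state that is a local minimum of that energy.

```python
import numpy as np

# Sketch of a binary (0/1) symmetric Hopfield net under sequential updates.
# With symmetric W and zero diagonal, each flip strictly decreases the energy,
# which guarantees convergence. Weights below are illustrative only.
W = np.array([[0., 2.], [2., 0.]])      # symmetric coupling, zero diagonal
theta = np.array([-1., -1.])            # thresholds

def energy(x):
    # E(x) = -1/2 x^T W x + theta . x
    return float(-0.5 * x @ W @ x + theta @ x)

def run_sequential(x):
    x = x.astype(float).copy()
    changed = True
    while changed:                      # terminates: energy drops on every flip
        changed = False
        for i in range(len(x)):
            new = 1.0 if W[i] @ x - theta[i] >= 0 else 0.0
            if new != x[i]:
                x[i], changed = new, True
    return x

x = run_sequential(np.array([0., 0.]))
print(x, energy(x))                     # converged state and its energy
```

Starting from the zero state, both neurons switch on and the net settles at (1, 1) with energy -4, down from 0: a tiny instance of the convergence behavior whose worst-case time the abstract analyzes in terms of representation length.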


*Neural Computation* (1994) 6 (5): 842–850.

Published: 01 September 1994

Abstract

The loading problem formulated by J. S. Judd seems to be a relevant model for supervised connectionist learning of feedforward networks from the complexity point of view. It is known that loading general network architectures is NP-complete (intractable) when the (training) tasks are also general. Many strong restrictions on architectural design and/or on the tasks do not help to avoid the intractability of loading. Judd concentrated on width-expanding architectures with constant depth and found a polynomial-time algorithm for loading restricted shallow architectures. He suppressed the effect of depth on loading complexity and left open, as a prototypical computational problem, the loading of easy regular triangular architectures that might capture the crux of depth difficulties. We have proven this problem to be NP-complete. This result does not give much hope for the existence of an efficient algorithm for loading deep networks.