Helge Ritter
1-5 of 5
Neural Computation (2001) 13 (8): 1811–1825.
Published: 01 August 2001
Abstract
We establish two conditions that ensure the nondivergence of additive recurrent networks with unsaturating piecewise linear transfer functions, also called linear threshold or semilinear transfer functions. As Hahnloser, Sarpeshkar, Mahowald, Douglas, and Seung (2000) showed, networks of this type can be efficiently built in silicon and exhibit the coexistence of digital selection and analog amplification in a single circuit. To obtain this behavior, the network must be multistable and nondivergent, and our conditions allow one to determine the regimes where this can be achieved with maximal recurrent amplification. The first condition can be applied to nonsymmetric networks and has a simple interpretation: the strength of local inhibition must match the sum of the excitatory weights converging onto a neuron. The second condition is restricted to symmetric networks but can also take into account the stabilizing effect of nonlocal inhibitory interactions. We demonstrate the application of the conditions on a simple example and on the orientation-selectivity model of Ben-Yishai, Lev Bar-Or, and Sompolinsky (1995), and we show that they can be used to identify regions of maximal orientation-selective amplification and symmetry breaking in that model.
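To make the setting concrete, here is a minimal NumPy sketch, not the paper's formulation: an additive recurrent network with a linear threshold transfer function, tau * dx/dt = -x + [Wx + h]_+, integrated with forward Euler, plus a check in the spirit of the first condition (local inhibition balancing the total excitatory weight converging onto each neuron). The weight scale, the inhibition strength, and the exact form of the check are illustrative assumptions.

import numpy as np

# Additive recurrent network with a linear threshold (rectifying)
# transfer function: tau * dx/dt = -x + [W x + h]_+ (forward Euler).
def relu(u):
    return np.maximum(u, 0.0)

def simulate(W, h, x0, dt=0.01, steps=5000, tau=1.0):
    x = x0.copy()
    for _ in range(steps):
        x += (dt / tau) * (-x + relu(W @ x + h))
    return x

def excitation_balanced(W, local_inhibition=1.0):
    # Illustrative check, not the paper's criterion: the summed excitatory
    # weight onto each neuron should not exceed the local inhibition strength.
    excit = np.clip(W, 0.0, None).sum(axis=1)
    return bool(np.all(excit <= local_inhibition))

rng = np.random.default_rng(0)
n = 8
W = 0.2 * rng.standard_normal((n, n))
np.fill_diagonal(W, 0.0)
h = rng.uniform(0.0, 1.0, n)
print("balance check:", excitation_balanced(W))
print("state after integration:", np.round(simulate(W, h, np.zeros(n)), 3))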
Neural Computation (2001) 13 (2): 357–387.
Published: 01 February 2001
Abstract
We present a recurrent neural network for feature binding and sensory segmentation: the competitive-layer model (CLM). The CLM uses topographically structured competitive and cooperative interactions in a layered network to partition a set of input features into salient groups. The dynamics is formulated within a standard additive recurrent network with linear threshold neurons. Contextual relations among features are coded by pairwise compatibilities, which define an energy function to be minimized by the neural dynamics. Because it uses dynamical winner-take-all circuits, the model exploits amplitude information in the grouping process and thereby gains more flexible response properties than spin models of segmentation. We prove analytic results on the convergence and stable attractors of the CLM, which generalize earlier results on winner-take-all networks, and we incorporate deterministic annealing for robustness against local minima. The piecewise linear dynamics of the CLM allows a linear eigensubspace analysis, which we use to analyze the dynamics of binding in conjunction with annealing. For the example of contour detection, we show how the CLM can integrate figure-ground segmentation and grouping into a unified model.
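As an illustration of the flavor of these dynamics, the following is a small CLM-style sketch under assumptions of my own: two layers, scalar input features, a Gaussian-similarity compatibility matrix F, and a columnar competition strength J (illustrative notation and values, not the paper's). Each feature ends up assigned to the layer in which its activity wins; the deterministic annealing that the paper uses to escape local minima is omitted here.

import numpy as np

# x[a, r] is the activity of input feature r in layer a. Lateral support
# within a layer comes from the pairwise compatibility matrix F; the J-term
# implements the columnar winner-take-all competition across layers.
def relu(u):
    return np.maximum(u, 0.0)

rng = np.random.default_rng(1)
feats = np.concatenate([rng.normal(0.0, 0.1, 10), rng.normal(3.0, 0.1, 10)])
N, L, J = len(feats), 2, 2.0
F = 0.2 * np.exp(-(feats[:, None] - feats[None, :]) ** 2) - 0.05
h = np.ones(N)

x = 0.01 * rng.random((L, N))
for _ in range(4000):
    col = x.sum(axis=0)                     # total activity per feature column
    drive = x @ F + J * (h - col)[None, :]  # lateral support + competition
    x += 0.05 * (-x + relu(drive))

# Features from the two clusters should end up in different layers.
print("layer assignment per feature:", x.argmax(axis=0))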
Neural Computation (2001) 13 (2): 453–475.
Published: 01 February 2001
Abstract
In the domain of unsupervised learning, mixtures of gaussians have become a popular tool for statistical modeling. For this class of generative models, we present a complexity control scheme that provides an effective means of avoiding the overfitting usually encountered with unconstrained (mixtures of) gaussians in high dimensions. Given a prespecified level of resolution, as implied by a fixed-variance noise model, the scheme automatically selects the dimensionalities of the local signal subspaces by maximum likelihood estimation. Together with a resolution-based control scheme for adjusting the number of mixture components, we arrive at an incremental model refinement procedure within a common deterministic annealing framework, which enables an efficient exploration of the model space. The advantages of the resolution-based framework are illustrated by experimental results on synthetic and high-dimensional real-world data.
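One way to read the subspace-selection idea in code: with a fixed isotropic noise variance sigma^2 defining the resolution, a local signal subspace retains exactly those principal directions whose variance exceeds the noise floor, which is the maximum-likelihood choice of dimensionality under a PPCA-style local model. The sketch below is my illustration of that criterion, not the paper's implementation; the threshold value is an assumption.

import numpy as np

def local_dim(X, sigma2):
    # Keep the principal directions whose variance exceeds the noise floor.
    Xc = X - X.mean(axis=0)
    evals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # descending
    return int(np.sum(evals > sigma2))

rng = np.random.default_rng(2)
# 3 signal dimensions embedded in 10, plus isotropic noise of variance 0.01.
Z = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 10))
X = Z + 0.1 * rng.standard_normal((500, 10))
print("selected local dimensionality:", local_dim(X, sigma2=0.05))  # expect 3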
Neural Computation (1997) 9 (5): 959–970.
Published: 01 July 1997
Abstract
Correlation-based learning (CBL) has been suggested as the mechanism that underlies the development of simple-cell receptive fields in the primary visual cortex of cats, including orientation preference (OR) and ocular dominance (OD) (Linsker, 1986; Miller, Keller, & Stryker, 1989). CBL has been applied successfully to the development of OR and OD individually (Miller, Keller, & Stryker, 1989; Miller, 1994; Miyashita & Tanaka, 1991; Erwin, Obermayer, & Schulten, 1995), but, in contrast to competitive Hebbian models (Obermayer, Blasdel, & Schulten, 1992), the conditions for their joint development have not been studied (but see Erwin & Miller, 1995, for independent work on the same question). In this article, we provide insight into why this has been the case: OR and OD decouple in symmetric CBL models, and a joint development of OR and OD is possible only in a parameter regime that depends on nonlinear mechanisms.
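The decoupling claim can be checked in a few lines. In a linear CBL setting with identical within-eye correlations Cw for both eyes and symmetric between-eye correlations Cb (my paraphrase of the symmetric case), one Hebbian growth step dwL = Cw wL + Cb wR, dwR = Cb wL + Cw wR splits exactly into a sum channel wL + wR governed by Cw + Cb and a difference channel wL - wR governed by Cw - Cb, carrying OR-like and OD-like structure, respectively. The matrices below are illustrative.

import numpy as np

rng = np.random.default_rng(3)
n = 20
A = rng.standard_normal((n, n))
Cw = A @ A.T                  # symmetric within-eye correlation matrix
Cb = 0.5 * Cw                 # between-eye correlations (illustrative choice)

wL, wR = rng.standard_normal(n), rng.standard_normal(n)
dL = Cw @ wL + Cb @ wR        # one step of the coupled weight dynamics
dR = Cb @ wL + Cw @ wR
print(np.allclose(dL + dR, (Cw + Cb) @ (wL + wR)))  # sum channel decouples
print(np.allclose(dL - dR, (Cw - Cb) @ (wL - wR)))  # difference channel too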
Neural Computation (1996) 8 (7): 1521–1539.
Published: 01 October 1996
Abstract
Incrementally constructed cascade architectures are a promising alternative to networks of predefined size. This paper compares the direct cascade architecture (DCA) proposed in Littmann and Ritter (1992) to the cascade-correlation approach of Fahlman and Lebiere (1990) and to related approaches, and it discusses their properties on the basis of various benchmark results. One important virtue of DCA is that it allows the cascading of entire subnetworks, even if these admit no error backpropagation. Exploiting this flexibility and using LLM networks as cascaded elements, we show that the performance of the resulting network cascades can be greatly enhanced compared to the performance of a single network. Our results for the Mackey-Glass time series prediction task indicate that such deeply cascaded network architectures achieve good generalization even on small data sets, when shallow, broad architectures of comparable size suffer from overfitting. We conclude that the DCA approach offers a powerful and flexible alternative to existing schemes, such as the mixtures-of-experts approach, for the construction of modular systems from a wide range of subnetwork types.
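To convey the cascading idea, here is a hedged sketch in which each stage is a tiny fixed random-feature regressor, standing in for the LLM networks used in the paper, and receives the original input augmented by the previous stage's prediction; no error signal is backpropagated through the cascade. Stage structure, sizes, and the target function are illustrative assumptions, not the DCA specification.

import numpy as np

def fit_stage(X, y, rng, n_hidden=30, reg=1e-3):
    # Ridge regression on fixed random tanh features (no backpropagation).
    W = rng.standard_normal((X.shape[1], n_hidden))
    H = np.tanh(X @ W)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, beta

def predict_stage(stage, X):
    W, beta = stage
    return np.tanh(X @ W) @ beta

rng = np.random.default_rng(4)
X = rng.uniform(-3.0, 3.0, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

pred = np.zeros_like(y)
for k in range(3):
    Xa = np.hstack([X, pred[:, None]])  # original input + previous output
    stage = fit_stage(Xa, y, rng)
    pred = predict_stage(stage, Xa)
    print(f"stage {k}: mse = {np.mean((pred - y) ** 2):.4f}")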