Kenneth D. Miller
Journal Articles
Publisher: Journals Gateway
Neural Computation (2013) 25 (8): 1994–2037.
Published: 01 August 2013
Abstract
We study a rate-model neural network composed of excitatory and inhibitory neurons in which neuronal input-output functions are power laws with a power greater than 1, as observed in primary visual cortex. This supralinear input-output function leads to supralinear summation of network responses to multiple inputs for weak inputs. We show that for stronger inputs, which would drive the excitatory subnetwork to instability, the network will dynamically stabilize provided feedback inhibition is sufficiently strong. For a wide range of network and stimulus parameters, this dynamic stabilization yields a transition from supralinear to sublinear summation of network responses to multiple inputs. We compare this to the dynamic stabilization in the balanced network, which yields only linear behavior. We more exhaustively analyze the two-dimensional case of one excitatory and one inhibitory population. We show that in this case, dynamic stabilization will occur whenever the determinant of the weight matrix is positive and the inhibitory time constant is sufficiently small, and analyze the conditions for supersaturation, or decrease of firing rates with increasing stimulus contrast (which represents increasing input firing rates). In work to be presented elsewhere, we have found that this transition from supralinear to sublinear summation can explain a wide variety of nonlinearities in cerebral cortical processing.
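The transition from supralinear to sublinear summation described above can be sketched in a few lines for the two-dimensional (one excitatory, one inhibitory population) case. This is a minimal illustrative model, not the paper's: the gain k, exponent n, weight matrix, and time constants below are assumed values, chosen only so that det(W) > 0 and the inhibitory time constant is the smaller one.

```python
import numpy as np

# Minimal 2-population (E, I) rate model with a supralinear (power-law)
# input-output function. All parameter values are illustrative assumptions.
k, n = 0.04, 2.0              # power-law gain and exponent (n > 1)
W = np.array([[2.5, -1.3],    # [[W_EE, -W_EI],
              [2.4, -1.0]])   #  [W_IE, -W_II]], det(W) > 0
tau = np.array([0.02, 0.01])  # tau_I < tau_E aids dynamic stabilization

def steady_rates(h, dt=1e-4, steps=20000):
    """Euler-integrate tau * dr/dt = -r + k*[W r + h]_+^n to steady state."""
    r = np.zeros(2)
    for _ in range(steps):
        drive = np.maximum(W @ r + h, 0.0)
        r = r + dt * (-r + k * drive ** n) / tau
    return r

def summation_ratio(c):
    """E response to a doubled input over twice the single response:
    > 1 means supralinear summation, < 1 sublinear."""
    r1 = steady_rates(np.array([c, c]))[0]
    r2 = steady_rates(np.array([2 * c, 2 * c]))[0]
    return r2 / (2 * r1)

print("weak input: ", round(summation_ratio(1.0), 2))   # supralinear (> 1)
print("strong input:", round(summation_ratio(20.0), 2))  # sublinear (< 1)
```

For weak inputs the recurrent terms are negligible and the power law dominates, so doubling the input more than doubles the response; for strong inputs the stabilizing feedback inhibition makes the summed response sublinear.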
Neural Computation (2012) 24 (1): 25–31.
Published: 01 January 2012
Abstract
We demonstrate the mathematical equivalence of two commonly used forms of firing rate model equations for neural networks. In addition, we show that what is commonly interpreted as the firing rate in one form of model may be better interpreted as a low-pass-filtered firing rate, and we point out a conductance-based firing rate model.
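The two common forms in question are usually written as a "voltage" equation, τ dv/dt = −v + W f(v) + I, and a "rate" equation, τ dr/dt = −r + f(W r + I). The paper's result concerns the full dynamics; the sketch below checks only the static side of the correspondence, that the fixed points of the two forms are related by r* = f(v*) and v* = W r* + I. The weights, gain function, and sizes are illustrative assumptions.

```python
import numpy as np

# Fixed-point correspondence between the two firing-rate model forms.
# Small random weights keep both maps contractions, so plain iteration
# converges to the unique fixed point of each form.
rng = np.random.default_rng(0)
W = 0.15 * rng.standard_normal((5, 5))  # illustrative recurrent weights
I = rng.standard_normal(5)              # constant external input
f = np.tanh                             # illustrative gain function

def fixed_point(update, x0, iters=300):
    x = x0
    for _ in range(iters):
        x = update(x)
    return x

v_star = fixed_point(lambda v: W @ f(v) + I, np.zeros(5))  # voltage form
r_star = fixed_point(lambda r: f(W @ r + I), np.zeros(5))  # rate form

print(np.allclose(r_star, f(v_star)))       # rates correspond
print(np.allclose(v_star, W @ r_star + I))  # voltages correspond
```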
Neural Computation (1998) 10 (3): 529–547.
Published: 01 April 1998
Abstract
A simple model of correlation-based synaptic plasticity via axonal sprouting and retraction (Elliott, Howarth, & Shadbolt, 1996a) is shown to be equivalent to the class of correlation-based models (Miller, Keller, & Stryker, 1989), although these were formulated in terms of weight modification of anatomically fixed synapses. Both models maximize the same measure of synaptic correlation, subject to certain constraints on connectivity. Thus, the analyses of the correlation-based models suffice to characterize the behavior of the sprouting-and-retraction model. More detailed models are needed for theoretical distinctions to be drawn between plasticity via sprouting and retraction, weight modification, or a combination. The model of Elliott et al. involves stochastic search through allowed weight patterns for those that improve correlations. That of Miller et al. instead follows dynamical equations that determine continuous changes of the weights that improve correlations. The identity of these two approaches is shown to depend on the use of subtractive constraint enforcement in the models of Miller et al. More generally, to model the idea that neural development acts to maximize some measure of correlation subject to a constraint on the summed synaptic weight, the constraint must be enforced subtractively in a dynamical model.
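The closing claim — that a dynamical rule maximizing correlation under a fixed total weight must enforce the constraint subtractively — can be sketched directly: subtracting the mean of the Hebbian update leaves the weight sum unchanged while the correlation measure w·Cw still climbs. The correlation matrix and step size below are illustrative assumptions.

```python
import numpy as np

# Subtractive constraint enforcement for a linear Hebbian rule dw ~ C w:
# removing the mean of the update keeps sum(w) fixed while w.C.w increases.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
C = A @ A.T                  # illustrative symmetric PSD correlation matrix
w = np.full(4, 0.25)         # equal initial weights, sum = 1

total0 = w.sum()
corr0 = w @ C @ w
for _ in range(100):
    grad = C @ w
    grad -= grad.mean()      # subtractive enforcement: sum of update is 0
    w += 0.005 * grad

print(np.isclose(w.sum(), total0))  # total synaptic weight conserved
print(w @ C @ w > corr0)            # correlation measure increased
```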
Neural Computation (1997) 9 (5): 971–983.
Published: 01 July 1997
Abstract
To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky & Koch, 1993), it is critical to examine the dynamics of their neuronal integration, as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple integrate-and-fire model to the experimentally measured integrative properties of cortical regular spiking cells (McCormick, Connors, Lighthall, & Prince, 1985). After setting RC parameters, the postspike voltage reset is set to match experimental measurements of neuronal gain (obtained from in vitro plots of firing frequency versus injected current). Examination of the resulting model leads to an intuitive picture of neuronal integration that unifies the seemingly contradictory √N and random walk pictures that have previously been proposed. When ISIs are dominated by postspike recovery, √N arguments hold and spiking is regular; after the “memory” of the last spike becomes negligible, spike threshold crossing is caused by input variance around a steady state and spiking is Poisson. In integrate-and-fire neurons matched to cortical cell physiology, steady-state behavior is predominant, and ISIs are highly variable at all physiological firing rates and for a wide range of inhibitory and excitatory inputs.
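The steady-state regime the abstract describes is easy to reproduce: an integrate-and-fire neuron whose postspike reset sits close to threshold, driven by noisy input, fires irregularly because each spike is triggered by fluctuations around a steady state rather than by slow deterministic climb. Every number below (time constant, reset, drive statistics) is an illustrative assumption, not a value from the paper.

```python
import numpy as np

# Leaky integrate-and-fire neuron with reset near threshold, noisy drive.
rng = np.random.default_rng(2)
tau = 0.010                 # membrane time constant (s), assumed
v_th, v_reset = 1.0, 0.93   # reset close to threshold (high-gain regime)
mu, sigma = 0.98, 0.10      # mean and s.d. of effective input drive, assumed
dt, T = 1e-4, 20.0

n_steps = int(T / dt)
noise = sigma * np.sqrt(2 * dt / tau) * rng.standard_normal(n_steps)
v, spike_times = v_reset, []
for i in range(n_steps):
    v += dt / tau * (mu - v) + noise[i]   # Euler-Maruyama OU step
    if v >= v_th:                         # threshold crossing = spike
        spike_times.append(i * dt)
        v = v_reset

isi = np.diff(spike_times)
cv = isi.std() / isi.mean()               # coefficient of variation of ISIs
print(f"{len(isi)} ISIs, CV = {cv:.2f}")  # CV well above the regular regime
```

Because the mean drive sits near threshold and the reset erases little of the voltage gap, the "memory" of the last spike decays within roughly one membrane time constant, and the CV comes out high, consistent with the steady-state picture.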
Neural Computation (1994) 6 (1): 100–126.
Published: 01 January 1994
Abstract
Models of unsupervised, correlation-based (Hebbian) synaptic plasticity are typically unstable: either all synapses grow until each reaches the maximum allowed strength, or all synapses decay to zero strength. A common method of avoiding these outcomes is to use a constraint that conserves or limits the total synaptic strength over a cell. We study the dynamic effects of such constraints. Two methods of enforcing a constraint are distinguished, multiplicative and subtractive. For otherwise linear learning rules, multiplicative enforcement of a constraint results in dynamics that converge to the principal eigenvector of the operator determining unconstrained synaptic development. Subtractive enforcement, in contrast, typically leads to a final state in which almost all synaptic strengths reach either the maximum or minimum allowed value. This final state is often dominated by weight configurations other than the principal eigenvector of the unconstrained operator. Multiplicative enforcement yields a “graded” receptive field in which most mutually correlated inputs are represented, whereas subtractive enforcement yields a receptive field that is “sharpened” to a subset of maximally correlated inputs. If two equivalent input populations (e.g., two eyes) innervate a common target, multiplicative enforcement prevents their segregation (ocular dominance segregation) when the two populations are weakly correlated; whereas subtractive enforcement allows segregation under these circumstances. These results may be used to understand constraints both over output cells and over input cells. A variety of rules that can implement constrained dynamics are discussed.
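The multiplicative/subtractive contrast above can be seen in a toy simulation: under a linear Hebbian rule dw ∝ Cw with the total weight held fixed, multiplicative rescaling converges to the (graded) principal eigenvector, while subtracting the mean update and clipping to bounds drives every weight to its maximum or minimum. The correlation matrix and learning parameters are illustrative assumptions.

```python
import numpy as np

# Made-up input correlation matrix with all-positive, distance-dependent
# correlations (so its principal eigenvector is positive and graded).
C = np.array([[1.0, 0.6, 0.3, 0.1],
              [0.6, 1.0, 0.6, 0.3],
              [0.3, 0.6, 1.0, 0.6],
              [0.1, 0.3, 0.6, 1.0]])
eta, w_max, total = 0.05, 1.0, 2.0
w_mult = np.full(4, 0.5)
w_sub = np.full(4, 0.5) + np.array([0.01, 0.0, -0.01, 0.0])  # tiny asymmetry

for _ in range(2000):
    # Multiplicative enforcement: Hebbian step, then rescale sum to `total`.
    w_mult += eta * C @ w_mult
    w_mult *= total / w_mult.sum()
    # Subtractive enforcement: remove mean growth, clip to [0, w_max].
    g = C @ w_sub
    w_sub = np.clip(w_sub + eta * (g - g.mean()), 0.0, w_max)

evals, evecs = np.linalg.eigh(C)
principal = evecs[:, -1] * (total / evecs[:, -1].sum())
print("multiplicative ->", np.round(w_mult, 3))  # graded principal eigenvector
print("subtractive    ->", np.round(w_sub, 3))   # saturated at 0 or w_max
```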
Neural Computation (1990) 2 (3): 321–333.
Published: 01 September 1990
Abstract
A linear Hebbian equation for synaptic plasticity is derived from a more complex, nonlinear model by considering the initial development of the difference between two equivalent excitatory projections. This provides a justification for the use of such a simple equation to model activity-dependent neural development and plasticity, and allows analysis of the biological origins of the terms in the equation. Connections to previously published models are discussed.
Neural Computation (1990) 2 (2): 173–187.
Published: 01 June 1990
Abstract
Linsker has reported the development of center-surround receptive fields and oriented receptive fields in simulations of a Hebb-type equation in a linear network. The dynamics of the learning rule are analyzed in terms of the eigenvectors of the covariance matrix of cell activities. Analytic and computational results for Linsker's covariance matrices, and some general theorems, lead to an explanation of the emergence of center-surround and certain oriented structures. We estimate criteria for the parameter regime in which center-surround structures emerge.
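The eigenvector analysis rests on a standard fact: under a linear Hebbian rule dw/dt = Cw, the weight pattern becomes dominated by the eigenvector of the covariance matrix C with the largest eigenvalue. A toy check with a made-up Gaussian-correlation matrix (not Linsker's actual covariance matrices):

```python
import numpy as np

# Illustrative covariance: Gaussian falloff of correlation with distance.
x = np.arange(8)
C = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 2.0 ** 2)

rng = np.random.default_rng(3)
w = rng.standard_normal(8)
for _ in range(500):
    w += 0.1 * C @ w        # Euler step of the linear Hebbian rule dw/dt = C w
    w /= np.linalg.norm(w)  # normalize for display; raw growth is exponential

top = np.linalg.eigh(C)[1][:, -1]
alignment = abs(w @ top)    # |cos angle| with the principal eigenvector
print(f"alignment with principal eigenvector: {alignment:.4f}")
```

The principal eigenvector of this smooth covariance is itself smooth and unimodal, which is the flavor of argument used to explain which receptive-field structures emerge in each parameter regime.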