Rodney J. Douglas: 9 journal articles in Neural Computation
Neural Computation (2018) 30 (5): 1359–1393. Published: 01 May 2018.
Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of planar four-color graph coloring, maximum independent set, and Sudoku on this substrate, and we provide mathematical proofs that guarantee the graph coloring networks converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space, driven by the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits.
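As a rough illustration of the scheme, the sketch below builds one WTA group of nonsaturating linear threshold units per graph node (one unit per color) and wires inhibitory constraints directly between adjacent nodes rather than through dedicated programming neurons; all gains, weights, and the noise level are assumptions for the demo, not the paper's parameters.

```python
import numpy as np

# Sketch only: WTA-per-node graph coloring with direct constraint inhibition.
relu = lambda v: np.maximum(v, 0.0)

def color_graph(edges, n_nodes, n_colors=4, steps=4000, dt=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 0.1, size=(n_nodes, n_colors))
    for _ in range(steps):
        # within each node: recurrent excitation plus shared inhibition (soft WTA)
        drive = 1.2 * x - x.sum(axis=1, keepdims=True) + 0.5
        # constraint inhibition: a color is suppressed if a neighbor expresses it
        for i, j in edges:
            drive[i] -= 0.8 * x[j]
            drive[j] -= 0.8 * x[i]
        noise = rng.normal(0.0, 0.02, x.shape)   # lets the unstable dynamics explore
        x = relu(x + dt * (-x + relu(drive)) + noise)
    return x.argmax(axis=1)

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]         # a 4-cycle
colors = color_graph(edges, n_nodes=4)
print(colors, all(colors[i] != colors[j] for i, j in edges))  # usually True
```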
Neural Computation (2012) 24 (8): 2033–2052. Published: 01 August 2012.
Models of cortical neuronal circuits commonly depend on inhibitory feedback to control gain, provide signal normalization, and selectively amplify signals using winner-take-all (WTA) dynamics. Such models generally assume that excitatory and inhibitory neurons are able to interact easily because their axons and dendrites are colocalized in the same small volume. However, quantitative neuroanatomical studies of the dimensions of axonal and dendritic trees of neurons in the neocortex show that this colocalization assumption is not valid. In this letter, we describe a simple modification to the WTA circuit design that permits the effects of distributed inhibitory neurons to be coupled through synchronization, and so allows a single WTA to be distributed widely in cortical space, well beyond the arborization of any single inhibitory neuron and even across different cortical areas. We prove by nonlinear contraction analysis and demonstrate by simulation that distributed WTA subsystems combined by such inhibitory synchrony are inherently stable. We show analytically that synchronization is substantially faster than winner selection. This circuit mechanism allows networks of independent WTAs to compete fully or partially with one another.
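A minimal rate-based sketch of the idea, with assumed parameters: two WTA pools each have only a local inhibitory unit, and a diffusive (gap-junction-like) coupling between the two inhibitory units, used here as an illustrative stand-in for the letter's synchronization mechanism, makes each of them track the global excitatory sum, so units in the two pools compete as a single WTA.

```python
import numpy as np

relu = lambda v: np.maximum(v, 0.0)
dt, steps, k = 0.02, 8000, 5.0               # k: synchronizing coupling strength
xA, xB = np.zeros(2), np.zeros(2)            # excitatory units in two distant pools
iA = iB = 0.0                                # one local inhibitory unit per pool
inpA = np.array([1.0, 0.4])                  # pool A holds the globally strongest input
inpB = np.array([0.7, 0.3])

for _ in range(steps):
    diA = -iA + 2.0 * xA.sum() + k * (iB - iA)   # sees only its local pool...
    diB = -iB + 2.0 * xB.sum() + k * (iA - iB)   # ...but synchrony averages the pair
    xA += dt * (-xA + relu(1.2 * xA - iA + inpA))
    xB += dt * (-xB + relu(1.2 * xB - iB + inpB))
    iA, iB = iA + dt * diA, iB + dt * diB

print(np.round(xA, 2), np.round(xB, 2))      # only pool A's first unit stays active
```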
Neural Computation (2011) 23 (3): 735–773. Published: 01 March 2011.
The neocortex has a remarkably uniform neuronal organization, suggesting that common principles of processing are employed throughout its extent. In particular, the patterns of connectivity observed in the superficial layers of the visual cortex are consistent with the recurrent excitation and inhibitory feedback required for cooperative-competitive circuits such as the soft winner-take-all (WTA). WTA circuits offer interesting computational properties such as selective amplification, signal restoration, and decision making. But these properties depend on the signal gain derived from positive feedback, and so there is a critical trade-off between providing feedback strong enough to support the sophisticated computations while maintaining overall circuit stability. The issue of stability is all the more intriguing when one considers that the WTAs are expected to be densely distributed through the superficial layers and that they are at least partially interconnected. We consider how to reason about stability in very large distributed networks of such circuits. We approach this problem by approximating the regular cortical architecture as many interconnected cooperative-competitive modules. We demonstrate that by properly understanding the behavior of this small computational module, one can reason about the stability and convergence of very large networks composed of these modules. We obtain parameter ranges in which the WTA circuit operates in a high-gain regime, is stable, and can be aggregated arbitrarily to form large, stable networks. We use nonlinear contraction theory to establish conditions for stability in the fully nonlinear case and verify these solutions using numerical simulations. The derived bounds allow modes of operation in which the WTA network is multistable and exhibits state-dependent persistent activities. Our approach is sufficiently general to reason systematically about the stability of any network, biological or technological, composed of networks of small modules that express competition through shared inhibition.
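A small numerical companion to this reasoning (illustrative, not the paper's contraction-theory derivation): for a module reduced to one active excitatory winner x and one inhibitory unit y, with recurrent excitatory gain alpha and excitatory-inhibitory gains b1, b2, stability of the active partition can be read off the eigenvalues of the linearized Jacobian. For this 2x2 matrix the test reduces to alpha < 2 and alpha < 1 + b1*b2, so the module can sit in a high-gain regime (alpha > 1) and still be stable.

```python
import numpy as np

# Linearized winner partition: d[x, y]/dt = J @ [x, y] + input.
def winner_partition_stable(alpha, b1, b2):
    J = np.array([[alpha - 1.0, -b1],
                  [b2,          -1.0]])
    return bool(np.all(np.linalg.eigvals(J).real < 0))

for alpha in (1.1, 1.5, 2.1):
    for b1, b2 in ((1.0, 1.0), (2.0, 1.5)):
        ok = winner_partition_stable(alpha, b1, b2)
        print(f"alpha={alpha}  b1*b2={b1 * b2}  stable={ok}")
```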
Neural Computation (2010) 22 (6): 1399–1444. Published: 01 June 2010.
We introduce a framework for decision making in which the learning of decision making is reduced to its simplest and biologically most plausible form: Hebbian learning on a linear neuron. We cast our Bayesian Hebb learning rule as reinforcement learning in which certain decisions are rewarded and prove that each synaptic weight will on average converge exponentially fast to the log odds of receiving a reward when its pre- and postsynaptic neurons are active. In our simple architecture, a particular action is selected from the set of candidate actions by a winner-take-all operation. The global reward assigned to this action then modulates the update of each synapse. Apart from this global reward signal, our reward-modulated Bayesian Hebb rule is a pure Hebb update that depends only on the coactivation of the pre- and postsynaptic neurons, not on the weighted sum of all presynaptic inputs to the postsynaptic neuron as in the perceptron learning rule or the Rescorla-Wagner rule. This simple approach to action-selection learning requires that information about sensory inputs be presented to the Bayesian decision stage in a suitably preprocessed form resulting from other adaptive processes (acting on a larger timescale) that detect salient dependencies among input features. Hence our proposed framework for fast learning of decisions also provides interesting new hypotheses regarding neural codes and computational goals of cortical areas that provide input to the final decision stage.
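One simple rule with the stated fixed-point property can be written in a few lines; the exact update form and constants below are assumptions for the demonstration. On coactive trials, a rewarded outcome applies dw = eta*(1 + exp(-w)) and an unrewarded one dw = -eta*(1 + exp(w)); the expected update vanishes exactly at w = log(p / (1 - p)), the log odds of reward given coactivation.

```python
import numpy as np

rng = np.random.default_rng(1)
p_reward, eta, w = 0.8, 0.01, 0.0     # true P(reward | pre & post active)

for _ in range(20000):                # coactive trials only
    if rng.random() < p_reward:       # global reward signal
        w += eta * (1.0 + np.exp(-w))
    else:
        w -= eta * (1.0 + np.exp(w))

# both numbers should come out near log(0.8 / 0.2) = 1.39
print(f"learned w = {w:.2f}, log odds = {np.log(p_reward / (1 - p_reward)):.2f}")
```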
Neural Computation (2009) 21 (2): 478–509. Published: 01 February 2009.
Although conditional branching between possible behavioral states is a hallmark of intelligent behavior, very little is known about the neuronal mechanisms that support this processing. In a step toward solving this problem, we demonstrate by theoretical analysis and simulation how networks of richly interconnected neurons, such as those observed in the superficial layers of the neocortex, can embed reliable, robust finite state machines. We show how a multistable neuronal network containing a number of states can be created very simply by coupling two recurrent networks whose synaptic weights have been configured for soft winner-take-all (sWTA) performance. These two sWTAs have simple, homogeneous, locally recurrent connectivity except for a small fraction of recurrent cross-connections between them, which are used to embed the required states. This coupling between the two sWTAs allows the network to continue to express the current state even after the input that elicited that state is withdrawn. In addition, a small number of transition neurons implement the necessary input-driven transitions between the embedded states. We provide simple rules to systematically design and construct neuronal state machines of this kind. The significance of our finding is that it offers a method whereby the cortex could construct networks supporting a broad range of sophisticated processing by applying only small specializations to the same generic neuronal circuit.
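The following toy two-state machine, with assumed weights rather than the paper's construction rules, shows the flavor of the idea: sparse cross-excitation between two sWTA maps makes the selected state persist after input is removed, and a single transition unit, active only when state 0 coincides with an input symbol, pushes both maps into state 1.

```python
import numpy as np

relu = lambda v: np.maximum(v, 0.0)
B = 0.3                                      # constant background drive (assumed)

def step(m1, m2, t, sym, dt=0.05):
    # two coupled sWTA maps; unit i in each map codes state i
    d1 = -m1 + relu(1.1 * m1 + 0.6 * m2 - m1.sum() + B + np.array([0.0, 1.5]) * t)
    d2 = -m2 + relu(1.1 * m2 + 0.6 * m1 - m2.sum() + B)
    t_new = relu(m1[0] + sym - 1.2)          # needs state 0 AND the symbol together
    return m1 + dt * d1, m2 + dt * d2, t_new

m1, m2, t = np.array([0.5, 0.0]), np.array([0.5, 0.0]), 0.0
for phase, sym in (("hold", 0.0), ("symbol on", 1.0), ("symbol off", 0.0)):
    for _ in range(2000):
        m1, m2, t = step(m1, m2, t, sym)
    print(f"{phase:10s} -> state {m1.argmax()}")
# expected: state 0, then a switch to state 1 that persists once the symbol is gone
```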
Neural Computation (2002) 14 (7): 1669–1689. Published: 01 July 2002.
There is strong anatomical and physiological evidence that neurons with large receptive fields located in higher visual areas are recurrently connected to neurons with smaller receptive fields in lower areas. We have previously described a minimal neuronal network architecture in which top-down attentional signals to large receptive field neurons can bias and selectively read out the bottom-up sensory information to small receptive field neurons (Hahnloser, Douglas, Mahowald, & Hepp, 1999). Here we study an enhanced model, where the role of attention is to recruit specific inter-areal feedback loops (e.g., drive neurons above firing threshold). We first illustrate the operation of recruitment on a simple example of visual stimulus selection. In the subsequent analysis, we find that attentional recruitment operates by dynamical modulation of signal amplification and response multistability. In particular, we find that attentional stimulus selection necessitates increased recruitment when the stimulus to be selected has low contrast and lies only a small distance away from distractor stimuli. The selectability of a low-contrast stimulus depends on the gain of attentional effects; for example, low-contrast stimuli can be selected only when attention enhances neural responses. However, the dependence of attentional selection on stimulus-distractor distance is not contingent on whether attention enhances or suppresses responses. The computational implications of attentional recruitment are that cortical circuits can behave as winner-take-all mechanisms of variable strength and can achieve close to optimal signal discrimination in the presence of external noise.
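A minimal sketch of recruitment under assumed gains and thresholds: a higher-area unit z feeds back to a lower-area unit x but sits below its firing threshold unless a top-down attentional bias lifts it over; once recruited, the x-z loop (loop gain 0.8 * 0.8 < 1, so responses stay bounded) amplifies the response to a weak stimulus.

```python
def respond(stim, attn, steps=5000, dt=0.02):
    # linear threshold units: x in the lower area, z in the higher area
    x = z = 0.0
    for _ in range(steps):
        x += dt * (-x + max(stim + 0.8 * z, 0.0))
        z += dt * (-z + max(0.8 * x + attn - 0.5, 0.0))  # 0.5: z's threshold
    return x

weak = 0.3
print("weak stimulus, no attention:", round(respond(weak, 0.0), 2))  # ~0.30
print("weak stimulus, attention:   ", round(respond(weak, 0.6), 2))  # ~1.06: loop recruited
```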
Neural Computation (1991) 3 (1): 19–30. Published: 01 March 1991.
We have used the morphology derived from single horseradish peroxidase-labeled neurons, known membrane conductance properties, and microanatomy to construct a model neocortical network that exhibits synchronized bursting. The network was composed of interconnected pyramidal (excitatory) neurons with different intrinsic burst frequencies, and smooth (inhibitory) neurons that provided global feedback inhibition to all of the pyramids. When the network was activated by geniculocortical afferents, the burst discharges of the pyramids quickly became synchronized with zero average phase-shift. The synchronization was strongly dependent on global feedback inhibition, which acted to group the coactivated bursts generated by intracortical reexcitation. Our results suggest that the synchronized bursting observed between cortical neurons responding to coherent visual stimuli is a simple consequence of the principles of intracortical connectivity.
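The grouping mechanism can be caricatured with integrate-and-fire-style units; every number below is an assumption rather than the paper's conductance model. Bursts re-excite the other cells within a volley, and each volley recruits global inhibition that shunts all cells back to rest, so subsequent volleys fire together with zero phase shift.

```python
import numpy as np

n, dt, w_exc, g_inh = 5, 0.001, 0.15, 20.0
drive = np.linspace(8.0, 12.0, n)            # different intrinsic burst frequencies
v = np.random.default_rng(3).uniform(0.0, 1.0, n)
inh, volleys = 0.0, []

for step in range(8000):
    v += dt * (drive - v) - dt * g_inh * inh
    np.maximum(v, 0.0, out=v)                # shunting floor at rest
    inh -= dt * inh / 0.02                   # inhibition decays, tau = 20 ms
    fired, volley = v >= 1.0, np.zeros(n, dtype=bool)
    while fired.any():                       # intracortical re-excitation cascade
        volley |= fired
        v[fired] = 0.0
        v[~volley] += w_exc * fired.sum()
        fired = (v >= 1.0) & ~volley
    if volley.any():
        inh += 5.0                           # volley recruits global feedback inhibition
        volleys.append(int(volley.sum()))

# early volleys may be partial; later ones should recruit all 5 cells at once
print(volleys[:3], "...", volleys[-3:])
```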
Neural Computation (1990) 2 (3): 283–292. Published: 01 September 1990.
We examine the effect of inhibition on the axon initial segment (AIS) by the chandelier (“axoaxonic”) cells, using a simplified compartmental model of actual pyramidal neurons from cat visual cortex. We show that within generally accepted ranges, inhibition at the AIS cannot completely prevent action potential discharge: only small amounts of excitatory synaptic current can be inhibited. Moderate amounts of excitatory current always result in action potential discharge, despite AIS inhibition. Inhibition of the soma and dendrites by basket cells enhances the effect of AIS inhibition and vice versa. Thus the axoaxonic cells may act synergistically with basket cells: the AIS inhibition increases the threshold for action potential discharge; the basket cells then control the suprathreshold discharge.
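The core of the argument reduces to conductance arithmetic, sketched below with assumed values rather than the paper's compartmental model: near threshold, shunting inhibition simply adds conductance, so the current needed to reach spike threshold is I_th = V_th * (g_leak + g_ais + g_basket), and AIS inhibition alone raises that threshold only linearly.

```python
V_TH = 10e-3        # 10 mV from rest to spike threshold (assumed)
G_LEAK = 20e-9      # 20 nS resting input conductance (assumed)
G_AIS = 40e-9       # plausible ceiling for chandelier-cell (AIS) conductance
G_BASKET = 40e-9    # basket-cell (somatic) conductance when co-active

def threshold_current(g_ais=0.0, g_basket=0.0):
    return V_TH * (G_LEAK + g_ais + g_basket)

for label, i_th in (("no inhibition", threshold_current()),
                    ("AIS only", threshold_current(G_AIS)),
                    ("AIS + basket", threshold_current(G_AIS, G_BASKET))):
    print(f"{label:14s} threshold = {i_th * 1e9:.2f} nA")
# a moderate 0.7 nA input still fires through maximal AIS inhibition (0.60 nA
# threshold) but is blocked when basket inhibition is co-active (1.00 nA)
```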
Neural Computation (1989) 1 (4): 480–488. Published: 01 December 1989.
We have used microanatomy derived from single neurons and in vivo intracellular recordings to develop a simplified circuit of the visual cortex. The circuit explains the intracellular responses to pulse stimulation in terms of the interactions between three basic populations of neurons, and reveals the following features of cortical processing that are important to computational theories of the neocortex. First, inhibition and excitation are not separable events. Activation of the cortex inevitably sets in motion a sequence of excitation and inhibition in every neuron. Second, the thalamic input does not provide the major excitation arriving at any neuron. Instead, the intracortical excitatory connections provide most of the excitation. Third, the time evolution of excitation and inhibition is far longer than the synaptic delays of the circuits involved. This means that cortical processing cannot rely on precise timing between individual synaptic inputs.
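A toy rate model in the spirit of this circuit (the populations, weights, and time constants below are assumptions, not the paper's fitted values) reproduces the three features qualitatively: a brief thalamic pulse triggers a coupled excitation-inhibition sequence in which intracortical synapses supply most of the excitation and the response far outlasts the input.

```python
def relu(v):
    return max(v, 0.0)

dt, tau_e, tau_i = 0.1, 10.0, 20.0           # ms; inhibition is slower (assumed)
p1 = p2 = h = 0.0                            # superficial, deep, inhibitory rates
log = []

for step in range(3000):                     # 300 ms of simulated time
    thal = 1.0 if step * dt < 10.0 else 0.0  # 10 ms thalamic pulse
    exc = 0.6 * p1 + 0.5 * p2 + 0.2 * thal   # intracortical drive dwarfs thalamic
    p1_new = p1 + dt / tau_e * (-p1 + relu(exc - h))
    p2_new = p2 + dt / tau_e * (-p2 + relu(0.5 * p1 + 0.6 * p2 + 0.1 * thal - h))
    h_new = h + dt / tau_i * (-h + relu(0.7 * (p1 + p2)))
    p1, p2, h = p1_new, p2_new, h_new
    log.append((step * dt, exc, h))

for t_ms in (5, 20, 50, 100, 250):           # excitation leads, inhibition follows,
    t, e, i = log[int(t_ms / dt)]            # both far outlasting the 10 ms input
    print(f"t = {t:5.1f} ms   exc = {e:.3f}   inh = {i:.3f}")
```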