Phil Husbands
1-4 of 4 results
Neural Computation (2022) 34 (3): 686–715.
Published: 17 February 2022
Abstract
A growing body of work has demonstrated the importance of ongoing oscillatory neural activity in sensory processing and the generation of sensorimotor behaviors. It has been shown, for several different brain areas, that sensory-evoked neural oscillations are generated from the modulation by sensory inputs of inherent self-sustained neural activity (SSA). This letter contributes to that strand of research by introducing a methodology to investigate how much of the sensory-evoked oscillatory activity is generated by SSA and how much is generated by sensory inputs within the context of sensorimotor behavior in a computational model. We develop an abstract model consisting of a network of three Kuramoto oscillators controlling the behavior of a simulated agent performing a categorical perception task. The effects of sensory inputs and SSAs on sensory-evoked oscillations are quantified by the cross product of velocity vectors in the phase space of the network under different conditions (disconnected without input, connected without input, and connected with input). We found that while the agent is carrying out the task, sensory-evoked activity is predominantly generated by SSA (93.10%) with much less influence from sensory inputs (6.90%). Furthermore, the influence of sensory inputs can be reduced by 10.4% (from 6.90% to 6.18%) with a decay in the agent's performance of only 2%. A dynamical analysis shows how sensory-evoked oscillations are generated from a dynamic coupling between the level of sensitivity of the network and the intensity of the input signals. This work may suggest interesting directions for neurophysiological experiments investigating how self-sustained neural activity influences sensory input processing, and ultimately affects behavior.
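As a hedged illustration of the kind of model this abstract describes (not the authors' code), the sketch below sets up three Kuramoto phase oscillators and compares their phase velocities under the three conditions mentioned: disconnected without input, connected without input, and connected with input. The natural frequencies, coupling strengths, and input values are assumptions made only for the example; the paper's own quantification compares such velocity vectors via cross products.

```python
# Minimal sketch (illustrative assumptions, not the paper's model parameters):
# a three-node Kuramoto network whose phase velocities are compared across the
# three conditions named in the abstract.
import numpy as np

def kuramoto_step(theta, omega, K, inp, dt=0.01):
    """One Euler step of dtheta_i/dt = omega_i + sum_j K_ij sin(theta_j - theta_i) + inp_i."""
    phase_diff = theta[None, :] - theta[:, None]        # entry [i, j] = theta_j - theta_i
    coupling = (K * np.sin(phase_diff)).sum(axis=1)
    dtheta = omega + coupling + inp                     # phase velocities
    return theta + dt * dtheta, dtheta

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 3)                    # initial phases
omega = np.array([1.0, 1.2, 0.8])                       # natural frequencies (assumed)
K = 0.5 * (np.ones((3, 3)) - np.eye(3))                 # all-to-all coupling (assumed)
sensory = np.array([0.3, 0.0, 0.0])                     # input to one "sensory" node (assumed)

# Phase velocities from the same state under the three conditions:
_, v_disconnected = kuramoto_step(theta, omega, 0 * K, 0 * sensory)
_, v_connected    = kuramoto_step(theta, omega, K,     0 * sensory)
_, v_with_input   = kuramoto_step(theta, omega, K,     sensory)
print(v_disconnected, v_connected, v_with_input)
```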
Neural Computation (2013) 25 (11): 2934–2975.
Published: 01 November 2013
Abstract
The dynamic formation of groups of neurons—neuronal assemblies—is believed to mediate cognitive phenomena at many levels, but their detailed operation and mechanisms of interaction are still to be uncovered. One hypothesis suggests that synchronized oscillations underpin their formation and functioning, with a focus on the temporal structure of neuronal signals. In this context, we investigate neuronal assembly dynamics in two complementary scenarios: the first, a supervised spike pattern classification task, in which noisy variations of a collection of spikes have to be correctly labeled; the second, an unsupervised, minimally cognitive evolutionary robotics task, in which an evolved agent has to cope with multiple, possibly conflicting, objectives. In both cases, the more traditional dynamical analysis of the system's variables is paired with information-theoretic techniques in order to get a broader picture of the ongoing interactions with and within the network. The neural network model is inspired by the Kuramoto model of coupled phase oscillators and allows one to fine-tune the network synchronization dynamics and assembly configuration. The experiments explore the computational power, redundancy, and generalization capability of neuronal circuits, demonstrating that performance depends nonlinearly on the number of assemblies and neurons in the network and showing that the framework can be exploited to generate minimally cognitive behaviors, with dynamic assembly formation accounting for varying degrees of stimuli modulation of the sensorimotor interactions.
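A common way to quantify how strongly a putative assembly is synchronized in Kuramoto-style networks is the order parameter r = |mean(exp(i*theta))| over the phases of its members. The sketch below is only a hedged illustration of that measure; the phase snapshot and assembly labels are made up for the example and are not taken from the paper.

```python
# Minimal sketch: Kuramoto order parameter as a per-assembly synchrony measure.
import numpy as np

def order_parameter(phases):
    """Kuramoto order parameter in [0, 1]; 1 = perfect phase synchrony."""
    return np.abs(np.exp(1j * np.asarray(phases)).mean())

phases = np.array([0.1, 0.15, 0.12, 3.0, 4.5, 5.9])   # hypothetical phase snapshot
assemblies = {"A": [0, 1, 2], "B": [3, 4, 5]}          # hypothetical assembly membership

for name, idx in assemblies.items():
    print(name, round(order_parameter(phases[idx]), 3))
# Assembly A (near-identical phases) scores close to 1; B (widely spread phases) much lower.
```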
Neural Computation (2012) 24 (8): 2185–2222.
Published: 01 August 2012
Abstract
We present a general and fully dynamic neural system, which exploits intrinsic chaotic dynamics, for the real-time goal-directed exploration and learning of the possible locomotion patterns of an articulated robot of arbitrary morphology in an unknown environment. The controller is modeled as a network of neural oscillators that are initially coupled only through physical embodiment, and goal-directed exploration of coordinated motor patterns is achieved by chaotic search using adaptive bifurcation. The phase space of the indirectly coupled neural-body-environment system contains multiple transient or permanent self-organized dynamics, each of which is a candidate for a locomotion behavior. The adaptive bifurcation enables the system orbit to wander through various phase-coordinated states, using its intrinsic chaotic dynamics as a driving force, and stabilizes onto one of the states matching the given goal criteria. In order to improve the sustainability of useful transient patterns, sensory homeostasis has been introduced, which results in an increased diversity of motor outputs, thus achieving multiscale exploration. A rhythmic pattern discovered by this process is memorized and sustained by changing the wiring between initially disconnected oscillators using an adaptive synchronization method. Our results show that the novel neurorobotic system is able to create and learn multiple locomotion behaviors for a wide range of body configurations and physical environments and can readapt in real time after sustaining damage.
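The sketch below is a very rough, hypothetical illustration of the general idea of goal-directed chaotic search with adaptive bifurcation, not the authors' controller: a logistic map stands in for the chaotic neural dynamics (the paper uses coupled neural oscillators embodied in a robot), its output perturbs a single motor parameter, and its bifurcation parameter is raised when performance is poor (more chaotic, more exploration) and lowered as the goal criterion is approached. The performance function, gains, and tolerance are illustrative assumptions only.

```python
# Hypothetical sketch of chaotic search with an adaptively tuned bifurcation parameter.
import numpy as np

def performance_error(p):
    """Assumed goal criterion: error is lowest when the motor parameter p is near 1.0."""
    return 1.0 - np.cos(p - 1.0)          # ranges from 0 (goal met) to 2 (worst)

p = -2.0                                   # motor parameter being explored (assumed start)
x, tol, met = 0.31, 0.02, False            # chaotic driver state, goal tolerance
for step in range(20000):
    e = performance_error(p)
    if e < tol:                            # goal criterion satisfied: stop exploring
        met = True
        break
    r = 3.0 + 0.5 * e                      # adaptive bifurcation: r -> 4 (chaotic) when e is large,
    x = r * x * (1.0 - x)                  #   r -> 3 (ordered) as the goal is approached
    p += 0.8 * e * (x - 0.5)               # chaos-driven, error-scaled exploration step
print(f"goal met: {met}, p = {p:.2f}, error = {performance_error(p):.3f} after {step} steps")
```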
Neural Computation (2010) 22 (8): 2059–2085.
Published: 01 August 2010
Abstract
Rate-coded Hebbian learning, as characterized by the BCM formulation, is an established computational model of synaptic plasticity. Recently it has been demonstrated that changes in the strength of synapses in vivo can also depend explicitly on the relative timing of pre- and postsynaptic firing. Computational modeling of this spike-timing-dependent plasticity (STDP) has demonstrated that it can provide inherent stability or competition based on local synaptic variables. However, it has also been demonstrated that these properties rely on synaptic weights being either depressed or unchanged by an increase in mean stochastic firing rates, which directly contradicts empirical data. Several analytical studies have addressed this apparent dichotomy and identified conditions under which distinct and disparate STDP rules can be reconciled with rate-coded Hebbian learning. The aim of this research is to verify, unify, and expand on these previous findings by manipulating each element of a standard computational STDP model in turn. This allows us to identify the conditions under which this plasticity rule can replicate experimental data obtained using both rate and temporal stimulation protocols in a spiking recurrent neural network. Our results describe how the relative scale of mean synaptic weights and their dependence on stochastic pre- or postsynaptic firing rates can be manipulated by adjusting the exact profile of the asymmetric learning window and temporal restrictions on spike pair interactions respectively. These findings imply that previously disparate models of rate-coded autoassociative learning and temporally coded heteroassociative learning, mediated by symmetric and asymmetric connections respectively, can be implemented in a single network using a single plasticity rule. However, we also demonstrate that forms of STDP that can be reconciled with rate-coded Hebbian learning do not generate inherent synaptic competition, and thus some additional mechanism is required to guarantee long-term input-output selectivity.
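For reference, the sketch below shows the standard pair-based STDP rule that this line of work analyzes, with an asymmetric exponential learning window; the amplitudes and time constants are assumptions, not the paper's values. The balance between the two lobes (roughly A_plus*tau_plus versus A_minus*tau_minus), together with which spike pairs are allowed to interact, is what helps determine whether uncorrelated, rate-driven firing depresses or potentiates weights on average.

```python
# Minimal sketch of pair-based STDP with an asymmetric exponential window
# (parameter values are assumptions chosen for illustration).
import numpy as np

def stdp_dw(dt, A_plus=0.005, A_minus=0.00525, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair; dt = t_post - t_pre in ms."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),     # potentiation: pre leads post
                    -A_minus * np.exp(dt / tau_minus))   # depression: post leads pre

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+4d} ms -> dw = {float(stdp_dw(dt)):+.5f}")
```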