1-7 of 7
Micah M. Murray
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2019) 31 (3): 360–376.
Published: 01 March 2019
Figures: 6
Abstract
Most evidence on the neural and perceptual correlates of sensory processing derives from studies that have focused on only a single sensory modality and averaged the data from groups of participants. Although valuable, such studies ignore the substantial interindividual and intraindividual differences that are undoubtedly at play. Such variability plays an integral role both in the behavioral/perceptual realm and in the neural correlates of these processes, but far less is known about it than about group-averaged responses. Recently, it has been shown that the presentation of stimuli from two or more sensory modalities (i.e., multisensory stimulation) not only results in the well-established performance gains but also gives rise to reductions in behavioral and neural response variability. To better understand the relationship between neural and behavioral response variability under multisensory conditions, this study investigated both behavior and brain activity in a task requiring participants to discriminate moving versus static stimuli presented in either a unisensory or multisensory context. EEG data were analyzed with respect to intraindividual and interindividual differences in RTs. The results showed that trial-by-trial variability of RTs was significantly reduced under audiovisual presentation conditions as compared with visual-only presentations across all participants. Intraindividual variability of RTs was linked to changes in correlated activity between clusters within an occipital-to-frontal network. In addition, interindividual variability of RTs was linked to differential recruitment of medial frontal cortices. The present findings highlight differences in the brain networks that support behavioral benefits during unisensory versus multisensory motion detection and provide an important view into the functional dynamics within neuronal networks underpinning intraindividual performance differences.
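Intraindividual RT variability of the kind this abstract compares across conditions is commonly quantified per participant as the coefficient of variation (SD divided by the mean) of RTs in each condition. A minimal sketch with made-up RT values (illustrative numbers only, not data from the study):

```python
import numpy as np

# Hypothetical single-participant RT samples (msec); values are
# illustrative, not from the study.
rt_visual = np.array([412, 450, 398, 530, 470, 390, 505, 445])
rt_audiovisual = np.array([370, 385, 362, 410, 395, 372, 402, 381])

def coefficient_of_variation(rts):
    """Trial-by-trial RT variability, normalized by the mean RT."""
    return rts.std(ddof=1) / rts.mean()

cv_v = coefficient_of_variation(rt_visual)
cv_av = coefficient_of_variation(rt_audiovisual)
# In the pattern the study reports, cv_av < cv_v: audiovisual
# presentation reduces trial-by-trial variability.
```

Normalizing by the mean matters because multisensory conditions also shorten mean RTs; the coefficient of variation separates a genuine variability reduction from one that merely scales with faster responses.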
Journal of Cognitive Neuroscience (2019) 31 (3): 412–430.
Published: 01 March 2019
Figures: 6
Abstract
In real-world environments, information is typically multisensory, and objects are a primary unit of information processing. Object recognition and action necessitate attentional selection of task-relevant from among task-irrelevant objects. However, the brain and cognitive mechanisms governing these processes remain poorly understood. Here, we demonstrate that attentional selection of visual objects is controlled by integrated top–down audiovisual object representations (“attentional templates”) while revealing a new brain mechanism through which they can operate. In multistimulus (visual) arrays, attentional selection of objects in humans and animal models is traditionally quantified via the “N2pc component”: spatially selective enhancements of neural processing of objects within ventral visual cortices at approximately 150–300 msec poststimulus. In our adaptation of Folk et al.'s [Folk, C. L., Remington, R. W., & Johnston, J. C. Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18, 1030–1044, 1992] spatial cueing paradigm, visual cues elicited weaker behavioral attention capture and an attenuated N2pc during audiovisual versus visual search. To provide direct evidence for the brain, and thus cognitive, mechanisms underlying top–down control in multisensory search, we analyzed global features of the electrical field at the scalp across our N2pc measurements. In the N2pc time window (170–270 msec), color cues elicited brain responses differing in strength and in topography. The latter finding is indicative of changes in the active brain sources. Thus, in multisensory environments, attentional selection is controlled via integrated top–down object representations, and so not only by separate sensory-specific top–down feature templates (as suggested by traditional N2pc analyses). We discuss how the electrical neuroimaging approach can aid research on top–down attentional control in naturalistic, multisensory settings and on other neurocognitive functions in the growing area of real-world neuroscience.
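The "global features of the electrical field" analyzed in electrical neuroimaging include a reference-free strength measure, global field power (GFP): the spatial standard deviation across all electrodes at each time point. A minimal sketch on simulated data (the 64-electrode montage and random values are assumptions for illustration, not the study's recordings):

```python
import numpy as np

# Simulated EEG segment: 64 electrodes x 200 time samples.
# Values are random placeholders, not real data.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(64, 200))

def global_field_power(data):
    """Spatial standard deviation across electrodes at each time point,
    computed on average-referenced data; indexes overall field strength
    independently of where on the scalp the field is strongest."""
    avg_ref = data - data.mean(axis=0, keepdims=True)
    return np.sqrt((avg_ref ** 2).mean(axis=0))

gfp = global_field_power(eeg)  # one strength value per time sample
```

Because GFP collapses the montage to a single strength value per time point, differences in GFP index response magnitude, while separate topographic (map) comparisons, as in the abstract, index changes in the configuration of active sources.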
Journal of Cognitive Neuroscience (2013) 25 (7): 1122–1135.
Published: 01 July 2013
Figures: 4
Abstract
Approaching or looming sounds (L-sounds) have been shown to selectively increase visual cortex excitability [Romei, V., Murray, M. M., Cappe, C., & Thut, G. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Current Biology, 19, 1799–1805, 2009]. These cross-modal effects start at an early, preperceptual stage of sound processing and persist with increasing sound duration. Here, we identified individual factors contributing to cross-modal effects on visual cortex excitability and studied the persistence of effects after sound offset. To this end, we probed the impact of different L-sound velocities on phosphene perception postsound as a function of individual auditory versus visual preference/dominance using single-pulse TMS over the occipital pole. We found that the boosting of phosphene perception by L-sounds continued for several tens of milliseconds after the end of the L-sound and was temporally sensitive to different L-sound profiles (velocities). In addition, we found that this depended on an individual's preferred sensory modality (auditory vs. visual) as determined through a divided attention task (attentional preference), but not on their simple threshold detection level per sensory modality. Whereas individuals with “visual preference” showed enhanced phosphene perception irrespective of L-sound velocity, those with “auditory preference” showed differential peaks in phosphene perception whose delays after sound offset followed the different L-sound velocity profiles. These novel findings suggest that looming signals modulate visual cortex excitability beyond sound duration, possibly to support prompt identification of and reaction to potentially dangerous approaching objects. The observed interindividual differences favor the idea that, unlike early effects, this late L-sound impact on visual cortex excitability is influenced by cross-modal attentional mechanisms rather than low-level sensory processes.
Journal of Cognitive Neuroscience (2012) 24 (6): 1331–1343.
Published: 01 June 2012
Abstract
Optimal behavior relies on flexible adaptation to environmental requirements, notably based on the detection of errors. The impact of error detection on subsequent behavior typically manifests as a slowing down of RTs following errors. Precisely how errors impact the processing of subsequent stimuli and in turn shape behavior remains unresolved. To address these questions, we used an auditory spatial go/no-go task where continual feedback informed participants of whether they were too slow. We contrasted auditory-evoked potentials to left-lateralized go and right no-go stimuli as a function of performance on the preceding go stimuli, generating a 2 × 2 design with “preceding performance” (fast hit [FH], slow hit [SH]) and stimulus type (go, no-go) as within-subject factors. SH trials were more often followed by further SH trials than FH trials were, supporting our assumption that SHs engaged effects similar to errors. Electrophysiologically, auditory-evoked potentials modulated topographically as a function of preceding performance 80–110 msec poststimulus onset and then as a function of stimulus type at 110–140 msec, indicative of changes in the underlying brain networks. Source estimations revealed stronger activity in prefrontal regions to stimuli after successful than after error-like trials, followed by a stronger response of parietal areas to no-go than to go stimuli. We interpret these results in terms of a shift from a fast, automatic to a slow, controlled form of inhibitory control induced by the detection of errors, manifesting during low-level integration of task-relevant features of subsequent stimuli, which in turn influences response speed.
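The post-error slowing mentioned in the opening sentences is conventionally computed as the mean RT on trials following an error minus the mean RT on trials following a correct response. A sketch with invented trial data (in this study the "errors" were slow hits treated as error-like trials; the numbers below are placeholders):

```python
import numpy as np

# Hypothetical trial sequence: RTs (msec) and correctness flags.
# Values are illustrative only.
rts     = np.array([540, 700, 610, 560, 720, 630, 555, 545])
correct = np.array([True, False, True, True, False, True, True, True])

# Post-error slowing (PES): mean RT after an error minus mean RT
# after a correct response. Align each trial (from the 2nd onward)
# with the outcome of the trial immediately before it.
post_error   = rts[1:][~correct[:-1]]
post_correct = rts[1:][correct[:-1]]
pes = post_error.mean() - post_correct.mean()  # positive = slowing
```

A positive PES value is the behavioral signature of the shift toward slower, more controlled responding that the abstract links to error detection.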
Journal of Cognitive Neuroscience (2010) 22 (12): 2850–2863.
Published: 01 December 2010
Figures: 5
Abstract
Multisensory stimuli can improve performance, facilitating RTs on sensorimotor tasks. This benefit is referred to as the redundant signals effect (RSE) and can exceed predictions based on probability summation, indicative of integrative processes. Although an RSE exceeding probability summation has been repeatedly observed in humans and nonprimate animals, there are scant and inconsistent data from nonhuman primates performing similar protocols; existing paradigms have instead focused on saccadic eye movements. Moreover, the extant results in monkeys leave unresolved how stimulus synchronicity and intensity impact performance. Two trained monkeys performed a simple detection task involving arm movements to auditory, visual, or synchronous auditory–visual multisensory pairs. RSEs in excess of predictions based on probability summation were observed and thus necessarily follow from neural response interactions. Parametric variation of auditory stimulus intensity revealed that in both animals, RT facilitation was limited to situations where the auditory stimulus intensity was below or up to 20 dB above perceptual threshold, despite the visual stimulus always being suprathreshold. No RT facilitation, or even behavioral costs, were obtained with auditory intensities 30–40 dB above threshold. The present study demonstrates the feasibility and the suitability of behaving monkeys for investigating links between psychophysical and neurophysiologic instantiations of multisensory interactions.
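Whether an RSE exceeds probability summation is standardly assessed with the race-model inequality, F_AV(t) ≤ F_A(t) + F_V(t): if the multisensory RT distribution exceeds the sum of the unisensory distributions at any latency, the facilitation cannot be produced by a race between independent channels and must reflect neural response interactions. A sketch with invented RT samples (illustrative values, not the monkeys' data):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative RT distribution evaluated at times t."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / rts.size

# Hypothetical RT samples (msec) per condition; illustrative only.
rt_a  = np.array([320, 350, 365, 390, 410, 430, 455, 480])  # auditory
rt_v  = np.array([310, 340, 360, 385, 405, 425, 450, 475])  # visual
rt_av = np.array([250, 270, 285, 300, 315, 330, 350, 370])  # audiovisual

# Race-model bound: F_A(t) + F_V(t), capped at 1.
t_grid = np.arange(240, 500, 10)
bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)

# True wherever the multisensory CDF exceeds the bound,
# i.e., facilitation beyond probability summation.
violation = ecdf(rt_av, t_grid) > bound
```

Violations are typically concentrated at the fast tail of the RT distribution, which is why the inequality is evaluated across a grid of latencies rather than on mean RTs alone.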
Journal of Cognitive Neuroscience (2009) 21 (1): 105–118.
Published: 01 January 2009
Abstract
Using event-related potentials (ERPs), we investigated the neural response associated with preparing to switch from one task to another. We used a cued task-switching paradigm in which the interval between the cue and the imperative stimulus was varied. The difference between response time (RT) on trials on which the task switched and trials on which the task repeated (switch cost) decreased as the interval between cue and target (CTI) was increased, demonstrating that subjects used the CTI to prepare for the forthcoming task. However, RTs on repeated-task trials in blocks during which the task could switch (mixed-task blocks) were never as short as RTs in single-task blocks (mixing cost). This replicates previous research. The ERPs in response to the cue were compared across three conditions: single-task trials, switch trials, and repeat trials. ERP topographic differences were found between single-task trials and mixed-task (switch and repeat) trials at ∼160 and ∼310 msec after the cue, indicative of changes in the underlying neural generator configuration as a basis for the mixing cost. In contrast, there were no topographic differences evident between switch and repeat trials during the CTI. Rather, the response of statistically indistinguishable generator configurations was stronger at ∼310 msec on switch than on repeat trials. By separating differences in ERP topography from differences in response strength, these results suggest that a reappraisal of previous research is appropriate.
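The two behavioral measures in this paradigm are simple RT differences: the switch cost contrasts switch with repeat trials within mixed blocks, and the mixing cost contrasts repeat trials in mixed blocks with single-task trials. A sketch with hypothetical mean RTs (the numbers are placeholders, not the study's results):

```python
# Hypothetical condition means (msec); illustrative values only.
rt_switch = 720.0        # task-switch trials in mixed-task blocks
rt_repeat_mixed = 650.0  # task-repeat trials in mixed-task blocks
rt_single = 560.0        # trials in single-task blocks

# Switch cost: the extra time needed when the task changes.
switch_cost = rt_switch - rt_repeat_mixed

# Mixing cost: the residual slowing on repeat trials merely because
# a switch *could* occur, relative to pure single-task blocks.
mixing_cost = rt_repeat_mixed - rt_single
```

Separating the two costs matters for the abstract's argument: preparation during the CTI reduces the switch cost, while the mixing cost persists and is linked to a different (topographic) ERP signature.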
Journal of Cognitive Neuroscience (2000) 12 (4): 615–621.
Published: 01 July 2000
Abstract
Object recognition is achieved even in circumstances when only partial information is available to the observer. Perceptual closure processes are essential in enabling such recognition to occur. We presented successively less fragmented images while recording high-density event-related potentials (ERPs), which permitted us to monitor brain activity during the perceptual closure processes leading up to object recognition. We reveal a bilateral ERP component (Ncl) that tracks these processes (onset ∼230 msec, maximal at ∼290 msec). Scalp-current density mapping of the Ncl revealed bilateral occipito-temporal scalp foci, which are consistent with generators in the human ventral visual stream, and specifically the lateral-occipital (LO) complex as defined by hemodynamic studies of object recognition.