Rufin VanRullen: 1-8 of 8 results
Journal Articles
Publisher: Journals Gateway
Journal of Cognitive Neuroscience (2024) 36 (4): 721–729.
Published: 01 April 2024
Abstract
Brain oscillations are involved in many cognitive processes, and several studies have investigated their role in cognition. In particular, the phase of certain oscillations has been related to temporal binding and integration processes, with some authors arguing that perception could be an inherently rhythmic process. However, previous research on oscillations mostly overlooked their spatial component: how oscillations propagate through the brain as traveling waves, with systematic phase delays between brain regions. Here, we argue that interpreting oscillations as traveling waves is a useful paradigm shift to understand their role in temporal binding and address controversial results. After a brief definition of traveling waves, we propose an original view on temporal integration that considers this new perspective. We first focus on cortical dynamics, then speculate about the role of thalamic nuclei in modulating the waves and about the possible consequences for rhythmic temporal binding. In conclusion, we highlight the importance of considering oscillations as traveling waves when investigating their role in cognitive functions.
Journal of Cognitive Neuroscience (2016) 28 (9): 1318–1330.
Published: 01 September 2016
Abstract
Prior expectations have a powerful influence on perception, biasing both decision and confidence. However, how this occurs at the neural level remains unclear. It has been suggested that spontaneous alpha-band neural oscillations represent rhythms of the perceptual system that periodically modulate perceptual judgments. We hypothesized that these oscillations instantiate the effects of expectations. While collecting scalp EEG, participants performed a detection task that orthogonally manipulated perceptual expectations and attention. Trial-by-trial retrospective confidence judgments were also collected. Results showed that, independent of attention, prestimulus occipital alpha phase predicted the weighting of expectations on yes/no decisions. Moreover, phase predicted the influence of expectations on confidence. Thus, expectations periodically bias objective and subjective perceptual decision-making together before stimulus onset. Our results suggest that alpha-band neural oscillations periodically transmit prior evidence to visual cortex, changing the baseline from which evidence accumulation begins. In turn, our results inform accounts of how expectations shape early visual processing.
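The core analysis implied here — estimating each trial's prestimulus alpha phase and asking whether it predicts decisions — can be sketched in a few lines. This is a minimal illustration on synthetic single-channel data, not the authors' pipeline: the function names, filter settings, and the simulated phase-behavior link are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def prestim_alpha_phase(eeg, fs, onset_idx, band=(8.0, 12.0)):
    """Alpha-band phase at stimulus onset, per trial.
    eeg: (n_trials, n_samples) single-channel epochs."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=1)  # zero-phase band-pass
    return np.angle(hilbert(filtered, axis=1))[:, onset_idx]

def yes_rate_by_phase(phases, yes, n_bins=6):
    """Proportion of 'yes' decisions in each prestimulus phase bin."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    bins = np.clip(np.digitize(phases, edges) - 1, 0, n_bins - 1)
    return np.array([yes[bins == k].mean() for k in range(n_bins)])

# Synthetic demo: decisions depend sinusoidally on prestimulus phase.
rng = np.random.default_rng(0)
fs, n_trials, n_samples = 250, 400, 250
t = np.arange(n_samples) / fs
phase0 = rng.uniform(-np.pi, np.pi, n_trials)
eeg = np.sin(2 * np.pi * 10 * t[None, :] + phase0[:, None])
eeg += 0.2 * rng.standard_normal((n_trials, n_samples))
onset_phase = prestim_alpha_phase(eeg, fs, onset_idx=n_samples // 2)
yes = (rng.random(n_trials) < 0.5 + 0.3 * np.cos(onset_phase)).astype(float)
rates = yes_rate_by_phase(onset_phase, yes)
print(rates.round(2))  # yes-rate varies systematically across phase bins
```

In the real study the dependent measure would be the weighting of expectations on decisions and confidence rather than the raw yes-rate, but the phase-binning logic is the same.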
Journal of Cognitive Neuroscience (2016) 28 (6): 852–868.
Published: 01 June 2016
Abstract
Learning associations between co-occurring events enables us to extract structure from our environment. Medial-temporal lobe structures are critical for associative learning. However, the role of the ventral visual pathway (VVP) in associative learning is not clear. Do multivoxel object representations in the VVP reflect newly formed associations? We show that VVP multivoxel representations become more similar to each other after human participants learn arbitrary new associations between pairs of unrelated objects (faces, houses, cars, chairs). Participants were scanned before and after 15 days of associative learning. To evaluate how object representations changed, a classifier was trained on discriminating two nonassociated categories (e.g., faces/houses) and tested on discriminating their paired associates (e.g., cars/chairs). Because the associations were arbitrary and counterbalanced across participants, there was initially no particular reason for this cross-classification decision to tend toward either alternative. Nonetheless, after learning, cross-classification performance increased in the VVP (but not hippocampus), on average by 3.3%, with some voxels showing increases of up to 10%. For example, a chair multivoxel representation that initially resembled neither face nor house representations was, after learning, classified as more similar to that of faces for participants who associated chairs with faces and to that of houses for participants who associated chairs with houses. Additionally, learning produced long-lasting perceptual consequences. In a behavioral priming experiment performed several months later, the change in cross-classification performance was correlated with the degree of priming. Thus, VVP multivoxel representations are not static but become more similar to each other after associative learning.
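The cross-classification scheme described above — train a decoder on two non-associated categories, then test it on their paired associates — can be sketched as follows. This is a toy illustration with synthetic "voxel patterns" and scikit-learn; the names, dimensions, and effect size are my assumptions, not the study's data or code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cross_classify(train_X, train_y, test_X, test_y):
    """Train on categories 0/1 (e.g. faces/houses); test patterns are
    their associates (e.g. cars/chairs), labeled by partner category."""
    clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)
    return (clf.predict(test_X) == test_y).mean()

rng = np.random.default_rng(1)
n, d = 80, 50
faces = rng.standard_normal((n, d)) + 1.0   # toy "face" voxel patterns
houses = rng.standard_normal((n, d)) - 1.0  # toy "house" patterns
# After learning, associate patterns drift slightly toward partners:
cars = rng.standard_normal((n, d)) + 0.3    # associated with faces
chairs = rng.standard_normal((n, d)) - 0.3  # associated with houses
X_train = np.vstack([faces, houses])
y_train = np.r_[np.zeros(n), np.ones(n)]
X_test = np.vstack([cars, chairs])
y_test = np.r_[np.zeros(n), np.ones(n)]
acc = cross_classify(X_train, y_train, X_test, y_test)
print(f"cross-classification accuracy: {acc:.2f}")  # above chance (0.5)
```

Before learning, the associate patterns would have no systematic drift toward either partner, so cross-classification would sit at chance — which is the counterbalanced baseline the study exploits.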
Journal of Cognitive Neuroscience (2015) 27 (5): 945–958.
Published: 01 May 2015
Abstract
Visual search—finding a target element among similar-looking distractors—is one of the prevailing experimental methods to study attention. Current theories of visual search postulate an early stage of feature extraction interacting with an attentional process that selects candidate targets for further analysis; in difficult search situations, this selection is iterated until the target is found. Although such theories predict an intrinsic periodicity in the neuronal substrates of attentional search, this prediction has not been extensively tested in human electrophysiology. Here, using EEG and TMS, we study attentional periodicities in visual search. EEG measurements indicated that successful and unsuccessful search trials were associated with different amounts of poststimulus oscillatory amplitude and phase-locking at ∼6 Hz and opposite prestimulus oscillatory phase at ∼6 Hz. A trial-by-trial comparison of pre- and poststimulus ∼6 Hz EEG phases revealed that the functional interplay between prestimulus brain states, poststimulus oscillations, and successful search performance was mediated by a partial phase reset of ongoing oscillations. Independently, TMS applied over occipital cortex at various intervals after search onset demonstrated a periodic pattern of interference at ∼6 Hz. The converging evidence from independent TMS and EEG measurements demonstrates that attentional search is modulated periodically by brain oscillations. This periodicity is naturally compatible with a sequential exploration by attention, although a parallel but rhythmically modulated attention spotlight cannot be entirely ruled out.
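The poststimulus phase-locking referred to above is commonly quantified as inter-trial coherence (ITC): the magnitude of the mean unit phase vector across trials at a given frequency. A minimal sketch on synthetic trials, assuming a single channel and a plain Fourier coefficient at 6 Hz; this is an illustration of the measure, not the authors' implementation.

```python
import numpy as np

def itc(epochs, fs, freq):
    """Inter-trial coherence at `freq`: length of the mean unit phase
    vector across trials. epochs: (n_trials, n_samples)."""
    t = np.arange(epochs.shape[1]) / fs
    coeff = epochs @ np.exp(-2j * np.pi * freq * t)  # per-trial Fourier coefficient
    return np.abs(np.mean(coeff / np.abs(coeff)))    # discard amplitude, keep phase

rng = np.random.default_rng(2)
fs, n_trials, n_samples = 250, 200, 250
t = np.arange(n_samples) / fs
# Phase-locked trials (same 6 Hz phase every trial) vs random-phase trials
locked = np.cos(2 * np.pi * 6 * t)[None, :] \
    + 0.5 * rng.standard_normal((n_trials, n_samples))
jittered = np.cos(2 * np.pi * 6 * t[None, :]
                  + rng.uniform(0, 2 * np.pi, (n_trials, 1))) \
    + 0.5 * rng.standard_normal((n_trials, n_samples))
itc_locked = itc(locked, fs, 6.0)
itc_jittered = itc(jittered, fs, 6.0)
print(round(itc_locked, 2), round(itc_jittered, 2))  # near 1 vs near 0
```

ITC near 1 indicates consistent phase across trials (e.g. after a phase reset), while random phases give ITC around 1/sqrt(n_trials).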
Journal of Cognitive Neuroscience (2014) 26 (10): 2370–2384.
Published: 01 October 2014
Abstract
Objects occupy space. How does the brain represent the spatial location of objects? Retinotopic early visual cortex has precise location information but can only segment simple objects. On the other hand, higher visual areas can resolve complex objects but only have coarse location information. Thus coarse location of complex objects might be represented by either (a) feedback from higher areas to early retinotopic areas or (b) coarse position encoding in higher areas. We tested these alternatives by presenting various kinds of first- (edge-defined) and second-order (texture) objects. We applied multivariate classifiers to the pattern of EEG amplitudes across the scalp at a range of time points to trace the temporal dynamics of coarse location representation. For edge-defined objects, peak classification performance was high and early and thus attributable to the retinotopic layout of early visual cortex. For texture objects, it was low and late. Crucially, despite these differences in peak performance and timing, training a classifier on one object and testing it on others revealed that the topography at peak performance was the same for both first- and second-order objects. That is, the same location information, encoded by early visual areas, was available for both edge-defined and texture objects at different time points. These results indicate that locations of complex objects such as textures, although not represented in the bottom–up sweep, are encoded later by neural patterns resembling the bottom–up ones. We conclude that feedback mechanisms play an important role in coarse location representation of complex objects.
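The train-on-one-object-type, test-on-another decoding logic described above can be sketched as a temporal-generalization analysis: fit a classifier at one condition's peak latency and test it at every latency of the other condition. Synthetic data; the function names and the simulated early/late latencies are illustrative assumptions, not the study's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cross_time_decode(X_train, y_train, t_train, X_test, y_test):
    """Train at one latency, test at every latency of another condition.
    X arrays: (n_trials, n_channels, n_times)."""
    clf = LogisticRegression(max_iter=1000).fit(X_train[:, :, t_train], y_train)
    return np.array([clf.score(X_test[:, :, t], y_test)
                     for t in range(X_test.shape[2])])

rng = np.random.default_rng(3)
n, ch, nt = 60, 32, 10
topo = rng.standard_normal(ch)                 # shared "location" topography
X_edge = rng.standard_normal((2 * n, ch, nt))  # edge-defined objects
X_tex = rng.standard_normal((2 * n, ch, nt))   # texture objects
y = np.r_[np.zeros(n), np.ones(n)]
# Same topography, appearing early for edges and late for textures
X_edge[:n, :, 2:5] += topo[None, :, None]
X_edge[n:, :, 2:5] -= topo[None, :, None]
X_tex[:n, :, 6:9] += topo[None, :, None]
X_tex[n:, :, 6:9] -= topo[None, :, None]
acc = cross_time_decode(X_edge, y, t_train=3, X_test=X_tex, y_test=y)
print(acc.round(2))  # near chance early, above chance at the later latency
```

A classifier trained at the edge-defined peak generalizes to texture objects only at their later latency, mirroring the abstract's finding that the same location topography recurs at different time points.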
Journal of Cognitive Neuroscience (2004) 16 (1): 4–14.
Published: 01 January 2004
Abstract
Most theories of visual processing assume that a target will “pop out” from an array of distractors (“parallel” visual search, e.g., color or orientation discrimination) if targets and distractors can be discriminated without attention. When the discrimination requires attention (e.g., rotated L vs. T or red-green vs. green-red bisected disks), “serial” examination is needed in visual search. Attentional requirements are also frequently assessed by measuring interference from a concurrently performed attentionally demanding task. It is commonly believed that attention acts equivalently in dual-task and visual search paradigms, based on the implicit assumption that visual attentional requirements can be defined along a single dimension. Here we show that there is no such equivalence: We report on targets that do not trigger pop-out, even though they can be discriminated from distractors with attention occupied elsewhere (natural scenes, color-orientation conjunctions); conversely, we show that certain targets that pop out among distractors need undivided attention to be effectively discriminated from distractors when presented in isolation (rotated L vs. +, depth-rotated cubes). In other words, visual search and dual-task performance reveal attentional resources along two independent dimensions. We suggest an interpretation of these results in terms of neuronal selectivities and receptive field size effects.
Journal of Cognitive Neuroscience (2003) 15 (2): 209–217.
Published: 15 February 2003
Abstract
The ventral visual pathway implements object recognition and categorization in a hierarchy of processing areas with neuronal selectivities of increasing complexity. The presence of massive feedback connections within this hierarchy raises the possibility that normal visual processing relies on the use of computational loops. It is not known, however, whether object recognition can be performed at all without such loops (i.e., in a purely feed-forward mode). By analyzing the time course of reaction times in a masked natural scene categorization paradigm, we show that the human visual system can generate selective motor responses based on a single feed-forward pass. We confirm these results using a more constrained letter discrimination task, in which the rapid succession of a target and mask is actually perceived as a distractor. We show that a masked stimulus presented for only 26 msec—and often not consciously perceived—can fully determine the earliest selective motor responses: The neural representations of the stimulus and mask are thus kept separated during a short period corresponding to the feed-forward “sweep.” Therefore, feedback loops do not appear to be “mandatory” for visual processing. Rather, we found that such loops allow the masked stimulus to reverberate in the visual system and affect behavior for nearly 150 msec after the feed-forward sweep.
Journal of Cognitive Neuroscience (2001) 13 (4): 454–461.
Published: 15 May 2001
Abstract
View article
PDF
Experiments investigating the mechanisms involved in visual processing often fail to separate low-level encoding mechanisms from higher-level behaviorally relevant ones. Using an alternating dual-task event-related potential (ERP) experimental paradigm (animals or vehicles categorization) where targets of one task are intermixed among distractors of the other, we show that visual categorization of a natural scene involves different mechanisms with different time courses: a perceptual, task-independent mechanism, followed by a task-related, category-independent process. Although average ERP responses reflect the visual category of the stimulus shortly after visual processing has begun (e.g. 75-80 msec), this difference is not correlated with the subject's behavior until 150 msec poststimulus.