Bruno Rossion
1–11 of 11 journal articles
Journal of Cognitive Neuroscience (2021) 33 (11): 2372–2393.
Published: 01 October 2021
Abstract
In the approach of frequency tagging, stimuli that are presented periodically generate periodic responses of the brain. Following a transformation into the frequency domain, the brain's response is often evident at the frequency of stimulation, F, and its higher harmonics (2F, 3F, etc.). This approach is increasingly used in neuroscience, as it affords objective measures to characterize brain function. However, whether these specific harmonic frequency responses should be combined for analysis, and if so, how, remains an outstanding issue. In most studies, higher harmonic responses have not been described or were described only individually; in other studies, harmonics have been combined with various approaches, for example, averaging and root-mean-square summation. A rationale for these approaches in the context of frequency-based analysis principles, and an understanding of how they relate to the brain's response amplitudes in the time domain, have been missing. Here, with these elements addressed, the summation of (baseline-corrected) harmonic amplitudes is recommended.
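As a concrete illustration of the recommended analysis, here is a minimal sketch (not the authors' pipeline): compute an amplitude spectrum, estimate the local noise level around each harmonic from surrounding frequency bins, and sum the baseline-corrected amplitudes at F, 2F, and so on. The function name, the default of 4 harmonics, and the 10-bins-per-side baseline window are illustrative assumptions.

import numpy as np

def summed_harmonic_amplitude(signal, srate, f_stim, n_harmonics=4,
                              n_neighbors=10, skip=1):
    """Sum of baseline-corrected amplitudes at f_stim and its harmonics.

    signal      : 1-D array, single-channel EEG
    srate       : sampling rate in Hz
    f_stim      : stimulation frequency F in Hz
    n_harmonics : number of harmonics (F, 2F, ...) to include
    n_neighbors : bins on each side used to estimate the noise baseline
    skip        : bins immediately adjacent to the target bin to exclude
    """
    n = len(signal)
    amp = np.abs(np.fft.rfft(signal)) * 2.0 / n      # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / srate)

    total = 0.0
    for h in range(1, n_harmonics + 1):
        target = int(np.argmin(np.abs(freqs - h * f_stim)))  # bin closest to h*F
        lo = amp[target - skip - n_neighbors : target - skip]
        hi = amp[target + skip + 1 : target + skip + 1 + n_neighbors]
        baseline = np.mean(np.concatenate([lo, hi]))         # local noise estimate
        total += amp[target] - baseline                      # baseline-corrected amplitude
    return total

In practice this quantity would be computed per channel and condition; the point of the sketch is only that baseline correction and summation operate on amplitudes in the frequency domain.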
Journal of Cognitive Neuroscience (2018) 30 (4): 449–467.
Published: 01 April 2018
Abstract
Human adults have rich visual experience with human faces from birth, which may contribute to the acquisition of perceptual processes that rapidly and automatically individuate faces. According to a generic visual expertise hypothesis, extensive experience with nonface objects may similarly lead to efficient processing of objects at the individual level. However, whether extensive training in adulthood leads to visual expertise remains debated. One key issue is the extent to which the acquisition of visual expertise depends on the resemblance of objects to faces in terms of the spatial configuration of parts. We therefore trained naive human adults to individuate a large set of novel parametric multipart objects. Critically, one group of participants trained with the objects in a “facelike” stimulus orientation, whereas a second group trained with the same objects rotated 180° in the picture plane into a “nonfacelike” orientation. We used a fast periodic visual stimulation EEG protocol to objectively quantify participants' ability to discriminate untrained exemplars before and after training. EEG responses associated with the frequency of identity change in a fast stimulation sequence, which reflects rapid and automatic perceptual processes, were observed over lateral occipital sites for both groups before training. There was a significant, albeit small, increase in these responses after training but only for the facelike group and only to facelike stimuli. Our findings indicate that perceived facelikeness plays a role in visual expertise and highlight how the adult perceptual system exploits familiar spatial configurations when learning new object categories.
Journal of Cognitive Neuroscience (2018) 30 (3): 393–410.
Published: 01 March 2018
Abstract
In daily life, efficient perceptual categorization of faces occurs in dynamic and highly complex visual environments. Yet the role of selective attention in guiding face categorization has predominantly been studied under sparse and static viewing conditions, with little focus on disentangling the impact of attentional enhancement and suppression. Here we show that attentional enhancement and suppression exert a differential impact on face categorization supported by the left and right hemispheres. We recorded 128-channel EEG while participants viewed a 6-Hz stream of object images (buildings, animals, objects, etc.) with a face image embedded as every fifth image (i.e., OOOOFOOOOFOOOOF…). We isolated face-selective activity by measuring the response at the face presentation frequency (i.e., 6 Hz/5 = 1.2 Hz) under three conditions: Attend Faces, in which participants monitored the sequence for instances of female faces; Attend Objects, in which they responded to instances of guitars; and Baseline, in which they performed an orthogonal task on the central fixation cross. During the orthogonal task, face-specific activity was predominantly centered over the right occipitotemporal region. Actively attending to faces enhanced face-selective activity much more evidently in the left hemisphere than in the right, whereas attending to objects suppressed the face-selective response in both hemispheres to a comparable extent. In addition, the time courses of attentional enhancement and suppression did not overlap. These results suggest that the left and right hemispheres support face-selective processing in distinct ways: the right hemisphere is mandatorily engaged by faces, whereas the left hemisphere is more flexibly recruited to serve current task demands.
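The timing arithmetic of this periodic oddball design is simple enough to show in a few lines. The sketch below (variable and function names are mine, not from the paper) builds an OOOOF… sequence of labels and derives the face presentation frequency of 6 Hz / 5 = 1.2 Hz at which face-selective activity is measured.

import random

BASE_RATE_HZ = 6.0        # image presentation frequency
ODDBALL_EVERY = 5         # a face appears as every 5th image
FACE_RATE_HZ = BASE_RATE_HZ / ODDBALL_EVERY   # 6 / 5 = 1.2 Hz

def build_sequence(n_images, base_categories, faces, seed=0):
    """Return an OOOOFOOOOF... list of stimulus labels."""
    rng = random.Random(seed)
    seq = []
    for i in range(1, n_images + 1):
        if i % ODDBALL_EVERY == 0:
            seq.append(rng.choice(faces))             # every 5th item is a face
        else:
            seq.append(rng.choice(base_categories))   # buildings, animals, objects, ...
    return seq

print(FACE_RATE_HZ)                                   # 1.2
print(build_sequence(10, ["building", "animal", "object"], ["face"]))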
Journal of Cognitive Neuroscience (2017) 29 (8): 1368–1377.
Published: 01 August 2017
Abstract
A growing body of literature suggests that human individuals differ in their ability to process face identity. These findings mainly stem from explicit behavioral tasks, such as the Cambridge Face Memory Test (CFMT). However, it remains an open question whether such individual differences can be found in the absence of an explicit face identity task and when faces have to be individualized at a single glance. In the current study, we tested 49 participants with a recently developed fast periodic visual stimulation (FPVS) paradigm [Liu-Shuang, J., Norcia, A. M., & Rossion, B. An objective index of individual face discrimination in the right occipitotemporal cortex by means of fast periodic oddball stimulation. Neuropsychologia, 52, 57–72, 2014] in EEG to rapidly, objectively, and implicitly quantify face identity processing. In the FPVS paradigm, one face identity (A) was presented at a frequency of 6 Hz, allowing only one gaze fixation per face, with different face identities (B, C, D) presented as every fifth face (1.2 Hz; i.e., AAAABAAAACAAAAD…). Results showed a face individuation response at 1.2 Hz and its harmonics, peaking over occipitotemporal locations. The magnitude of this response showed high reliability across different recording sequences and was significant in all but two participants, with the magnitude and lateralization differing widely across participants. There was a modest but significant correlation between the individuation response amplitude and performance on the behavioral CFMT, despite the fact that the CFMT and FPVS measure different aspects of face identity processing. Taken together, the current study highlights the FPVS approach as a promising means for studying individual differences in face identity processing.
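One common way to decide, for each participant, whether the 1.2 Hz individuation response stands out from noise is to z-score the target frequency bin against neighbouring bins of the amplitude spectrum. The sketch below illustrates that general approach only; the exact statistics and parameters used in the study may differ, and all names are assumptions.

import numpy as np

def oddball_zscore(amp_spectrum, freqs, f_target=1.2, n_neighbors=10, skip=1):
    """Z-score of the amplitude at f_target relative to neighbouring frequency bins."""
    target = int(np.argmin(np.abs(freqs - f_target)))
    neighbors = np.concatenate([
        amp_spectrum[target - skip - n_neighbors : target - skip],
        amp_spectrum[target + skip + 1 : target + skip + 1 + n_neighbors],
    ])
    # Treat the neighbouring bins as an estimate of the noise distribution.
    return (amp_spectrum[target] - neighbors.mean()) / neighbors.std(ddof=1)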
Journal of Cognitive Neuroscience (2014) 26 (1): 81–95.
Published: 01 January 2014
Abstract
Recognizing a familiar face rapidly is a fundamental human brain function. Here we used scalp EEG to determine the minimal time needed to classify a face as personally familiar or unfamiliar. Go (familiar) and no-go (unfamiliar) responses elicited clear differential waveforms from 210 msec onward, with this difference first observed at right occipito-temporal electrode sites. Similar but delayed (by about 40 msec) responses were observed in a second group of participants when go responses were required to unfamiliar rather than familiar faces. In both groups, a small increase in amplitude was also observed on the right-hemisphere N170 face-sensitive component for familiar faces. However, unlike the post-200 msec differential go/no-go effect, this effect was unrelated to behavior and disappeared with repetition of unfamiliar faces. These observations indicate that accumulation of evidence within the first 200 msec poststimulus onset is sufficient for the human brain to decide whether a person is familiar based on his or her face, a time frame that puts strong constraints on the time course of face processing.
Journal of Cognitive Neuroscience (2010) 22 (3): 526–542.
Published: 01 March 2010
Abstract
One remarkable aspect of the human motor repertoire is the multitude of bimanual actions it contains. Still, the neural correlates of coordinated movements, in which the two hands share a common goal, remain debated. To address this issue, we designed two bimanual circling tasks that differed only in terms of goal conceptualization: a “coordination” task that required movements of both hands to adapt to each other to reach a common goal and an “independent” task that imposed a separate goal to each hand. fMRI allowed us to pinpoint three areas located in the right hemisphere that were more strongly activated in the coordination condition: the superior temporal gyrus (STG), the SMA, and the primary motor cortex (M1). We then used transcranial magnetic stimulation (TMS) to disrupt transiently the function of those three regions to determine their causal role in bimanual coordination. Right STG virtual lesions impaired bimanual coordination, whereas TMS to right M1 enhanced hand independence. TMS over SMA, left STG, or left M1 had no effect. The present study provides direct insight into the neural correlates of coordinated bimanual movements and highlights the role of right STG in such bimanual movements.
Journal of Cognitive Neuroscience (2008) 20 (7): 1283–1299.
Published: 01 July 2008
Abstract
Adults can decide rapidly if a string of letters is a word or not. However, the exact time course of this discrimination is still an open question. Here we sought to track the time course of this discrimination and to determine how orthographic information (letter position and letter identity) is computed during reading. We used a go/no-go lexical decision task while recording event-related potentials (ERPs). Subjects were presented with single words (go trials) and pseudowords (no-go trials) that varied orthographically in a 2 × 2 factorial design: the pseudowords contained a double consonant that is either frequently doubled (i.e., “ss”) or never doubled (i.e., “zz”) (identity factor), in a position within the string that was either legal or illegal (position factor). Words and pseudowords clearly differed as early as 230 msec. At this latency, ERP waveforms were modulated both by the identity and by the position of letters: the fronto-central no-go N2 was smallest in amplitude and peaked earliest for pseudowords presenting both an illegal double-letter position and a double-letter identity never encountered. At this stage, the two factors showed additive effects, suggesting independent coding. The factors of identity and position of double letters interacted much later in the process, at the P3 level, around 300–400 msec over frontal and central sites, in line with the lexical decision data obtained in the behavioral study. Overall, these results show that the speed of lexical decision may depend on orthographic information coded independently by the identity and position of letters in a word.
Journal of Cognitive Neuroscience (2007) 19 (3): 543–555.
Published: 01 March 2007
Abstract
The degree of commonality between the perceptual mechanisms involved in processing faces and objects of expertise is intensely debated. To clarify this issue, we recorded occipito-temporal event-related potentials in response to faces when concurrently processing visual objects of expertise. In car experts fixating pictures of cars, we observed a large decrease of an evoked potential elicited by face stimuli between 130 and 200 msec, the N170. This sensory suppression was much lower when the car and face stimuli were separated by a 200-msec blank interval. With and without this delay, there was a strong correlation between the face-evoked N170 amplitude decrease and the subject's level of car expertise as measured in an independent behavioral task. Together, these results show that neural representations of faces and nonface objects in a domain of expertise compete for visual processes in the occipito-temporal cortex as early as 130–200 msec following stimulus onset.
Journal of Cognitive Neuroscience (2005) 17 (10): 1652–1666.
Published: 01 October 2005
Abstract
One of the most impressive disorders following brain damage to the ventral occipitotemporal cortex is prosopagnosia, or the inability to recognize faces. Although acquired prosopagnosia with preserved general visual and memory functions is rare, several cases have been described in the neuropsychological literature and studied at the functional and neural level over the last decades. Here we tested a brain-damaged patient (PS) presenting a deficit restricted to the category of faces to clarify the nature of the missing and preserved components of the face processing system when it is selectively damaged. After PS learned to identify 10 neutral and happy faces through extensive training, we investigated her recognition of faces using Bubbles, a response classification technique that samples facial information across the face in different bandwidths of spatial frequencies [Gosselin, F., & Schyns, P. G. Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41, 2261–2271, 2001]. Although PS gradually used less information (i.e., fewer bubbles) to identify faces over testing, the total information required was much larger than for normal controls and decreased less steeply with practice. Most importantly, the facial information used to identify individual faces differed between PS and controls. Specifically, in marked contrast to controls, PS did not use the optimal eye information to identify familiar faces, but instead used the lower part of the face, including the mouth and the external contours, as normal observers typically do when processing unfamiliar faces. Together, the findings reported here suggest that damage to the face processing system is characterized by an inability to use the information that is optimal to judge identity, focusing instead on suboptimal information.
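For readers unfamiliar with Bubbles, the sketch below conveys the gist of the technique cited above (Gosselin & Schyns, 2001): the face image is decomposed into spatial-frequency bands, each band is revealed only through randomly placed Gaussian apertures ("bubbles") whose size scales with the band, and the sampled bands are recombined. All parameter values (number of bands, bubble counts, aperture sizes) are illustrative assumptions, not those used in the testing of PS.

import numpy as np
from scipy.ndimage import gaussian_filter

def bubbles_stimulus(image, n_bands=5, bubbles_per_band=(80, 40, 20, 10, 5),
                     sigmas=(1.5, 3, 6, 12, 24), seed=None):
    """Return a Bubbles-sampled version of a 2-D grayscale face image."""
    rng = np.random.default_rng(seed)
    image = np.asarray(image, dtype=float)
    h, w = image.shape

    # Coarse band-pass decomposition: differences of progressively blurred copies,
    # from fine (bands[0]) to coarse, plus a low-pass residual.
    blurs = [image] + [gaussian_filter(image, 2.0 ** k) for k in range(1, n_bands)]
    bands = [blurs[k] - blurs[k + 1] for k in range(n_bands - 1)] + [blurs[-1]]

    yy, xx = np.mgrid[0:h, 0:w]
    out = np.zeros((h, w))
    for band, n_bub, sigma in zip(bands, bubbles_per_band, sigmas):
        mask = np.zeros((h, w))
        for _ in range(n_bub):                        # random Gaussian apertures
            cy, cx = rng.integers(0, h), rng.integers(0, w)
            mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))
        out += band * np.clip(mask, 0.0, 1.0)         # reveal this band only under bubbles
    return out

Finer bands are paired with smaller, more numerous apertures so that each band exposes a comparable amount of the image on every trial, which is the logic that lets the technique map which facial information drives correct identification.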
Journal of Cognitive Neuroscience (2001) 13 (7): 1019–1034.
Published: 01 October 2001
Abstract
Where and how does the brain discriminate familiar and unfamiliar faces? This question has not yet been answered by neuroimaging studies, partly because different tasks were performed on familiar and unfamiliar faces or because familiar faces were associated with semantic and lexical information. Here, eight subjects were trained for 3 days with a set of 30 faces. The familiarized faces were then morphed with unfamiliar faces. Presented with continua of unfamiliar and familiar faces in a pilot experiment, a group of eight subjects showed categorical perception of face familiarity: there was a sharp boundary in the percentage of familiarity decisions between the 40% and 60% faces. In the main experiment, subjects were scanned (PET) on the fourth day (after 3 days of training) in six conditions, all requiring a sex classification task. Completely novel faces (0%) were presented in Condition 1 and familiar faces (100%) in Condition 6, while faces at steps of 20% along the continuum of familiarity were presented in Conditions 2 to 5 (20% to 80%). A principal component analysis (PCA) indicated that most variations in neural responses were related to the dissociation between faces perceived as familiar (60% to 100%) and faces perceived as unfamiliar (0% to 40%). Subtraction analyses did not disclose any increase of activation for faces perceived as familiar, whereas there were large relative increases for faces perceived as unfamiliar in several regions of the right occipito-temporal visual pathway. These changes were all categorical and were observed mainly in the right middle occipital gyrus, the right posterior fusiform gyrus, and the right inferotemporal cortex. These results show that (1) the discrimination between familiar and unfamiliar faces is related to relative increases in the right ventral pathway to unfamiliar/novel faces; (2) familiar and unfamiliar faces are discriminated in an all-or-none fashion rather than in proportion to their resemblance to stored representations; and (3) categorical perception of faces is associated with abrupt changes of brain activity in the regions that discriminate the two extremes of the multidimensional continuum.
Journal of Cognitive Neuroscience (2000) 12 (5): 793–802.
Published: 01 September 2000
Abstract
Behavioral studies indicate a right hemisphere advantage for processing a face as a whole and a left hemisphere superiority for processing based on face features. The present PET study identifies the anatomical localization of these effects in well-defined regions of the middle fusiform gyri of both hemispheres. The right middle fusiform gyrus, previously described as a face-specific region, was found to be more activated when matching whole faces than face parts whereas this pattern of activity was reversed in the left homologous region. These lateralized differences appeared to be specific to faces since control objects processed either as wholes or parts did not induce any change of activity within these regions. This double dissociation between two modes of face processing brings new evidence regarding the lateralized localization of face individualization mechanisms in the human brain.