Human faces and bodies convey various socially important signals. Although adults encounter numerous new people in daily life, they can recognize hundreds to thousands of different individuals. However, the neural mechanisms that differentiate one person from another are unclear. This study aimed to clarify the temporal dynamics of the cognitive processes underlying face and body personal identification using face-sensitive ERP components (P1, N170, and N250). Participants completed three blocks (face–face, face–body, and body–body) of an ERP adaptation paradigm. Within each block, ERP components were compared across three conditions (same person, different person of the same sex, and different person of the opposite sex). The results showed that the P1 amplitude for the face–face block was significantly greater than that for the body–body block, that the N170 amplitude for the different person of the same sex condition was greater than that for the same person condition in the right hemisphere only, and that the N250 amplitude gradually increased as the degree of face and body sex–social categorization grew closer (i.e., same person condition > different person of the same sex condition > different person of the opposite sex condition). These results suggest that early processing handles the face and body separately, whereas structural encoding and personal identification process the face and body collaboratively.

Face and Body Combined Into a Coherent Whole-person Identity Representation

Human faces convey various socially important signals, including information related to identity, sex, emotional state, age, and race. Although adults encounter numerous new people and faces in daily life, they can recognize hundreds to thousands of different individual faces (Jenkins, Dowsett, & Burton, 2018; Maurer, Le Grand, & Mondloch, 2002; Bruce & Valentine, 1985). In addition, adults have the capacity to identify a specific face from among numerous different faces (Haxby, Hoffman, & Gobbini, 2000). This facial personal identification is important for social communication and for developing good personal relationships (Haxby et al., 2000). However, the neural mechanisms of temporal processing in the human brain that differentiate one face from another remain unclear (Nemrodov, Niemeier, Patel, & Nestor, 2018; Zheng, Mondloch, & Segalowitz, 2012; Zheng, Mondloch, Nishimura, Vida, & Segalowitz, 2011). Moreover, perception studies typically focus on either face or body perception, with limited attention to integrated whole-person perception (Hu, Baragchizadeh, & O'Toole, 2020). In terms of temporal processing, it remains unclear how the face and body are combined into a coherent person identity representation (Foster et al., 2021). In this introduction, we briefly review face and body perception, as commonly addressed in prior studies, before outlining the study's aim of investigating whole-person perception.

Time Course of Face Perception Neural Processing and ERP Components in Personal Identification

fMRI studies (Puce, Allison, Asgari, Gore, & McCarthy, 1996; Puce, Allison, Gore, & McCarthy, 1995) have shown that the fusiform gyrus of the human brain is active during face perception processing. Specifically, the fusiform gyrus has been reported to respond more strongly to faces than to nonface objects (Kanwisher, McDermott, & Chun, 1997; Puce et al., 1995). However, although the face versus nonface distinction is clear (Kanwisher et al., 1997; Puce et al., 1995), the neural mechanisms underlying the perception and recognition of a specific individual face remain unclear (Nemrodov et al., 2018; Zheng et al., 2011, 2012).

In terms of neuroimaging techniques, ERPs impose less of a mental and physiological load on participants than fMRI; therefore, ERPs are frequently employed to clarify the neural mechanisms related to face perception processing. fMRI has a high spatial resolution (Cichy & Oliva, 2020; Huster, Debener, Eichele, & Herrmann, 2012) that can identify brain areas related to face perception processing (Haxby et al., 2000; Kanwisher et al., 1997), whereas ERPs have a high temporal resolution (Cichy & Oliva, 2020; Huster et al., 2012) that can examine temporal processing at shorter intervals (Rossion & Caharel, 2011; Bentin, Allison, Puce, Perez, & McCarthy, 1996). Therefore, this study addressed the temporal processing of face personal identification using ERPs. Previous ERP studies reported that, in order of emergence, the P1 (early face processing), N170 (structural encoding of the face), and N250 (face personal identification) components reflect the temporal processing of facial perception in the brain (Rubianes et al., 2021; Schweinberger & Neumann, 2016; Zheng et al., 2011, 2012).

The face-sensitive P1, a sharp positive ERP component, appears approximately 100 msec after face stimulus presentation (Rossion & Caharel, 2011) and has a medial or lateral occipital scalp distribution (Tanaka, 2016). The P1 component reflects the perception of low-level visual cues (such as color) in early face processing in the brain (Rossion & Caharel, 2011). Furthermore, when participants were presented with caricatures of human faces, cars, insects, and real human faces, the P1 amplitude was significantly greater for caricatures of human faces than for insects (Nihei, Minami, & Nakauchi, 2018). This finding suggested that the P1 amplitude reflects judgments about whether a visual object is human-face-like and occurs relatively early in face processing (Nihei et al., 2018). However, although some studies have shown differences in the P1 component between face and nonface stimuli (Tanaka, 2020; Rossion & Caharel, 2011), Tanaka (2018a) found no significant difference in P1 amplitude between upright faces and two- and three-dimensional objects. Therefore, it remains unclear whether the P1 component is sensitive to faces. This discrepancy is generally attributed to the differing properties of the faces and nonface stimuli used across studies (Tanaka, 2018a, 2020; Rossion & Caharel, 2011). In addition, fMRI studies have estimated that the neural generator of the P1 component lies in the occipital face area (OFA) of the brain (Pitcher, Walsh, & Duchaine, 2011; Sadeh, Podlipsky, Zhdanov, & Yovel, 2010), which is located in the inferior occipital gyrus (Zimmermann, Stratil, Thome, Sommer, & Jansen, 2019).

After the P1 component emerges, the face-sensitive N170, a sharp negative ERP component, peaks approximately 170 msec after the presentation of a human face and has a posterior temporal scalp distribution (Tanaka, 2018a, 2018b, 2020; Bentin et al., 1996). The face-sensitive N170 has been associated with early high-level face perception processing and structural encoding of the face (Eimer, 2000; Bentin et al., 1996). According to Eimer (2000, 2011), structural encoding of the face involves both domain-specific (the processing of individual face components, such as the eyes and mouth) and domain-general (the configurational analysis of faces) processing of facial information. Rossion and Caharel (2011) suggested that the face perception processing represented by the face-sensitive P1 and N170 components can be functionally dissociated, with the P1 driven by low-level visual cues and the N170 reflecting high-level face perception processing. In other words, rough face processing, including detecting whether a shape is human-face-like, is performed in the early face processing stages represented by the P1, whereas detailed face processing is performed in the facial structural encoding stages represented by the N170 (Nihei et al., 2018). Previous fMRI studies have indicated that the fusiform area of the brain is the neural source of the N170 component (Gao, Conte, Richards, Xie, & Hanayik, 2019; Sadeh et al., 2010; Puce et al., 1995).

The N170 component is related to the structural encoding of the face (Eimer, 2000; Bentin et al., 1996), but its relation to face personal identification is unclear (Caharel & Rossion, 2021). Although the N170 may habituate to repeated presentations of a single facial identity (Cao, Ma, & Qi, 2015; Heisz, Watter, & Shedden, 2006), it is not sensitive to personal identification (Rubianes et al., 2021; Alzueta, Melcón, Poch, & Capilla, 2019; Butler, Mattingley, Cunnington, & Suddendorf, 2013; Zheng et al., 2012; Amihai, Deouell, & Bentin, 2011).

After the N170 emerges, the N250, a subsequent negative ERP component, appears approximately 250 msec after human face presentation and is more strongly associated with face personal identification (Abreu, Fernández-Aguilar, Ferreira-Santos, & Fernandes, 2023; Zheng et al., 2012; Schweinberger, Huddy, & Burton, 2004; Schweinberger, Pickering, Jentzsch, Burton, & Kaufmann, 2002). In addition, the N250 component reflects self-face identification, particularly distinguishing between familiar and unknown persons (Abreu et al., 2023; Rubianes et al., 2021; Alzueta et al., 2019; Miyakoshi, Kanayama, Nomura, Iidaka, & Ohira, 2008). Moreover, previous studies showed that recently learned faces elicit greater negative N250 amplitudes than other unfamiliar faces (Popova & Wiese, 2023; Tanaka, Curran, Porterfield, & Collins, 2006). The N250 has a posterior temporal scalp distribution similar to that of the N170 (Rubianes et al., 2021; Zheng et al., 2012; Schweinberger et al., 2004, 2002). Moreover, the neural generator of the N250 has been estimated to lie in the fusiform area of the human brain, similar to that of the N170 (Gentile & Jansma, 2012).

Haxby and colleagues (2000) proposed a cognitive neuroscience model of a distributed neural system for facial perception in the human brain. Subsequently, Pitcher and colleagues (2011) modified Haxby's model by adding temporal information on face processing from intracranial ERP studies (Barbeau et al., 2008; Allison, Puce, Spencer, & McCarthy, 1999; McCarthy, Puce, Belger, & Allison, 1999; Puce, Allison, & McCarthy, 1999). In Pitcher's model, the early stages of face processing occur in three regions of the occipitotemporal visual cortex: the OFA (Gauthier, Skudlarski, Gore, & Anderson, 2000; Gauthier, Tarr, et al., 2000), fusiform face area (FFA; Kanwisher, 2006; Kanwisher, Tong, & Nakayama, 1998; Wojciulik, Kanwisher, & Driver, 1998; Kanwisher et al., 1997), and superior temporal sulcus (STS; Hoffman & Haxby, 2000; Puce, Allison, Bentin, Gore, & McCarthy, 1998; Wicker, Michel, Henaff, & Decety, 1998). Meanwhile, later stages of face processing are related to interactions with various regions in the human brain (e.g., OFA, FFA, STS, amygdala, insula, and anterior temporal regions). In addition, according to a simplified cognitive model of face perception (Schweinberger & Neumann, 2016) modified from Haxby's model, the approximate time course of the subprocesses involved in face perception comprises early face processing, structural encoding of the face, and face personal identification. Previous ERP studies have shown that the P1 (early face processing), N170 (structural encoding of the face), and N250 (face personal identification) components reflect the temporal processing of facial perception in the brain (Zheng et al., 2011, 2012; Rossion & Caharel, 2011; Eimer, 2000; Schweinberger et al., 2002, 2004; Bentin et al., 1996) and that these face-sensitive ERP components reflect the approximate time course of the subprocesses involved in face perception (Schweinberger & Neumann, 2016). In addition, face perception is supported by a dynamic reshaping of network architecture, characterized by the emergence of hubs located in the occipital and temporal regions of the human brain (Maffei & Sessa, 2021a, 2021b). According to Maffei and Sessa (2021a, 2021b), this dynamic reshaping can be observed in the same time window as the face-sensitive N170.

The P1 and N170 components are hypothesized to reflect the early stages of face processing (Tanaka, 2021) because their neural sources have been shown to be located in the OFA (Pitcher et al., 2011; Sadeh et al., 2010) and FFA (Gao et al., 2019; Sadeh et al., 2010; Puce et al., 1995), respectively. In later stages, both models (Pitcher et al., 2011; Haxby et al., 2000) hypothesize that there are interactions with various regions in the human brain (e.g., OFA, FFA, STS, amygdala, insula, and anterior temporal regions). After the N170 (structural encoding of the face) emerges, the neural generators of the N250 component (face personal identification) have been estimated to lie in the FFA (Gentile & Jansma, 2012). Therefore, as the FFA functions differently at 170 msec (structural encoding of the face) and 250 msec (face personal identification) after face perception (Schweinberger & Neumann, 2016), it may be presumed that the FFA interacts with various regions in the human brain at 250 msec after face perception. Ghuman and colleagues (2014) investigated the temporal dynamics of facial expression perception processing using electrodes placed directly on the FFA in humans. Although early FFA activity (50–75 msec) reflected early face perception processing, late activity between 200 and 500 msec reflected facial expression perception processing and individual differences in facial features (Ghuman et al., 2014). Moreover, according to Volfart and colleagues (2022), when intracerebral electrical stimulation was applied to the right anterior fusiform gyrus, located anterior to the right FFA, facial identification was impaired. Furthermore, the FFA is primarily related to high-level face personal identification processing (e.g., social traits and sex), whereas the OFA is primarily related to lower-level face personal identification processing (Tsantani et al., 2021). Considering the function of the FFA as a neural generator of the N250 and the models of temporal face processing (Schweinberger & Neumann, 2016; Pitcher et al., 2011; Haxby et al., 2000), the N250 component is hypothesized to reflect the later stages of face processing.

Time Course of Body Perception Neural Processing and ERP Components in Personal Identification

The temporal dynamics of the cognitive processes underlying body personal identification remain unclear (Rice, Phillips, Natu, An, & O'Toole, 2013). Notably, the N170 component reflects both face and body perception (Li, 2021; Proverbio, Ornaghi, & Gabaro, 2018; Hietanen, Kirjavainen, & Nummenmaa, 2014; Stekelenburg & de Gelder, 2004). Furthermore, parts of the fusiform gyrus have been subdivided into the FFA (Kanwisher, 2006; Kanwisher et al., 1997, 1998; Wojciulik et al., 1998) and the fusiform body area (FBA; Kanwisher, 2006; Peelen & Downing, 2005; Schwarzlose, Baker, & Kanwisher, 2005), which are distinct but adjacent regions with strong selectivity for only faces (FFA) or only bodies (FBA). The extrastriate body area (EBA) is related to body perception (Downing, Jiang, Shuman, & Kanwisher, 2001), and the posterior regions of the fusiform gyrus (FBA and FFA) are nearby but distinct from the body-selective (EBA) and face-selective (OFA) regions (Harry, Umla-Runge, Lawrence, Graham, & Downing, 2016; Peelen & Downing, 2007; Taylor, Wiggett, & Downing, 2007). Human bodies, as well as faces, provide important social information, which contributes to the body personal identification of other people (Rice et al., 2013; Minnebusch & Daum, 2009; Peelen & Downing, 2007). Therefore, the temporal dynamics of the cognitive processes underlying body personal identification should be further investigated. According to Hu and colleagues (2020), face and body perception processing interact in concert to support whole-person perception. For example, a picture of an individual's face can predict that individual's body shape (Holzleitner et al., 2014). Moreover, some previous studies adopted cropped (headless) images of the human body (e.g., Foster et al., 2021; Thierry et al., 2006) to exclude facial characteristics from body perception; however, it remains unclear whether the results of these studies extend to natural whole human bodies, which underscores the necessity of evaluating the influence of natural whole human bodies (including the head). Burton, Wilson, Cowan, and Bruce (1999) showed that personal identification performance declined substantially when the face was obscured but remained high when the body was obscured. O'Toole and colleagues (2011) tested people's ability to match identity in pairs of images and videos of unfamiliar people, using stimuli that showed the entire person, the person with the face obscured, or the person with the body obscured. Personal identification performance was best when people viewed the entire person (the face and body) in motion (O'Toole et al., 2011). According to Rice and colleagues (2013), these previous findings (O'Toole et al., 2011; Burton et al., 1999) support the view that people rely more heavily on facial features than on body features when making personal identification decisions. Therefore, in accordance with previous studies (Li, 2021; Proverbio et al., 2018; Hietanen et al., 2014), the present study used whole-body images.

Aims and Hypotheses

We aimed to clarify the temporal dynamics of the cognitive processes related to face and body personal identification by employing the ERP adaptation paradigm, in which the target face or body stimulus was preceded by an adaptation stimulus of the same or a different category (e.g., a face or a body presented in a different format; Tanaka, 2016, 2021; Caharel, Collet, & Rossion, 2015; Zimmer, Zbanţ, Németh, & Kovács, 2015; Feng, Luo, & Fu, 2013; Fu, Feng, Guo, Luo, & Parasuraman, 2012; Zimmer & Kovács, 2011; Eimer, Kiss, & Nicholas, 2010; Kovács et al., 2006). The present study used three blocks of different ERP adaptation paradigms: from a face adaptation stimulus to a face target stimulus (face–face block), from a face adaptation stimulus to a body target stimulus (face–body block), and from a body adaptation stimulus to a body target stimulus (body–body block). According to Fu and colleagues (2012), physical differences become a confound when different stimulus categories are compared directly (e.g., faces vs. cars or faces vs. words) because the P1 and N1, which overlap with the face-sensitive N170, are sensitive to low-level physical properties, such as stimulus size and spatial frequency (Itier & Taylor, 2004; Hillyard, Teder-Sälejärvi, & Münte, 1998). In contrast, the effects of physical differences can be balanced within each category in the ERP adaptation paradigm (Feng et al., 2013; Fu et al., 2012), thereby mitigating this confound between the two types of stimuli. Therefore, the present study, using the ERP adaptation paradigm, should reveal categorical differences in face and body personal identification without being confounded by low-level physical differences between faces and bodies.

Although several studies have indicated that the N170 component is insensitive to facial sex processing (Mouchetant-Rostaing & Giard, 2003; Mouchetant-Rostaing, Giard, Bentin, Aguera, & Pernier, 2000), conflicting reports have found that the N170 component is marginally sensitive to facial sex processing (Brunet, 2023; Kloth, Schweinberger, & Kovács, 2010). Therefore, this study utilized ERP components to compare brain biomarkers under three conditions: same person (adaptation and target stimuli were the same person), different person of the same sex (adaptation and target stimuli were different persons of the same sex), and different person of the opposite sex (adaptation and target stimuli were different persons of the opposite sex).

Moreover, this study aimed to clarify the temporal dynamics of the cognitive and neural processes related to face and body personal identification by investigating the following three hypotheses. The P1 amplitude elicited by the face–face block will be significantly greater than that elicited by the body–body block (Hypothesis 1). The N170 amplitude for the different person of the same sex condition will be significantly greater than that for the same person condition in the right hemisphere only (Hypothesis 2). The N250 amplitude will be largest for the same person condition, intermediate for the different person of the same sex condition, and smallest for the different person of the opposite sex condition (Hypothesis 3).

Hypothesis 1 was derived from the fact that the neural generator of the P1 component is the OFA (Pitcher et al., 2011; Sadeh et al., 2010) and not the EBA. In addition, because the P1 and N170 are not sensitive to personal identification (Rubianes et al., 2021; Alzueta et al., 2019), this study predicted that there would be no significant differences in P1 amplitude across the three conditions. Hypothesis 2 was formulated based on the results of a previous study that found that the N170 amplitude tended to increase over the right hemisphere following male versus androgynous adaptation (Kloth et al., 2010). Lastly, Hypothesis 3 takes into consideration that the N250 component can be biased toward the social categorization or individuation of faces (Rollins, Olsen, & Evans, 2020). Specifically, according to Alzueta and colleagues (2019), N250 amplitudes at left occipitotemporal sites gradually increase with increasing degrees of face social categorization (i.e., self face > friend face > unknown face).

Participants

Twenty-one East Asian participants (male, n = 9; female, n = 12; mean age = 21.0 years, SD = 0.9, age range = 19–22 years) took part in this study. All participants were right-handed, had normal or corrected-to-normal vision, and had no history of cognitive or neurological disorders. All participants signed an informed consent form after the experimental procedure was fully explained. This study was approved by the Ethics Committee of Otemon Gakuin University and conformed to the Declaration of Helsinki. We recruited participants from the Otemon Gakuin University student population. Many previous face-sensitive ERP studies have found a sample size of approximately 20 participants to be adequate (e.g., Popova & Wiese, 2023; Schroeger, Ficco, Wuttke, Kaufmann, & Schweinberger, 2023; Rubianes et al., 2021; Tanaka, 2016, 2018a, 2018b, 2020, 2021). Therefore, this study aimed for approximately the same sample size as these previous studies.

Stimuli

All stimuli were grayscale pictures of the faces and bodies of 20 young East Asian adults who were unfamiliar to all participants in this study. Half of the stimuli depicted men, and the other half depicted women. Pictures of faces and bodies were obtained from various websites (e.g., https://studio728.jp/blog/10828/). Grayscale editing with a white background was performed using Adobe Photoshop 12 (Figure 1). Facial stimuli were presented in a front-facing view and equalized for mean luminance (13.4 cd/m²), with the same visual angle of 6.9° × 7.2° (horizontal × vertical; Figure 1). Body stimuli were facing forward and equalized for mean luminance (12.8 cd/m²), with the same visual angle of 4.3° × 15.7° (horizontal × vertical; Figure 1). Moreover, the postures of the body stimuli were controlled to be as similar as possible (e.g., upright bodies with neutral facial expressions, facing forward), excluding extreme postures (e.g., spreading the arms and legs wide, bending at the waist; Figure 1). In the same person condition of the face–body block, face pictures were cropped from the same person's body pictures (Figure 1). All stimuli were shown at the center of a 22-in. cathode ray tube monitor (Mitsubishi Diamondtron M2, RDF223G) set to a screen resolution of 1280 × 1024 pixels. The refresh rate was 100 Hz. The monitor was placed 80 cm in front of each participant. As previous studies have demonstrated larger N170 amplitudes for other-race facial stimuli relative to own-race stimuli (Wiese, Kaufmann, & Schweinberger, 2014; Stahl, Wiese, & Schweinberger, 2010), all stimuli and participants were East Asian only.
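For reference, the following minimal Python sketch illustrates the standard visual-angle geometry implied by the stimulus sizes and the 80-cm viewing distance reported above; the helper names and the printed values are illustrative and were not part of the original stimulus-preparation pipeline.

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle subtended by a stimulus of a given size at a given viewing distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def size_cm_for_angle(angle_deg: float, distance_cm: float) -> float:
    """Stimulus size (cm) needed to subtend a given visual angle at a given distance."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

DISTANCE_CM = 80.0  # viewing distance reported above

# Face stimuli were reported as 6.9 deg x 7.2 deg; the implied on-screen size is roughly:
face_w = size_cm_for_angle(6.9, DISTANCE_CM)   # ~9.6 cm wide
face_h = size_cm_for_angle(7.2, DISTANCE_CM)   # ~10.1 cm tall

# Body stimuli were reported as 4.3 deg x 15.7 deg:
body_w = size_cm_for_angle(4.3, DISTANCE_CM)   # ~6.0 cm wide
body_h = size_cm_for_angle(15.7, DISTANCE_CM)  # ~22.1 cm tall

print(f"face: {face_w:.1f} x {face_h:.1f} cm, body: {body_w:.1f} x {body_h:.1f} cm")
```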

Figure 1.

(A) Examples of three conditions in three blocks. (B) Example of the timeline in a single trial. The words in the judgment screen, “同一人物,” “同性の別人,” and “異性の別人,” mean “same person,” “different person of the same sex,” and “different person of the opposite sex,” respectively, in Japanese.

Experimental Procedure

All participants performed three blocks: face–face, face–body, and body–body. Both the adaptation and target stimuli were faces in the face–face block. The adaptation stimulus was a face and the target stimulus was a body in the face–body block. Both the adaptation and target stimuli were bodies in the body–body block (Figure 1). To reduce mental and physical fatigue, all participants performed each block on a separate day. The order of the blocks was assigned randomly and approximately counterbalanced across participants.

Furthermore, all participants performed all three conditions in each block, where the adaptation and target stimuli were the same person, different persons of the same sex, or different persons of the opposite sex (Figure 1).

The participants were seated 80 cm from the 22-in. cathode ray tube monitor. All stimuli were displayed using the Multi Trigger System (Medical Try System). According to a manual on the Medical Try System website (https://www.medical-trys.com/items/Outline_MultiTriggerSystem.pdf), the Multi Trigger System can transmit a digital signal to the electroencephalograph that is synchronized with the visual target stimuli. Thus, the ERPs in this study could be time-locked to the visual target stimuli. Each trial in the three blocks, each comprising three conditions, was executed in the following order: (1) a fixation mark (+) was presented for 500 msec, followed by an ISI of 1000 msec; (2) an adaptation face or body stimulus was presented for 500 msec, followed by an ISI of 1000 msec; (3) a target face or body stimulus was presented for 500 msec, followed by an ISI of 500 msec; and (4) a judgment screen was presented for 1000 msec (Figure 1). The intertrial interval varied randomly between 500 and 1500 msec.

The three types of target stimuli (same person, different person of the same sex, and different person of the opposite sex) were presented in random order and with equal probability. On the judgment screen, the participant was instructed to compare the adaptation face or body stimulus with the target face or body stimulus; to judge whether the two stimuli were the same person, a different person of the same sex, or a different person of the opposite sex as quickly and accurately as possible; and to respond with the right index finger by pressing one of three buttons numbered 1, 2, and 3. The three types of target stimuli were randomly assigned to the button numbers (1, 2, or 3) for each participant (Figure 1). RTs were measured with a digital timer accurate to 1 msec, beginning at the onset of the judgment screen and ending when the participant responded. Participants performed 10 practice trials, followed by 80 trials per condition, totaling 240 trials in each block and 720 trials across the three blocks over 3 days (Figure 1).
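As an illustration only (the experiment itself was run with the Multi Trigger System, not the code below), the following Python sketch summarizes the trial timeline and the randomization of 240 trials per block described above; the data structure, function names, and random seed are hypothetical.

```python
import random
from dataclasses import dataclass

# Trial timeline in msec, as described above (the judgment screen follows the target).
TIMELINE_MSEC = [
    ("fixation", 500), ("isi", 1000),
    ("adaptation_stimulus", 500), ("isi", 1000),
    ("target_stimulus", 500), ("isi", 500),
    ("judgment_screen", 1000),
]

CONDITIONS = ["same_person", "different_same_sex", "different_opposite_sex"]

@dataclass
class Trial:
    block: str        # "face-face", "face-body", or "body-body"
    condition: str    # one of CONDITIONS
    iti_msec: int     # intertrial interval, randomized between 500 and 1500 msec

def build_block(block: str, trials_per_condition: int = 80, seed: int = 0) -> list[Trial]:
    """Assemble one block of 240 trials with conditions in random order and equal probability."""
    rng = random.Random(seed)
    trials = [Trial(block, cond, rng.randint(500, 1500))
              for cond in CONDITIONS for _ in range(trials_per_condition)]
    rng.shuffle(trials)
    return trials

block = build_block("face-face")
assert len(block) == 240
```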

Electrophysiological Recording and Analysis

EEG and EOG data were collected using a 128-channel elastic sensor net (HydroCel Geodesic Sensor Net, Electrical Geodesics, Inc.) with the standard EGI Net Station 5.2.01 package. The EEG was recorded using Ag/AgCl electrodes positioned according to the 10–5 system (Jurcak, Tsuzuki, & Dan, 2007; Oostenveld & Praamstra, 2001). The impedance of all electrodes was maintained below 5 kΩ during data acquisition. All electrodes were physically referenced to Cz. Offline, all electrodes were rereferenced to the common average, and the EEG data were band-pass filtered at 0.01–30 Hz and digitized at a sampling rate of 500 Hz. Trials with artifacts, including eye blinks and eye movements in which both the VEOG and HEOG voltages exceeded 140 μV during the recording epoch, and trials with incorrect answers were excluded from averaging. EEG data processing was conducted in the MATLAB R2021a (MathWorks) environment running EEGLAB (Version 14_1_2b; Delorme & Makeig, 2004), and scalp maps were generated with EEGLAB.
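For readers who wish to reproduce a comparable pipeline in open-source tools, a minimal MNE-Python sketch of the offline preprocessing is shown below; the actual analyses were performed in MATLAB/EEGLAB, and the file name, event code, and channel naming are assumptions.

```python
import mne

# Hypothetical raw file exported from EGI Net Station; channel names are assumed to have
# been mapped to 10-5 labels (e.g., O1, O2, P7, PO7, PO8, P8) beforehand.
raw = mne.io.read_raw_egi("sub01_faceface.mff", preload=True)

raw.set_eeg_reference("average")        # offline re-reference to the common average
raw.filter(l_freq=0.01, h_freq=30.0)    # 0.01-30 Hz band-pass, as described above

# Epoch around target onset (-200 to +400 msec) with prestimulus baseline correction,
# rejecting epochs in which the EOG exceeds 140 microvolts (the threshold reported above).
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"target": 1},   # event code 1 is an assumption
                    tmin=-0.2, tmax=0.4, baseline=(-0.2, 0.0),
                    reject=dict(eog=140e-6), preload=True)
```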

Stimulus-locked ERPs were derived separately for each of the three types of target stimuli in each block, from 200 msec before to 400 msec after stimulus presentation, and were baseline corrected using the 200-msec prestimulus window. Grand mean ERP traces for each condition were calculated by averaging the ERPs from all participants. Each ERP component, P1 (Tanaka, 2016, 2018a, 2018b, 2020, 2021), N170 (Tanaka, 2016, 2018a, 2018b, 2020, 2021), and N250 (Rubianes et al., 2021; Miyakoshi et al., 2008; Tanaka et al., 2006), was quantified over groups of electrodes showing the strongest activity within the corresponding time windows, consistent with common choices in previous ERP studies. Moreover, we also selected the electrodes for analysis based on fMRI studies showing that the neural source of the P1 is the OFA (Pitcher et al., 2011; Sadeh et al., 2010) and the neural sources of the N170 and N250 are the FFA (Gao et al., 2019; Gentile & Jansma, 2012; Sadeh et al., 2010; Puce et al., 1995). A time window of 80–100 msec was used to measure the mean P1 amplitude at electrode sites O1 and O2 of the left and right hemispheres, respectively. For the N170 and N250 analyses, the electrode sites P7, PO7, PO8, and P8 were analyzed, with time windows of 120–150 and 220–250 msec for the mean N170 and N250 amplitudes, respectively. Although the time windows used in this study differed slightly from those of some previous studies, they were largely consistent with previous studies of the N170 (Tanaka, 2016, 2018a, 2018b, 2020, 2021) and N250 (Rubianes et al., 2021; Miyakoshi et al., 2008; Tanaka et al., 2006). Furthermore, the time windows were selected following a visual inspection of the main ERP effects, guided by previous studies (Rubianes et al., 2021; Tanaka, 2016, 2018a, 2018b, 2020, 2021; Miyakoshi et al., 2008; Tanaka et al., 2006). ERP components have either positive or negative polarity; therefore, the interpretation of amplitude values differs between positive and negative components. Larger numerical amplitude values of positive ERPs can be interpreted as greater amplitudes, whereas smaller numerical amplitude values of negative ERPs correspond to greater amplitudes. On average, the participants included in the analyses had approximately 67 EEG trials per target stimulus type in each of the three blocks.
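Continuing the hypothetical MNE-Python sketch above, mean amplitudes in the reported time windows and electrode groups could be extracted as follows; the component-to-electrode mapping mirrors the description above, but the code is illustrative rather than the analysis actually run.

```python
import mne

def mean_amplitude(evoked: mne.Evoked, picks: list[str], tmin: float, tmax: float) -> float:
    """Mean amplitude (in microvolts) over the given channels and time window."""
    data = evoked.copy().pick(picks).crop(tmin, tmax).data  # (n_channels, n_times), in volts
    return float(data.mean() * 1e6)

def component_amplitudes(epochs: mne.Epochs) -> dict[str, float]:
    """Mean P1, N170, and N250 amplitudes for one condition in one block.

    `epochs` is assumed to be baseline-corrected, target-locked epochs (-200 to 400 msec),
    e.g., from the preprocessing sketch above, with channels labeled in the 10-5 system.
    """
    evoked = epochs.average()
    return {
        "P1":   mean_amplitude(evoked, ["O1", "O2"], 0.080, 0.100),
        "N170": mean_amplitude(evoked, ["P7", "PO7", "PO8", "P8"], 0.120, 0.150),
        "N250": mean_amplitude(evoked, ["P7", "PO7", "PO8", "P8"], 0.220, 0.250),
    }
```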

Statistical Analyses

All statistical analyses were conducted using SPSS (Version 27; IBM Corp.). RTs were analyzed using a two-way (3 × 3) repeated-measures ANOVA across the three blocks and three conditions. P1 mean amplitudes were analyzed using a three-way (3 × 3 × 2) repeated-measures ANOVA across three blocks, three conditions, and two electrodes (O1 and O2). N170 and N250 mean amplitudes were analyzed using a four-way (3 × 3 × 2 × 2) repeated-measures ANOVA across three blocks, three conditions, two hemispheres (left and right), and two electrodes (P7 vs. PO7, PO8 vs. P8). Statistically significant main and interaction effects were identified at p < .05, with Greenhouse–Geisser corrections applied to the p values. Bonferroni correction for multiple post hoc comparisons was performed for the RT and ERP data. The Bonferroni correction was chosen for post hoc testing because it applies more conservative significance levels than the Tukey or Scheffé corrections (Cabral, 2008). This study used repeated-measures ANOVA on ERP (P1, N170, and N250) data recorded on different days for the same participants; therefore, the test–retest reliability of the ERP data should be considered. A previous study administered a battery eliciting various ERP components to the same participants on two occasions 1 month apart (Cassidy, Robertson, & O'Connell, 2012) and consistently showed strong test–retest reliability at follow-up; this stability was evident across all statistical comparisons. Therefore, ANOVA was considered an appropriate statistical analysis for the ERP data in this study.
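The analyses were run in SPSS; as a rough open-source equivalent, the following Python sketch shows how a repeated-measures ANOVA with Greenhouse–Geisser correction and Bonferroni-corrected post hoc tests could be computed with pingouin. The file and column names are assumptions, only the one-way condition analysis is shown for brevity, and the three- and four-way ERP designs reported above would require a different tool.

```python
import pandas as pd
import pingouin as pg

# Long-format data with one row per participant x block x condition cell mean.
# The file and column names ("participant", "block", "condition", "rt") are assumptions.
rt_long = pd.read_csv("rt_cell_means.csv")

# Collapse over blocks to illustrate the condition factor; one row per participant x condition.
rt_cond = rt_long.groupby(["participant", "condition"], as_index=False)["rt"].mean()

# One-way repeated-measures ANOVA; correction=True adds Greenhouse-Geisser-corrected p values.
anova = pg.rm_anova(data=rt_cond, dv="rt", within="condition",
                    subject="participant", correction=True, detailed=True)
print(anova)

# Bonferroni-corrected pairwise comparisons among the three conditions.
posthoc = pg.pairwise_tests(data=rt_cond, dv="rt", within="condition",
                            subject="participant", padjust="bonf")
print(posthoc)
```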

Behavioral Performance

Table 1 shows the mean RTs for the Three Blocks (face–face, face–body, and body–body) and Three Conditions (same person, different person of the same sex, and different person of the opposite sex). A significant main effect was observed for the Three Conditions on the mean RTs, F(2, 40) = 3.48, p = .041, ηp2 = .148. The mean RTs for the different person of the same sex condition were significantly longer than those for the same person condition (p < .05). However, there was no significant effect of the Three Blocks, F(2, 40) = 2.78, p = .079, ηp2 = .122, or the Three Blocks × Three Conditions interaction, F(4, 80) = 2.18, p = .104, ηp2 = .098, on mean RTs.

Table 1.

Mean RTs (msec) and Standard Deviation for Three Conditions in Three Blocks

                                          Face–Face Block      Face–Body Block      Body–Body Block
                                          Mean       SD        Mean       SD        Mean       SD
Same person                               335.90     58.66     349.68     86.54     334.73     66.92
Different person of the same sex          373.74     77.17     363.11     76.10     333.31     67.78
Different person of the opposite sex      371.80     65.22     345.81     76.54     338.22     60.81

ERP Results

P1 Component

Figure 2 shows the P1 component, and Table 2 shows the mean P1 amplitude at the two electrode sites (O1 and O2) for the three blocks and three conditions. As P1 is a positive ERP, the larger the numerical values of the amplitude, the greater the amplitude (Table 2).

Figure 2.

Grand-averaged ERP waveforms and scalp topographic maps for P1. (A) Target stimulus-locked average ERP waveforms (P1 component) at O1 and O2 for three conditions in three blocks. (B) Scalp topographic maps for P1.

Table 2.

Mean P1 Amplitude (μV) and Standard Deviation for Three Conditions in Three Blocks at Two Electrode Sites

                                          Face–Face Block               Face–Body Block               Body–Body Block
                                          O1            O2              O1            O2              O1            O2
                                          Mean   SD     Mean   SD       Mean   SD     Mean   SD       Mean   SD     Mean   SD
Same person                               5.07   3.02   4.24   2.96     4.30   2.94   3.49   2.94     3.28   2.47   2.73   2.58
Different person of the same sex          4.85   2.90   4.38   3.01     4.38   3.87   3.97   3.55     3.60   2.78   3.07   3.04
Different person of the opposite sex      5.14   3.24   4.43   3.02     4.37   3.54   3.85   3.19     3.61   2.66   3.27   2.43

There was a significant main effect of the Three Blocks on the P1 amplitude, F(2, 40) = 5.92, p = .008, ηp2 = .228. The P1 amplitude was significantly larger for the face–face block than for the body–body block (p < .05). However, there were no significant main effects of the Three Conditions, F(2, 40) = 0.94, p = .379, ηp2 = .045, or Two Electrodes, F(1, 20) = 3.68, p = .069, ηp2 = .155, on the P1 amplitude. In addition, no significant interactions were detected between the Three Blocks × Three Conditions, F(4, 80) = 0.29, p = .814, ηp2 = .014; Three Blocks × Two Electrodes, F(2, 40) = 0.18, p = .835, ηp2 = .009; Three Conditions × Two Electrodes, F(2, 40) = 2.73, p = .082, ηp2 = .120; or Three Blocks × Three Conditions × Two Electrodes, F(4, 80) = 0.39, p = .710, ηp2 = .019.

N170 Component

Figure 3 shows the N170 component for three blocks and three conditions at four electrode sites (P7, P8, PO7, and PO8), and Table 3 displays the mean N170 amplitudes. As the N170 component is a negative ERP, the smaller the numerical value of the amplitude, the greater the amplitude (Table 3).

Figure 3.

Grand-averaged ERP waveforms and scalp topographic maps for N170 and N250. (A) Target stimulus-locked average ERP waveforms (N170 and N250) at P7, P8, PO7, and PO8 for three conditions in three blocks. (B) Scalp topographic maps for N170 and N250.

Table 3.

Mean N170 Amplitude (μV) and Standard Deviation for Three Conditions in Three Blocks at Four Electrode Sites

                                                            P7              PO7             PO8             P8
                                                            Mean    SD      Mean    SD      Mean    SD      Mean    SD
Face–face block   Same person                               1.17    2.06    2.24    2.39    1.63    2.05    0.59    2.13
                  Different person of the same sex          1.01    2.26    1.90    1.89    1.13    1.57    0.24    1.42
                  Different person of the opposite sex      0.87    2.32    1.94    2.32    1.35    1.93    0.38    1.77
Face–body block   Same person                               0.81    1.94    1.82    2.04    1.06    1.66    0.38    1.19
                  Different person of the same sex          0.71    1.95    1.90    2.39    0.74    1.94    −0.03   1.79
                  Different person of the opposite sex      0.62    2.04    1.41    2.04    1.28    1.48    0.22    0.87
Body–body block   Same person                               0.73    1.81    1.52    2.34    0.98    2.11    0.02    1.78
                  Different person of the same sex          0.68    1.41    1.31    1.92    0.47    1.72    −0.58   1.61
                  Different person of the opposite sex      0.58    1.53    1.05    2.05    0.83    1.85    −0.08   1.46

There was a significant main effect of the Two Hemispheres on N170 amplitude, F(1, 20) = 6.39, p = .020, ηp2 = .242. However, for the N170 amplitude, there were no significant main effects of the Three Blocks, F(2, 40) = 1.78, p = .182, ηp2 = .082; Three Conditions, F(2, 40) = 2.38, p = .121, ηp2 = .106; or Two Electrodes, F(1, 20) = 0.09, p = .766, ηp2 = .005. In addition, no significant interactions were detected for Three Blocks × Three Conditions, F(4, 80) = 0.06, p = .981, ηp2 = .003; Three Blocks × Two Hemispheres, F(2, 40) = 0.03, p = .968, ηp2 = .001; Three Blocks × Two Electrodes, F(2, 40) = 1.09, p = .340, ηp2 = .052; Three Conditions × Two Electrodes, F(2, 40) = 1.44, p = .250, ηp2 = .067; Three Blocks × Three Conditions × Two Hemispheres, F(4, 80) = 0.30, p = .828, ηp2 = .015; Three Blocks × Three Conditions × Two Electrodes, F(4, 80) = 1.53, p = .213, ηp2 = .071; Three Blocks × Two Hemispheres × Two Electrodes, F(2, 40) = 0.71, p = .489, ηp2 = .034; Three Conditions × Two Hemispheres × Two Electrodes, F(2, 40) = 0.10, p = .879, ηp2 = .005; or Three Blocks × Three Conditions × Two Hemispheres × Two Electrodes, F(4, 80) = 0.88, p = .452, ηp2 = .042. However, there were significant interactions for Three Conditions × Two Hemispheres, F(2, 40) = 3.35, p = .047, ηp2 = .144, and Two Hemispheres × Two Electrodes, F(1, 20) = 53.56, p < .001, ηp2 = .728, on the N170 amplitude.

Simple effect analyses indicated that the N170 amplitude for the different person of the same sex condition was significantly greater than that for the same person condition in the right hemisphere only (p < .05). Moreover, simple effect analyses indicated that the N170 component of P8 was significantly more shifted in the negative direction compared with that of PO8 (p < .01).

N250 Component

Figure 3 shows the N250 component for three blocks and three conditions at four electrode sites (P7, P8, PO7, and PO8), and Table 4 displays the mean N250 amplitudes. As N250 is a negative ERP, the smaller the numerical value of the amplitude, the greater the amplitude (Table 4).

Table 4.

Mean N250 Amplitude (μV) and Standard Deviation for Three Conditions in Three Blocks at Four Electrode Sites

                                                            P7              PO7             PO8             P8
                                                            Mean    SD      Mean    SD      Mean    SD      Mean    SD
Face–face block   Same person                               2.86    3.96    4.34    4.42    4.19    3.84    2.69    3.48
                  Different person of the same sex          4.01    3.53    5.32    3.32    4.52    2.99    3.02    2.25
                  Different person of the opposite sex      4.28    3.67    5.73    3.82    5.56    3.49    4.17    2.78
Face–body block   Same person                               2.87    3.50    4.69    3.89    4.15    2.98    2.85    2.22
                  Different person of the same sex          3.85    3.34    5.55    4.01    4.69    3.71    3.36    3.00
                  Different person of the opposite sex      4.02    3.33    5.26    3.68    5.10    3.60    3.47    2.52
Body–body block   Same person                               2.48    3.32    3.49    3.77    3.05    2.86    1.90    2.54
                  Different person of the same sex          3.60    2.84    4.50    3.22    3.83    2.74    2.61    2.38
                  Different person of the opposite sex      3.90    2.98    4.43    3.43    4.46    2.91    3.45    2.52

There was a significant main effect of the Three Conditions on the N250 amplitude, F(2, 40) = 15.01, p < .001, ηp2 = .429. However, for the N250 amplitude, there were no significant main effects of the Three Blocks, F(2, 40) = 2.49, p = .102, ηp2 = .111; Two Hemispheres, F(1, 20) = 1.27, p = .273, ηp2 = .060; or Two Electrodes, F(1, 20) = 0.08, p = .785, ηp2 = .004. In addition, no significant interactions were detected for Three Blocks × Three Conditions, F(4, 80) = 0.68, p = .502, ηp2 = .033; Three Blocks × Two Hemispheres, F(2, 40) = 0.02, p = .977, ηp2 = .001; Three Blocks × Two Electrodes, F(2, 40) = 0.31, p = .698, ηp2 = .015; Three Conditions × Two Hemispheres, F(2, 40) = 3.09, p = .062, ηp2 = .134; Three Conditions × Two Electrodes, F(2, 40) = 1.57, p = .223, ηp2 = .073; Three Blocks × Three Conditions × Two Hemispheres, F(4, 80) = 0.30, p = .814, ηp2 = .015; Three Blocks × Three Conditions × Two Electrodes, F(4, 80) = 1.22, p = .309, ηp2 = .058; Three Blocks × Two Hemispheres × Two Electrodes, F(2, 40) = 2.55, p = .096, ηp2 = .113; Three Conditions × Two Hemispheres × Two Electrodes, F(2, 40) = 0.70, p = .473, ηp2 = .034; or Three Blocks × Three Conditions × Two Hemispheres × Two Electrodes, F(4, 80) = 0.36, p = .782, ηp2 = .018. However, there was a significant interaction between the Two Hemispheres and Two Electrodes, F(1, 20) = 48.05, p < .001, ηp2 = .706, on the N250 amplitude.

The N250 amplitude for the same person condition was significantly larger than that for the different person of the same sex condition (p < .05). Furthermore, the N250 amplitude for the different person of the same sex condition was significantly larger than that for the different person of the opposite sex condition (p < .05). Moreover, simple effect analyses indicated that the N250 component for P7 was significantly more shifted in the negative direction compared with that for PO7 (p < .01) and that the N250 component for P8 was significantly more shifted in the negative direction compared with that for PO8 (p < .01).

P1, N170, and N250 Components Reflect Temporal Perception and Neural Processing in Face and Body Personal Identification

We investigated the relationship between temporal perception processing in face and body personal identification and three ERP components (P1, N170, and N250). The results demonstrated that the RTs for the different person of the same sex condition were significantly longer than those for the same person condition. These differences were further reflected in the N170 component. The P1 amplitude for the face–face block was significantly greater than that for the body–body block. Moreover, the N170 amplitude in the right hemisphere was significantly greater in response to the different person of the same sex condition than to the same person condition. Furthermore, the N250 amplitude was largest for the same person condition, intermediate for the different person of the same sex condition, and smallest for the different person of the opposite sex condition. Therefore, all three hypotheses were supported.

fMRI studies indicate that the neural generator of the P1 component is located in the OFA (Pitcher et al., 2011; Sadeh et al., 2010), whereas body perception is processed by the EBA (Downing et al., 2001). Previous studies have shown that the posterior regions of the fusiform gyrus (FBA and FFA) are nearby but distinct from the body-selective (EBA) and face-selective (OFA) regions (Harry et al., 2016; Peelen & Downing, 2007; Taylor et al., 2007). Therefore, it is possible that the electrical activity of the EBA, which is functionally similar to the OFA but located farther from it (Harry et al., 2016; Peelen & Downing, 2007; Taylor et al., 2007), had a smaller effect on the P1 amplitude in the body–body block than OFA activity did in the face–face block. Moreover, the finding that the P1 amplitude of the face–body block did not differ significantly from that of the face–face and body–body blocks may be because the face adaptation stimuli preceding the body target stimuli slightly increased OFA activity and, consequently, the P1 amplitude in the face–body block. However, the P1 results may also be explained more parsimoniously in terms of the different low-level perceptual complexity of the face and body stimuli (Rossion & Caharel, 2011), with faces being more complex and thus eliciting larger P1 amplitudes.

The N170 component has been associated with early high-level face perception processing and structural encoding of the face (Eimer, 2000; Bentin et al., 1996). Current evidence is conflicting regarding the sensitivity of the N170 component to facial sex processing (Brunet, 2023; Kloth et al., 2010; Mouchetant-Rostaing & Giard, 2003; Mouchetant-Rostaing et al., 2000). However, the present finding that, in the right hemisphere only, the N170 amplitude for the different person of the same sex condition was significantly greater than that for the same person condition supports the hypothesis that the N170 is marginally sensitive to facial sex processing (Kloth et al., 2010). According to Kloth and colleagues (2010), the N170 amplitude tended to increase over the right hemisphere following male versus androgynous adaptation. Such findings (Kloth et al., 2010) and the present results support the idea that the two hemispheres differentially process facial sex (Parente & Tommasi, 2008). However, sex effects were observed in the N170 amplitude of the right hemisphere only, and the N170 amplitude for the different person of the opposite sex condition was not significantly different from that for the same person and different person of the same sex conditions. Therefore, the sex effects on the N170 amplitude are likely modest. Moreover, it is more difficult to discriminate between faces and bodies of the same sex than between those of the opposite sex; therefore, another possibility is that the larger N170 amplitude in the different person of the same sex condition simply reflects the higher difficulty of discrimination in that condition (which is also reflected in the RT results). In addition, previous studies suggested that face perception neural processing is influenced by whether the face is subjectively attractive to the participant (Tanaka, 2021; Werheid, Schacht, & Sommer, 2007). Therefore, the differences noted in this study may be attributable not only to a genuine sex effect but also to subjective preferences between individuals of different sexes. Interestingly, the N170 amplitude was not significantly different among the three blocks, implying that the sex effects were observed for both faces and bodies. Human bodies (without the head) have previously been shown to elicit an ERP component later than the face-sensitive N170, namely, the N190 (Thierry et al., 2006). However, previous studies (Li, 2021; Proverbio et al., 2018; Hietanen et al., 2014) and the present results showed that whole human bodies (with the head) elicit the N170 component, indicating that the N170 can be elicited by both human faces and whole human bodies (with the head). Furthermore, a previous study investigating the N170 repeated adaptation effect with a short ISI found that the N170 response elicited by a face was smaller when preceded by the same face adaptor than by a different face adaptor (Cao et al., 2015), which is consistent with the present finding that the N170 amplitude in the right hemisphere was significantly smaller for the same person condition than for the different person of the same sex condition. Therefore, the present results also support the N170 repeated adaptation effect.

Rossion and Caharel (2011) suggested that the P1 is driven by low-level visual cues, whereas the N170 reflects early high-level face perception processing. After the emergence of the P1 and N170, the N250 component appears approximately 250 msec after human face presentation and is more strongly associated with face personal identification (Zheng et al., 2012; Schweinberger et al., 2004, 2002). The N250 component can be biased toward the social categorization or individuation of faces (Rollins et al., 2020). For example, N250 amplitudes at left occipitotemporal sites gradually increase with increasing degrees of face social categorization (i.e., self face > friend face > unknown face; Alzueta et al., 2019). Therefore, we hypothesized that the N250 amplitude would be largest for the same person condition, intermediate for the different person of the same sex condition, and smallest for the different person of the opposite sex condition, and this hypothesis was supported by the present results. Moreover, the N250 amplitude did not differ significantly among the three blocks. Therefore, these sex–social categorization effects were observed for both faces and bodies as target stimuli.

According to both models of the neural system for face perception (Pitcher et al., 2011; Haxby et al., 2000), the P1 and N170 components are hypothesized to reflect the early stages of face processing (Tanaka, 2021). The N250 emerges after the N170, and the neural generators of the N250 have been estimated to lie in the FFA along with those of the N170 (Gentile & Jansma, 2012). Therefore, the FFA functions differently at 170 msec (where the N170 amplitude reflects structural encoding of the face and body) and 250 msec (where the N250 amplitude reflects face and body personal identification) after face perception (Schweinberger & Neumann, 2016). In later stages, both models (Pitcher et al., 2011; Haxby et al., 2000) assume that there are interactions with various regions in the human brain (e.g., OFA, FFA, STS, amygdala, insula, and anterior temporal regions). Therefore, as the FFA is presumed to interact with various regions at 250 msec after face perception, the N250 component is hypothesized to reflect the later stages of face processing.

Hu and colleagues (2020) suggested that the information shared by faces and bodies includes social categories, such as identity, age, sex, and race. Consequently, models of the neural systems for face and body perception were further developed to incorporate the EBA (Hu et al., 2020), FBA (Hu et al., 2020), anterior temporal lobe (ATL; Hu et al., 2020; Zhao, Zhen, Liu, Song, & Liu, 2018; Collins & Olson, 2014), and orbital frontal cortex (Zhao et al., 2018). Although it remains unclear how the face and body undergo temporal processing when combined into a coherent person identity representation in the human brain (Foster et al., 2021; Hu et al., 2020), Hu and colleagues (2020) proposed a model of the neural hierarchical structure that underlies face–body integration. According to Hu and colleagues (2020), two unification centers, composed of ventral and dorsal hubs, may coexist to accommodate the representation of the whole person. The ventral hub is a neural system that interacts temporally and hierarchically from the OFA and EBA to the FFA and FBA and from the FFA and FBA to the ATL, unifying visual semantic information (Hu et al., 2020). The dorsal hub is a neural system consisting of the STS, which unifies social agent information (Hu et al., 2020). The present results showing that three face- and body-sensitive ERP components reflect temporal perception processing in face and body personal identification partly support these models of the neural system underlying face and body perception (Hu et al., 2020; Zhao et al., 2018; Pitcher et al., 2011; Haxby et al., 2000). Although Hu's model of face–body integration assumes at least some completely separable processing of faces and bodies, there is no clear evidence for this (Hu et al., 2020). The present result demonstrating that the P1 amplitude for the face–face block was significantly greater than that for the body–body block suggests that the first half of the early stages of face and body perception processes the face and body separately. However, the N170 and N250 amplitudes were not significantly different among the three blocks, which suggests that the second half of the early stages and the later stages of face and body perception process the face and body collaboratively. In other words, the face and body are combined into a coherent whole-person identity representation in the second half of the early stages and in the later stages.
Furthermore, previous studies showed that the posterior regions of the fusiform gyrus (FBA and FFA) are nearby but distinct from the body-selective EBA and face-selective OFA regions (Harry et al., 2016; Peelen & Downing, 2007; Taylor et al., 2007). The distance between the EBA and OFA is greater than the distance between the FBA and FFA (Harry et al., 2016; Peelen & Downing, 2007; Taylor et al., 2007). Therefore, the present results and previous studies (Harry et al., 2016; Peelen & Downing, 2007; Taylor et al., 2007) suggest that early processing of the face and body occurs separately in the OFA and EBA, whereas structural encoding and personal identification of the face and body occur collaboratively in the FFA and FBA. Subsequently, the face and body may be combined into a coherent whole-person identity representation in the FFA and FBA of the human brain.

Limitations and Future Directions

Recent studies have shown that there are individual differences in the increased amplitude of the N250 for familiar faces (Schroeger et al., 2023; Sommer et al., 2023). For example, Schroeger and colleagues (2023) compared high and low performers in face recognition and found that high performers had increased N250 responses to familiar faces, suggesting more robust face identity representations. Individual differences in the amplitude of N250 responses to familiar faces in face and body personal identification require detailed investigation in future studies.

Furthermore, a previous study demonstrated that the effects of adaptation duration on the P1, N170, and N250 components are directly related to human face processing (Zimmer et al., 2015). Zimmer and colleagues (2015) utilized five adaptation durations (ranging from 200 to 5000 msec) and four adaptation conditions: adaptor and test stimulus were identical images (repetition suppression [RS]); adaptor and test stimulus were different images of the same identity (SameID); adaptor and test stimulus depicted different identities (DiffID); and the adaptor stimulus was a Fourier phase-randomized image (No). At 200 msec, when the adaptation duration was close to that of the present study, the N250 amplitudes for RS and SameID were significantly greater than that for DiffID, and the N250 amplitude for RS was significantly greater than that for SameID. Although the current study compared RS and DiffID conditions on the ERPs, it did not include a SameID condition, and the repetition effect was therefore not investigated. Thus, further research comparing RS and SameID conditions for face and body personal identification and the associated ERPs is needed, including investigations into whether the modulation of ERP components may be ascribed to a mere repetition effect that is unrelated to face and body personal identification.

The neural sources of the face- and body-sensitive P1, N170, and N250 components have so far been identified only through correlational evidence and indirect inference (Gao et al., 2019; Gentile & Jansma, 2012; Pitcher et al., 2011; Sadeh et al., 2010; Puce et al., 1995). To establish these neural sources more accurately, future studies should implement more technologically advanced neuroimaging. Although the temporal resolution of the ERP techniques used in this study is excellent, their spatial resolution is poor (Cichy & Oliva, 2020; Huster et al., 2012). Therefore, the temporal dynamics of the neural system related to face and body personal identification should be investigated using additional neuroimaging techniques with excellent spatial resolution, such as fMRI (Cichy & Oliva, 2020; Huster et al., 2012). According to Takeda, Suzuki, Kawato, and Yamashita (2019), variational Bayesian multimodal encephalography (VBMEG) is a MATLAB toolbox that estimates distributed source currents from EEG data by integrating fMRI information. Complementary analysis of ERP and fMRI data using VBMEG (Takeda et al., 2019) may therefore help clarify the spatiotemporal processing of the human brain related to face and body personal identification. For example, it has been hypothesized that face areas of the ATL serve as the final stages of face recognition and are ideally suited to act as an interface between face perception and face memory, linking perceptual representations to individual identity (Collins & Olson, 2014). Previous studies have also pointed out that the N250 component reflects the association between the structural encoding of faces, indexed by the N170 component, and representations of faces stored in memory (Caharel & Rossion, 2021; Sommer et al., 2021). In the later stages of face perception processing, when both models (Pitcher et al., 2011; Haxby et al., 2000) assume interactions with various regions of the human brain, it is possible that the N250 component is sensitive to face and body personal identification, reflecting an interaction between the FFA, FBA, and face areas of the ATL. Analysis with VBMEG (Takeda et al., 2019) could reveal such an interaction between the FFA, FBA, and face areas of the ATL. Moreover, by modeling cortical connectivity from EEG activity (Maffei & Sessa, 2021a) and magnetoencephalography recordings (Maffei & Sessa, 2021b) during the presentation of face images, previous studies showed that the whole-brain network topology becomes more efficient and more complex in response to faces within the same time window in which the face-sensitive N170 is observed. To clarify the details of temporal perception processing in face and body personal identification, a future direction of this research should be to apply VBMEG source analysis (Takeda et al., 2019) and whole-brain network topology analysis (Maffei & Sessa, 2021a, 2021b).
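As a simplified illustration of the network-topology approach, a channel-by-channel connectivity graph can be constructed from the N170 time window and summarized with a graph metric such as global efficiency. The following Python sketch uses simulated data and an arbitrary correlation threshold; it is a rough stand-in for the time-resolved connectivity analyses of Maffei and Sessa (2021a, 2021b), not a reproduction of their methods.

# Rough sketch of the network-topology idea: build a channel-by-channel
# connectivity graph from the N170 time window and compute its global
# efficiency with NetworkX. Data, threshold, and window are illustrative
# assumptions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
sfreq = 500
times = np.arange(-0.1, 0.5, 1 / sfreq)
eeg = rng.normal(size=(64, times.size))        # one averaged trial: channels x time

window = (times >= 0.13) & (times < 0.20)      # assumed N170 window
corr = np.corrcoef(eeg[:, window])             # channel-by-channel correlation

threshold = 0.3                                # arbitrary edge threshold
adjacency = (np.abs(corr) > threshold).astype(int)
np.fill_diagonal(adjacency, 0)

graph = nx.from_numpy_array(adjacency)
print("Global efficiency in the N170 window:", nx.global_efficiency(graph))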

Many studies investigating the time course of individual face recognition processing in the human brain have focused on ERP components as specific temporal landmarks. However, Nemrodov, Niemeier, Mok, and Nestor (2016) conducted a pattern analysis of spatiotemporal ERP signals, at single electrodes and across multiple electrodes, related to individual face recognition, an approach that moves beyond the component-based theoretical and methodological frameworks of ERP research. Nemrodov and colleagues (2016) confirmed the significance of traditional ERP components in face processing but also supported the idea that the temporal processing of face recognition is incompletely described by such components. For example, they showed that signals arising at 70 msec after face presentation were related to individual face recognition processing. In addition, Nemrodov and colleagues (2018) used spatiotemporal EEG information to determine the neural correlates of facial identity representations and showed that multiple temporal intervals of the EEG supported facial identity classification, with classification peaks in the proximity of the N170 and N250 ERP components. Because the present study used ERP techniques only, pattern analyses of spatiotemporal ERP signals (Nemrodov et al., 2016) and of spatiotemporal EEG information (Nemrodov et al., 2018) are recommended for future studies to clarify the temporal dynamics of the neural system related to face and body personal identification.
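In outline, such a time-resolved pattern analysis trains a classifier on the spatial EEG pattern at each time point and tracks decoding accuracy across the epoch. The following Python sketch illustrates this approach with simulated data and a linear discriminant classifier; the classifier choice, cross-validation scheme, and two-identity design are assumptions made for illustration.

# Illustrative sketch of time-resolved pattern analysis in the spirit of
# Nemrodov et al. (2016, 2018): decode stimulus identity from the spatial EEG
# pattern at each time point with cross-validated linear classification.
# The data are simulated.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_channels, n_times = 200, 64, 150
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)               # two identities, for simplicity

accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LinearDiscriminantAnalysis()
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

peak = accuracy.argmax()
print(f"Peak decoding accuracy {accuracy[peak]:.2f} at time sample {peak}")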

Conclusion

The present results showed that the P1 amplitude for the face–face block was significantly greater than that for the body–body block and that the N170 amplitude in the right hemisphere was greater for the different person of the same sex condition than for the same person condition. Moreover, the N250 amplitude gradually increased as the degree of face and body sex–social categorization grew closer (i.e., same person condition > different person of the same sex condition > different person of the opposite sex condition). However, the N170 and N250 amplitudes did not differ significantly across the three blocks. Therefore, sex effects on the N170 amplitude and sex–social categorization effects on the N250 amplitude were observed both when faces were the target stimuli (face perception) and when bodies were the target stimuli (body perception) in this study.

The present results demonstrating that the three ERP components (P1, N170, and N250) reflect temporal perception processing in face and body personal identification partly support the cognitive neuroscience model of the neural system underlying face and body perception (Hu et al., 2020). These results suggest that the first half of the early stages of face and body perception (the P1 amplitude) processes the face and body separately and that the second half of the early stages (the N170 amplitude, reflecting structural encoding of the face and body) and the later stages (the N250 amplitude, reflecting face and body personal identification) process the face and body collaboratively.

The authors thank Ayumi Muramoto, Yutarou Sonoda, and the students of the Department of Psychology, Faculty of Psychology, Otemon Gakuin University, for their support. The present study reanalyzed data from their undergraduate thesis.

Corresponding author: Hideaki Tanaka, Otemon Gakuin University, 2–1-15 Nishiai, Ibaraki, Osaka 567–8502, Japan, or via e-mail: [email protected].

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Hideaki Tanaka: Conceptualization; Data curation; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Resources; Software; Supervision; Validation; Visualization; Writing—Original draft; Writing—Review & editing. Peilun Jiang: Data curation; Formal analysis; Investigation; Methodology; Project administration; Software; Supervision; Validation; Visualization; Writing—Original draft; Writing—Review & editing.

This work was supported by research grants from Otemon Gakuin University. These research grants were used for the revision of this article by a native English speaker.

Retrospective analysis of the citations in every article published in this journal from 2010 to 2021 reveals a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .407, W(oman)/M = .32, M/W = .115, and W/W = .159, the comparable proportions for the articles that these authorship teams cited were M/M = .549, W/M = .257, M/W = .109, and W/W = .085 (Postle and Fulvio, JoCN, 34:1, pp. 1–3). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance.

References

Abreu, A. L., Fernández-Aguilar, L., Ferreira-Santos, F., & Fernandes, C. (2023). Increased N250 elicited by facial familiarity: An ERP study including the face inversion effect and facial emotion processing. Neuropsychologia, 188, 108623. [PubMed]
Allison, T., Puce, A., Spencer, D. D., & McCarthy, G. (1999). Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cerebral Cortex, 9, 415–430. [PubMed]
Alzueta, E., Melcón, M., Poch, C., & Capilla, A. (2019). Is your own face more than a highly familiar face? Biological Psychology, 142, 100–107. [PubMed]
Amihai, I., Deouell, L. Y., & Bentin, S. (2011). Neural adaptation is related to face repetition irrespective of identity: A reappraisal of the N170 effect. Experimental Brain Research, 209, 193–204. [PubMed]
Barbeau, E. J., Taylor, M. J., Regis, J., Marquis, P., Chauvel, P., & Liégeois-Chauvel, C. (2008). Spatio temporal dynamics of face recognition. Cerebral Cortex, 18, 997–1009. [PubMed]
Bentin, S., Allison, T., Puce, A., Perez, E., & MacCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551–565. [PubMed]
Bruce, V., & Valentine, T. (1985). Identity priming in the recognition of familiar faces. British Journal of Psychology, 76, 373–383. [PubMed]
Brunet, N. M. (2023). Face processing and early event-related potentials: Replications and novel findings. Frontiers in Human Neuroscience, 17, 1268972. [PubMed]
Burton, A. M., Wilson, S., Cowan, M., & Bruce, V. (1999). Face recognition in poor-quality video: Evidence from security surveillance. Psychological Science, 10, 243–248.
Butler, D. L., Mattingley, J. B., Cunnington, R., & Suddendorf, T. (2013). Different neural processes accompany self-recognition in photographs across the lifespan: An ERP study using dizygotic twins. PLoS One, 8, e72586. [PubMed]
Cabral, H. J. (2008). Multiple comparisons procedures. Circulation, 117, 698–701. [PubMed]
Caharel, S., Collet, K., & Rossion, B. (2015). The early visual encoding of a face (N170) is viewpoint-dependent: A parametric ERP-adaptation study. Biological Psychology, 106, 18–27. [PubMed]
Caharel, S., & Rossion, B. (2021). The N170 is sensitive to long-term (personal) familiarity of a face identity. Neuroscience, 458, 244–255. [PubMed]
Cao, X., Ma, X., & Qi, C. (2015). N170 adaptation effect for repeated faces and words. Neuroscience, 294, 21–28. [PubMed]
Cassidy, S. M., Robertson, I. H., & O'Connell, R. G. (2012). Retest reliability of event-related potentials: Evidence from a variety of paradigms. Psychophysiology, 49, 659–664. [PubMed]
Cichy, R. M., & Oliva, A. (2020). A M/EEG-fMRI fusion primer: Resolving human brain responses in space and time. Neuron, 107, 772–781. [PubMed]
Collins, J. A., & Olson, I. R. (2014). Beyond the FFA: The role of the ventral anterior temporal lobes in face processing. Neuropsychologia, 61, 65–79. [PubMed]
Delorme, A., & Makeig, S. (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134, 9–21. [PubMed]
Downing, P. E., Jiang, Y., Shuman, M., & Kanwisher, N. (2001). A cortical area selective for visual processing of the human body. Science, 293, 2470–2473. [PubMed]
Eimer, M. (2000). The face-specific N170 component reflects late stages in the structural encoding of faces. NeuroReport, 11, 2319–2324. [PubMed]
Eimer, M. (2011). The face-sensitive N170 component of the event-related brain potential. In A. Calder, G. Rhodes, M. H. Johnson, & J. V. Haxby (Eds.), The Oxford handbook of face perception (pp. 329–344). Oxford: Oxford University Press.
Eimer, M., Kiss, M., & Nicholas, S. (2010). Response profile of the face-sensitive N170 component: A rapid adaptation study. Cerebral Cortex, 20, 2442–2452. [PubMed]
Feng, C., Luo, Y., & Fu, S. (2013). The category-sensitive and orientation-sensitive N170 adaptation in faces revealed by comparison with Chinese characters. Psychophysiology, 50, 885–899. [PubMed]
Foster, C., Zhao, M., Bolkart, T., Black, M. J., Bartels, A., & Bülthoff, I. (2021). Separated and overlapping neural coding of face and body identity. Human Brain Mapping, 42, 4242–4260. [PubMed]
Fu, S., Feng, C., Guo, S., Luo, Y., & Parasuraman, R. (2012). Neural adaptation provides evidence for categorical differences in processing of faces and Chinese characters: An ERP study of the N170. PLoS One, 7, e41103. [PubMed]
Gao, C., Conte, S., Richards, J. E., Xie, W., & Hanayik, T. (2019). The neural sources of N170: Understanding timing of activation in face-selective areas. Psychophysiology, 56, e13336. [PubMed]
Gauthier, I., Skudlarski, P., Gore, J. C., & Anderson, A. W. (2000). Expertise for cars and birds recruits brain areas involved in face recognition. Nature Neuroscience, 3, 191–197. [PubMed]
Gauthier, I., Tarr, M. J., Moylan, J., Skudlarski, P., Gore, J. C., & Anderson, A. W. (2000). The fusiform "face area" is part of a network that processes faces at the individual level. Journal of Cognitive Neuroscience, 12, 495–504. [PubMed]
Gentile, F., & Jansma, B. M. (2012). Temporal dynamics of face selection mechanism in the context of similar and dissimilar faces: ERP evidence for biased competition within the ventral occipito-temporal cortex using ICA. Neuroimage, 59, 682–694. [PubMed]
Ghuman, A. S., Brunet, N. M., Li, Y., Konecky, R. O., Pyles, J. A., Walls, S. A., et al. (2014). Dynamic encoding of face information in the human fusiform gyrus. Nature Communications, 5, 5672. [PubMed]
Harry, B. B., Umla-Runge, K., Lawrence, A. D., Graham, K. S., & Downing, P. E. (2016). Evidence for integrated visual face and body representations in the anterior temporal lobes. Journal of Cognitive Neuroscience, 28, 1178–1193. [PubMed]
Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233. [PubMed]
Heisz, J. J., Watter, S., & Shedden, J. M. (2006). Automatic face identity encoding at the N170. Vision Research, 46, 4604–4614. [PubMed]
Hietanen, J. K., Kirjavainen, I., & Nummenmaa, L. (2014). Additive effects of affective arousal and top–down attention on the event-related brain responses to human bodies. Biological Psychology, 103, 167–175. [PubMed]
Hillyard, S. A., Teder-Sälejärvi, W. A., & Münte, T. F. (1998). Temporal dynamics of early perceptual processing. Current Opinion in Neurobiology, 8, 202–210. [PubMed]
Hoffman, E. A., & Haxby, J. V. (2000). Distinct representations of eye gaze and identity in the distributed human neural system for face perception. Nature Neuroscience, 3, 80–84. [PubMed]
Holzleitner, I. J., Hunter, D. W., Tiddeman, B. P., Seck, A., Re, D. E., & Perrett, D. I. (2014). Men's facial masculinity: When (body) size matters. Perception, 43, 1191–1202. [PubMed]
Hu, Y., Baragchizadeh, A., & O'Toole, A. J. (2020). Integrating faces and bodies: Psychological and neural perspectives on whole person perception. Neuroscience & Biobehavioral Reviews, 112, 472–486. [PubMed]
Huster, R. J., Debener, S., Eichele, T., & Herrmann, C. S. (2012). Methods for simultaneous EEG-fMRI: An introductory review. Journal of Neuroscience, 32, 6053–6060. [PubMed]
Itier, R. J., & Taylor, M. J. (2004). N170 or N1? Spatiotemporal differences between object and face processing using ERPs. Cerebral Cortex, 14, 132–142. [PubMed]
Jenkins, R., Dowsett, A. J., & Burton, A. M. (2018). How many faces do people know? Proceedings of the Royal Society of London, Series B: Biological Sciences, 285, 20181319. [PubMed]
Jurcak, V., Tsuzuki, D., & Dan, I. (2007). 10/20, 10/10, and 10/5 systems revisited: Their validity as relative head-surface-based positioning systems. Neuroimage, 34, 1600–1611. [PubMed]
Kanwisher, N. (2006). Neuroscience. What's in a face? Science, 311, 617–618. [PubMed]
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311. [PubMed]
Kanwisher, N., Tong, F., & Nakayama, K. (1998). The effect of face inversion on the human fusiform face area. Cognition, 68, B1–B11. [PubMed]
Kloth, N., Schweinberger, S. R., & Kovács, G. (2010). Neural correlates of generic versus gender-specific face adaptation. Journal of Cognitive Neuroscience, 22, 2345–2356. [PubMed]
Kovács, G., Zimmer, M., Bankó, E., Harza, I., Antal, A., & Vidnyánszky, Z. (2006). Electrophysiological correlates of visual adaptation to faces and body parts in humans. Cerebral Cortex, 16, 742–753. [PubMed]
Li, X. (2021). Recognition characteristics of facial and bodily expressions: Evidence from ERPs. Frontiers in Psychology, 12, 680959. [PubMed]
Maffei, A., & Sessa, P. (2021a). Event-related network changes unfold the dynamics of cortical integration during face processing. Psychophysiology, 58, e13786. [PubMed]
Maffei, A., & Sessa, P. (2021b). Time-resolved connectivity reveals the "how" and "when" of brain networks reconfiguration during face processing. Neuroimage: Reports, 1, 100022.
Maurer, D., Le Grand, R., & Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6, 255–260. [PubMed]
McCarthy, G., Puce, A., Belger, A., & Allison, T. (1999). Electrophysiological studies of human face perception. II: Response properties of face-specific potentials generated in occipitotemporal cortex. Cerebral Cortex, 9, 431–444. [PubMed]
Minnebusch, D. A., & Daum, I. (2009). Neuropsychological mechanisms of visual face and body perception. Neuroscience & Biobehavioral Reviews, 33, 1133–1144. [PubMed]
Miyakoshi, M., Kanayama, N., Nomaura, M., Iidaka, T., & Ohira, H. (2008). ERP study of viewpoint-independence in familiar-face recognition. International Journal of Psychophysiology, 69, 119–126. [PubMed]
Mouchetant-Rostaing, Y., & Giard, M. H. (2003). Electrophysiological correlates of age and gender perception on human faces. Journal of Cognitive Neuroscience, 15, 900–910. [PubMed]
Mouchetant-Rostaing, Y., Giard, M. H., Bentin, S., Aguera, P. E., & Pernier, J. (2000). Neurophysiological correlates of face gender processing in humans. European Journal of Neuroscience, 12, 303–310. [PubMed]
Nemrodov, D., Niemeier, M., Mok, J. N. Y., & Nestor, A. (2016). The time course of individual face recognition: A pattern analysis of ERP signals. Neuroimage, 132, 469–476. [PubMed]
Nemrodov, D., Niemeier, M., Patel, A., & Nestor, A. (2018). The neural dynamics of facial identity processing: Insights from EEG-based pattern analysis and image reconstruction. eNeuro, 5, ENEURO.0358-17.2018. [PubMed]
Nihei, Y., Minami, T., & Nakauchi, S. (2018). Brain activity related to the judgment of face-likeness: Correlation between EEG and face-like evaluation. Frontiers in Human Neuroscience, 12, 56. [PubMed]
Oostenveld, R., & Praamstra, P. (2001). The five percent electrode system for high-resolution EEG and ERP measurements. Clinical Neurophysiology, 112, 713–719. [PubMed]
O'Toole, A. J., Phillips, P. J., Weimer, S., Roark, D. A., Ayyad, J., Barwick, R., et al. (2011). Recognizing people from dynamic and static faces and bodies: Dissecting identity with a fusion approach. Vision Research, 51, 74–83. [PubMed]
Parente, R., & Tommasi, L. (2008). A bias for the female face in the right hemisphere. Laterality, 13, 374–386. [PubMed]
Peelen, M. V., & Downing, P. E. (2005). Selectivity for the human body in the fusiform gyrus. Journal of Neurophysiology, 93, 603–608. [PubMed]
Peelen, M. V., & Downing, P. E. (2007). The neural basis of visual body perception. Nature Reviews Neuroscience, 8, 636–648. [PubMed]
Pitcher, D., Walsh, V., & Duchaine, B. (2011). The role of the occipital face area in the cortical face perception network. Experimental Brain Research, 209, 481–493. [PubMed]
Popova, T., & Wiese, H. (2023). How quickly do we learn new faces in everyday life? Neurophysiological evidence for face identity learning after a brief real-life encounter. Cortex, 159, 205–216. [PubMed]
Proverbio, A. M., Ornaghi, L., & Gabaro, V. (2018). How face blurring affects body language processing of static gestures in women and men. Social Cognitive and Affective Neuroscience, 13, 590–603. [PubMed]
Puce, A., Allison, T., Asgari, M., Gore, J. C., & McCarthy, G. (1996). Differential sensitivity of human visual cortex to faces, letterstrings, and textures: A functional magnetic resonance imaging study. Journal of Neuroscience, 16, 5205–5215. [PubMed]
Puce, A., Allison, T., Bentin, S., Gore, J. C., & McCarthy, G. (1998). Temporal cortex activation in humans viewing eye and mouth movements. Journal of Neuroscience, 18, 2188–2199. [PubMed]
Puce, A., Allison, T., Gore, J. C., & MacCarthy, G. (1995). Face-sensitive regions in human extrastriate cortex studied by functional MRI. Journal of Neurophysiology, 74, 1192–1199. [PubMed]
Puce, A., Allison, T., & McCarthy, G. (1999). Electrophysiological studies of human face perception. III: Effects of top–down processing on face-specific potentials. Cerebral Cortex, 9, 445–458. [PubMed]
Rice, A., Phillips, P. J., Natu, V., An, X., & O'Toole, A. J. (2013). Unaware person recognition from the body when face identification fails. Psychological Science, 24, 2235–2243. [PubMed]
Rollins, L., Olsen, A., & Evans, M. (2020). Social categorization modulates own-age bias in face recognition and ERP correlates of face processing. Neuropsychologia, 141, 107417. [PubMed]
Rossion, B., & Caharel, S. (2011). ERP evidence for the speed of face categorization in the human brain: Disentangling the contribution of low-level visual cues from face perception. Vision Research, 51, 1297–1311. [PubMed]
Rubianes, M., Muñoz, F., Casado, P., Hernández-Gutiérrez, D., Jiménez-Ortega, L., Fondevila, S., et al. (2021). Am I the same person across my life span? An event-related brain potentials study of the temporal perspective in self-identity. Psychophysiology, 58, e13692. [PubMed]
Sadeh, B., Podlipsky, I., Zhdanov, A., & Yovel, G. (2010). Event-related potential and functional MRI measures of face-selectivity are highly correlated: A simultaneous ERP-fMRI investigation. Human Brain Mapping, 31, 1490–1501. [PubMed]
Schroeger, A., Ficco, L., Wuttke, S. J., Kaufmann, J. M., & Schweinberger, S. R. (2023). Differences between high and low performers in face recognition in electrophysiological correlates of face familiarity and distance-to-norm. Biological Psychology, 182, 108654. [PubMed]
Schwarzlose, R. F., Baker, C. I., & Kanwisher, N. (2005). Separate face and body selectivity on the fusiform gyrus. Journal of Neuroscience, 25, 11055–11059. [PubMed]
Schweinberger, S. R., Huddy, V., & Burton, A. M. (2004). N250r: A face-selective brain response to stimulus repetitions. NeuroReport, 15, 1501–1505. [PubMed]
Schweinberger, S. R., & Neumann, M. F. (2016). Repetition effects in human ERPs to faces. Cortex, 80, 141–153. [PubMed]
Schweinberger, S. R., Pickering, E. C., Jentzsch, I., Burton, A. M., & Kaufmann, J. M. (2002). Event-related brain potential evidence for a response of inferior temporal cortex to familiar face repetitions. Cognitive Brain Research, 14, 398–409. [PubMed]
Sommer, W., Kotowski, K., Shi, Y., Switonski, A., Hildebrandt, A., & Stapor, K. (2023). Explicit face memory abilities are positively related to the non-intentional encoding of faces: Behavioral and ERP evidence. Biological Psychology, 183, 108672. [PubMed]
Sommer, W., Stapor, K., Kończak, G., Kotowski, K., Fabian, P., Ochab, J., et al. (2021). The N250 event-related potential as an index of face familiarity: A replication study. Royal Society Open Science, 8, 202356. [PubMed]
Stahl, J., Wiese, H., & Schweinberger, S. R. (2010). Learning task affects ERP-correlates of the own-race bias, but not recognition memory performance. Neuropsychologia, 48, 2027–2040. [PubMed]
Stekelenburg, J. J., & de Gelder, B. (2004). The neural correlates of perceiving human bodies: An ERP study on the body-inversion effect. NeuroReport, 15, 777–780. [PubMed]
Takeda, Y., Suzuki, K., Kawato, M., & Yamashita, O. (2019). MEG source imaging and group analysis using VBMEG. Frontiers in Neuroscience, 13, 241. [PubMed]
Tanaka, H. (2016). Facial cosmetics exert a greater influence on processing of the mouth relative to the eyes: Evidence from the N170 event-related potential component. Frontiers in Psychology, 7, 1359.
Tanaka, H. (2018a). Face-sensitive P1 and N170 components are related to the perception of two-dimensional and three-dimensional objects. NeuroReport, 29, 583–587. [PubMed]
Tanaka, H. (2018b). Length of hair affects P1 and N170 latencies for perception of women's faces. Perceptual and Motor Skills, 125, 1011–1028. [PubMed]
Tanaka, H. (2020). Mental rotation of alphabet characters affects the face-sensitive N170 component. NeuroReport, 31, 897–901. [PubMed]
Tanaka, H. (2021). Lip color affects ERP components in temporal face perception processing. Journal of Integrative Neuroscience, 20, 1029–1038. [PubMed]
Tanaka, J. W., Curran, T., Porterfield, A. L., & Collins, D. (2006). Activation of preexisting and acquired face representations: The N250 event-related potential as an index of face familiarity. Journal of Cognitive Neuroscience, 18, 1488–1497. [PubMed]
Taylor, J. C., Wiggett, A. J., & Downing, P. E. (2007). Functional MRI analysis of body and body part representations in the extrastriate and fusiform body areas. Journal of Neurophysiology, 98, 1626–1633. [PubMed]
Thierry, G., Pegna, A. J., Dodds, C., Roberts, M., Basan, S., & Downing, P. (2006). An event-related potential component sensitive to images of the human body. Neuroimage, 32, 871–879. [PubMed]
Tsantani, M., Kriegeskorte, N., Storrs, K., Williams, A. L., McGettigan, C., & Garrido, L. (2021). FFA and OFA encode distinct types of face identity information. Journal of Neuroscience, 41, 1952–1969. [PubMed]
Volfart, A., Yan, X., Maillard, L., Colnat-Coulbois, S., Hossu, G., Rossion, B., et al. (2022). Intracerebral electrical stimulation of the right anterior fusiform gyrus impairs human face identity recognition. Neuroimage, 250, 118932. [PubMed]
Werheid, K., Schacht, A., & Sommer, W. (2007). Facial attractiveness modulates early and late event-related brain potentials. Biological Psychology, 76, 100–108. [PubMed]
Wicker, B., Michel, F., Henaff, M. A., & Decety, J. (1998). Brain regions involved in the perception of gaze: A PET study. Neuroimage, 8, 221–227. [PubMed]
Wiese, H., Kaufmann, J. M., & Schweinberger, S. R. (2014). The neural signature of the own-race bias: Evidence from event-related potentials. Cerebral Cortex, 24, 826–835. [PubMed]
Wojciulik, E., Kanwisher, N., & Driver, J. (1998). Covert visual attention modulates face-specific activity in the human fusiform gyrus: fMRI study. Journal of Neurophysiology, 79, 1574–1578. [PubMed]
Zhao, Y., Zhen, Z., Liu, X., Song, Y., & Liu, J. (2018). The neural network for face recognition: Insights from an fMRI study on developmental prosopagnosia. Neuroimage, 169, 151–161. [PubMed]
Zheng, X., Mondloch, C. J., Nishimura, M., Vida, M. D., & Segalowitz, S. J. (2011). Telling one face from another: Electrocortical correlates of facial characteristics among individual female faces. Neuropsychologia, 49, 3254–3264. [PubMed]
Zheng, X., Mondloch, C. J., & Segalowitz, S. J. (2012). The timing of individual face recognition in the brain. Neuropsychologia, 50, 1451–1461. [PubMed]
Zimmer, M., & Kovács, G. (2011). Electrophysiological correlates of face distortion after-effects. Quarterly Journal of Experimental Psychology, 64, 533–544. [PubMed]
Zimmer, M., Zbanţ, A., Németh, K., & Kovács, G. (2015). Adaptation duration dissociates category-, image-, and person-specific processes on face-evoked event-related potentials. Frontiers in Psychology, 6, 1945. [PubMed]
Zimmermann, K. M., Stratil, A.-S., Thome, I., Sommer, J., & Jansen, A. (2019). Illusory face detection in pure noise images: The role of interindividual variability in fMRI activation patterns. PLoS One, 14, e0209310. [PubMed]
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.