Abstract

Individual differences in perceiving, learning, and recognizing faces, summarized under the term face cognition, have been shown on the behavioral and brain level, but connections between these levels have rarely been made. We used ERPs in structural equation models to determine the contributions of neurocognitive processes to individual differences in the accuracy and speed of face cognition as established by Wilhelm, Herzmann, Kunina, Danthiir, Schacht, and Sommer [Individual differences in face cognition, in press]. For 85 participants, we measured several ERP components and, in independent tasks and sessions, assessed face cognition abilities and other cognitive abilities, including immediate and delayed memory, mental speed, general cognitive ability, and object cognition. Individual differences in face cognition were unrelated to domain-general visual processes (P100) and to processes involved with memory encoding (Dm component). The ability of face cognition accuracy was moderately related to neurocognitive indicators of structural face encoding (latency of the N170) and of activating representations of both faces and person-related knowledge (latencies and amplitudes of the early and late repetition effects, ERE/N250 and LRE/N400, respectively). The ability of face cognition speed was moderately related to the amplitudes of the ERE and LRE. Thus, a substantial part of individual differences in face cognition is explained by the speed and efficiency of activating memory representations of faces and person-related knowledge. These relationships are not moderated by individual differences in established cognitive abilities.

INTRODUCTION

People vary in their skills to perceive, learn, and recognize faces. These variations range from superior face recognition (Russell, Duchaine, & Nakayama, 2007) in people who “never forget a face” to severe impairments in people with prosopagnosia (Duchaine & Nakayama, 2005). Individual differences in face cognition have been shown on the behavioral level and in brain structure and activation (Alexander et al., 1999; Clark et al., 1996). Relationships between individual differences in indicators of neural activity and individual differences in behavioral performance can reveal the neural processes that underlie individual variation, both on a functional level and on the level of particular brain systems. Some studies have attempted to establish such brain–behavior relationships across individuals (Rotshtein, Geng, Driver, & Dolan, 2007; Schretlen, Pearlson, Anthony, & Yates, 2001), but they only considered single tasks or single neurocognitive processes. In the present study, we used ERPs to estimate relationships between neurocognitive and behavioral indicators of the normal variability in face cognition and in established cognitive abilities.

Wilhelm et al. (in press) provided evidence that performance in several face cognition tasks (see Appendix and Herzmann, Danthiir, Schacht, Sommer, & Wilhelm, 2008) can be attributed to individual differences in three separable abilities: accuracy of face perception, accuracy of face memory, and speed of face cognition (cf. Figure 2). Face perception is the ability to perceive faces and to discern information such as facial features and their configuration. Face memory is the ability to learn faces and to retrieve them from long-term memory. The speed of face cognition is the ability to perceive, learn, and recognize faces quickly. Wilhelm et al. showed that face cognition abilities were dissociated from one another and also from immediate and delayed memory, mental speed, general cognitive ability, and object cognition. Structural equation modeling revealed that none of the three abilities of face cognition could be reduced to these established abilities. Face cognition therefore represents a set of distinct mental abilities in their own right.

Having established the structure of individual differences in face cognition on the behavioral level, we could then link individual differences in neurocognitive processes associated with vision or face cognition to these abilities. We used ERP components as indicators of neurocognitive processes. The particular strength of ERP components is their high temporal resolution, which allows successive or even parallel activations of distinct brain systems to be measured over time. Relationships between specific ERP components and individual differences in face cognition abilities would provide fine-grained evidence for neurocognitive processes underlying individual differences on the performance level. Face cognition abilities can, in turn, be assigned to the specific brain systems to whose activation the ERP components are linked. The success of this endeavor depends on the selection of ERP components that show high construct validity and are known to be related to specific neurocognitive subprocesses. ERP research has identified a variety of ERP components sensitive to specific subprocesses of face cognition. The ERP components used in the present study are briefly reviewed below.

The occipital P100 reflects processing of domain-general, low-level stimulus features. There is some support for face sensitivity of the P100; modulations in its amplitude and latency were found when face stimuli were inverted (Doi, Sawada, & Masataka, 2007). The P100 is generated in extrastriate visual brain areas (Doi et al., 2007).

The occipito-temporal N170 is related to the configural encoding of facial features and to their integration into a holistic percept (cf. Deffke et al., 2007). The N170 is larger for faces than for other objects (Bentin, Allison, Puce, Perez, & McCarthy, 1996). Although changes in the physical properties of portraits influence its amplitude and latency (Itier & Taylor, 2004), the N170 is largely unaffected by the familiarity of faces (Herzmann, Schweinberger, Sommer, & Jentzsch, 2004). Various brain areas have been suggested as sources of the electrical N170, including fusiform, lingual, or occipito-temporal gyri. Its magnetic counterpart, the M170, has been consistently localized in fusiform gyrus (Deffke et al., 2007).

The so-called difference due to memory (Dm) is measured in the paradigm of subsequent memory. It reflects encoding of items (words or faces) into long-term memory and is predictive of their subsequent recognition (Sommer, Schweinberger, & Matt, 1991). The Dm for faces is independent of fluctuations in attention or perception during memory encoding (Sommer et al., 1991). A broad network of cooperating brain regions, including inferior frontal areas, bilateral medial temporal lobe, and face-responsive regions in fusiform gyrus, has been identified as the neural generator of the Dm (Lehmann et al., 2004; Paller & Wagner, 2002).

The early repetition effect (ERE or N250) is measured in priming experiments. It is related to the activation of structural representations of faces in long-term memory and to the identification of familiar faces. The ERE is more pronounced for familiar as compared to unfamiliar faces (Pfütze, Sommer, & Schweinberger, 2002; Schweinberger, Pickering, Jentzsch, Burton, & Kaufmann, 2002), and it is larger for personally familiar than for famous faces (Herzmann et al., 2004). It is thought to originate in fusiform gyrus (Eger, Schweinberger, Dolan, & Henson, 2005; Schweinberger et al., 2002).

The late repetition effect (LRE or N400) is also measured in priming experiments and reflects the activation of person-related knowledge in long-term memory. The LRE is larger for familiar than for unfamiliar faces (Pfütze et al., 2002; Schweinberger et al., 2002), and for personally familiar than for famous faces (Herzmann et al., 2004). Because the LRE represents the activation of person-related knowledge, it can be assumed to originate in those regions of the extended neural system involved in retrieving that knowledge (i.e., anterior temporal cortex, precuneus, posterior cingulate cortex) as proposed in the model of familiar face recognition by Gobbini and Haxby (2007).

Some previous studies used ERPs or neuroimaging data to specify relationships between neural processing and mental abilities at the level of individual differences (Jolij et al., 2007). In the field of face cognition, such studies are rare (Rotshtein et al., 2007). Only Schretlen et al. (2001) measured neurocognitive and behavioral indicators in independent tasks; using the Benton Face Task, they attempted to identify sociodemographic, cognitive, and neuroanatomical factors that contribute to discriminating unfamiliar faces. Although these studies provided some evidence for brain–behavior relationships for isolated tasks and single neurocognitive functions, no study has yet investigated relationships between individual differences in ERP components and face cognition abilities measured in independent tasks.

In this study, we aimed to relate individual differences in face cognition abilities (Wilhelm et al., in press) to several specific neurocognitive subprocesses of vision and face cognition, as reflected in particular ERP components. We used latent variable techniques (Bollen, 1989) to estimate brain–behavior relationships as opposed to the bivariate correlations applied in previous studies.

Our expectations of specific brain–behavior relationships were based on theoretical assumptions about the functional significance of the ERP components that we used. Contributions to face perception were expected from processes reflected in P100 and in N170 because both components are independent of face familiarity. Contributions to face memory were expected from processes reflected in the Dm, ERE, and LRE because these components are related to learning and recognizing faces. For the ERE and LRE, contributions were expected to be higher for learned (familiar) than for unfamiliar faces. Latencies of ERP components were expected to be related to the speed of face cognition. Because this factor subsumes processes of perception, learning, and recognition, all ERP components were supposed to contribute, possibly to varying degrees.

Having estimated relationships between ERP components and face cognition abilities, we aimed to test whether these brain–behavior relationships are specific to face cognition or moderated by the following established cognitive abilities: immediate and delayed memory, mental speed, general cognitive ability, and object cognition. We calculated correlations between ERP components and these cognitive abilities and used structural equation modeling to test for the face specificity of the brain–behavior relationships.

The amount and kind of data obtained in the present study made it possible to address an interesting side aspect: the relationships among ERP components. Their specification would provide evidence for the structure of individual differences in face cognition on the neural level and clarify whether distinct ERP components (e.g., ERE and LRE) represent separable neural subprocesses (e.g., activation of faces vs. activation of person-related knowledge).

The present study involved a subset of 85 individuals taken from the larger sample of 209 healthy young adults of Study 2 by Wilhelm et al. (in press). All participants took part in the behavioral test session. In addition, the subset completed two further sessions with EEG recording. In the first session, participants completed the Dm experiment and learned 40 unfamiliar faces in a standardized learning experiment. One week later, in the second session, the learned faces were used as familiar primes and targets in a priming experiment. Experimentally learned faces were used instead of faces of celebrities, friends, or family members in order to control for the amount of perceptual expertise and associated person-related knowledge.

METHODS

The behavioral test study consisted of several tasks that measured face cognition and the four other cognitive abilities. The Appendix briefly describes those tasks relevant for the present study; more detailed information can be found in Wilhelm et al. (in press) and Herzmann et al. (2008).

Participants

Eighty-five participants (46 women) aged between 18 and 35 years (M = 25 years, SD = 4.5 years) were paid for their participation in this study. The occupational background of the participants was heterogeneous: 10.6% were high school students, 51.7% were university students with a variety of majors, 21.2% were employed, and 16.5% were unemployed. According to the Edinburgh Handedness Inventory (Oldfield, 1971), 7 participants were left-handed, 75 right-handed, and 3 ambidextrous. All participants reported normal or corrected-to-normal visual acuity.

Stimuli and Apparatus

All photographs of unfamiliar faces were taken from the same set of black-and-white portraits (Endl et al., 1998). For the Dm experiment, a total of 360 unfamiliar faces served as stimuli. Forty unfamiliar faces were learned for long-term retention. Sixty additional unfamiliar faces were used as distracters. In the priming experiment, 160 unfamiliar faces served as unfamiliar prime or target stimuli in addition to the learned faces. Sexes of faces were represented equally in all stimulus sets. Portraits were chosen with similar luminance, direction of gaze, and distinctiveness. All faces showed neutral or weakly smiling expressions (without exposing teeth) and had no extraneous features such as beards or glasses. All portraits were fitted into a vertical ellipse of 259 by 388 pixels (7.0 by 10.2 cm; 4.0° by 5.8° of visual angle) that extended only up to the hairline (see Figure 1).
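As a check on the reported stimulus geometry, the stated visual angles follow directly from the portrait size and the 1-m viewing distance given under Procedure (a small worked calculation, not part of the original text):

```latex
\theta_{\text{width}} = 2\arctan\!\left(\frac{7.0\ \text{cm}/2}{100\ \text{cm}}\right) \approx 4.0^{\circ},
\qquad
\theta_{\text{height}} = 2\arctan\!\left(\frac{10.2\ \text{cm}/2}{100\ \text{cm}}\right) \approx 5.8^{\circ}
```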

Figure 1. 

Trial sequences of the Dm experiment (A and B) and the priming experiment (C). (A) A study block trial of the Dm experiment. (B) A recognition block trial of the Dm experiment. (C) An unprimed trial of the priming experiment.


Figure 2. 

Measurement model of face cognition (χ2 = 99, df = 71, RMSEA = .069, CFI = .951, N = 85) for the subset of individuals participating in the EEG sessions. Information on the indicators for each latent factor of face cognition can be found in the Appendix.


Procedure

Both EEG sessions were conducted in the same electrically shielded, sound-attenuated chamber. Stimuli were always shown in the center of a light gray computer monitor at a viewing distance of one meter. Prior to each experiment, participants received written task instructions and completed practice trials to become familiar with the procedures. The order of stimuli and the assignment of response buttons were kept constant for all participants to ensure comparability of task demands.

The first part of the learning session was the Dm experiment. Four study blocks alternated with recognition blocks. Forty-five faces had to be memorized in each study block. In the subsequent recognition block, they had to be discriminated from 45 new, unfamiliar distracter faces mixed in randomly with the learned faces. Only short breaks were allowed between study and recognition blocks, but longer ones were allowed before new study blocks started.

Each trial in the study blocks (Figure 1A) started with the presentation of a fixation cross for 200 msec, followed by the presentation of a target face for 1 sec. Interstimulus intervals were 2 sec. Participants were instructed to look carefully at the faces and try to remember them for the following recognition block; no overt response was required. Each trial in the recognition blocks (Figure 1B) started with the presentation of a fixation cross for 200 msec, followed by the presentation of a target or a distracter face for 1.5 sec. Then a horizontal 4-point rating scale appeared on the screen below the face. The rating scale consisted of four blue squares labeled with the German words for “unfamiliar,” “rather unfamiliar,” “rather familiar,” and “familiar.” Participants gave familiarity ratings for each face by moving a small red square with the computer mouse into one of the blue boxes and clicking the mouse button. Participants were encouraged to rate the familiarity of each item according to their first impression, but without time limit. The interval between response and the next fixation cross was 1 sec.

After the Dm experiment was completed, the electrode cap was removed and the learning for long-term retention was conducted in a different room according to the procedure described by Herzmann and Sommer (2007). In short, learning started with an introduction block in which each face was shown for 2 sec with an interstimulus interval of 2 sec. The introduction block was followed by 15 further blocks of the same basic structure, each including all 40 stimuli to be learned plus four novel and unfamiliar distracter faces mixed in randomly. Blocks were separated by short breaks. Participants were instructed to learn the faces as well as possible so as to be able to recognize them 1 week later. To facilitate the task, participants had to indicate the familiarity or unfamiliarity of each face by pressing the right or left key, respectively, after which they received visual feedback about response correctness.

Face recognition was tested in a priming experiment 7 days after the learning session (usually at the same time of day). In order to acquire enough trials for ERP analysis, familiar target items were shown in two random sequences, always intermixed with novel, unfamiliar nontargets.

Trials in the priming experiment (Figure 1C) started with a fixation cross shown for 200 msec, which was replaced by a prime face presented for 500 msec, followed by a fixation circle. After 1.3 sec, the circle was replaced by a target face for 1.3 sec. The interval between prime onsets was 5 sec. The prime for a target could be either the same stimulus (primed condition) or an unrelated stimulus (unprimed condition), that is, an unfamiliar prime for a familiar target or a familiar prime for an unfamiliar target. Primes and target faces were of the same sex.

Participants were told to ignore the prime stimuli and indicate by keypresses with index fingers whether the target face was familiar. Familiar and unfamiliar faces were again assigned to the right and left response key, respectively. Both speed and accuracy were emphasized, but no feedback was given about response correctness. Short breaks were allowed after every 40 trials.

Seventy-six of the 85 participants completed the behavioral test study after the two EEG sessions, with a varying delay ranging from 1 to 41 days. The nine other participants completed the behavioral test study 1 to 8 days before the EEG sessions, but never between them.

Event-related Potential Recording

EEG recording was the same for the Dm and priming experiments. The EEG was recorded with sintered Ag/AgCl electrodes mounted in an electrode cap (Easy-Cap, Easycap GmbH, Herrsching-Breitbrunn, Germany) at the scalp positions Fz, Cz, Pz, Iz, Fp1, Fp2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T7, T8, P7, P8, FT9, FT10, P9, P10, PO9, PO10, F9′, F10′, TP9, and TP10 (Pivik et al., 1993). The F9′ and F10′ electrodes were positioned 2 cm anterior to F9 and F10 at the outer canthi of the left and right eyes, respectively. TP9 and TP10 refer to inferior temporal locations over the left and right mastoids, respectively. Initial common reference was TP9; AFz served as ground. Impedances were kept below 5 kΩ. The horizontal electrooculogram (EOG) was recorded from F9′ and F10′. The vertical EOG was monitored from Fp1, Fp2, and two additional electrodes below the left and right eyes, respectively. All signals were recorded with a band-pass of 0.05 to 70 Hz and with a sampling rate of 1000 Hz.

Off-line, ERPs were down-sampled to a rate of 200 Hz. Epochs starting 100 msec before target onset and lasting 1100 msec were generated from the continuous record. ERPs were digitally low-pass filtered at 30 Hz for P100 and N170 and at 10 Hz for all other ERP components. ERPs were recalculated to average reference. Trials with nonocular artifacts, saccades, and incorrect behavioral responses were discarded (0.83% in the Dm experiment and 0.93% in the priming experiment). Trials with ocular blink contributions were corrected using the surrogate variant of the multiple source eye correction (Berg & Scherg, 1994). ERPs were aligned to a 100-msec baseline before target onset, divided into four blocks within each experiment, and then averaged separately for each channel and experimental condition. Table 1 shows the average, standard deviation, and minimum number of trials for all observed ERP components in each of the four blocks as well as the sum across all blocks.
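For readers who wish to reproduce a comparable offline pipeline, the following is a minimal sketch in MNE-Python. The original analysis did not use this software, the file name and event code are placeholders, and the surrogate multiple source eye correction has no direct one-line equivalent here, so it is only indicated by a comment.

```python
import mne

# Load the continuous EEG (file name and format are hypothetical).
raw = mne.io.read_raw_brainvision("sub01_priming.vhdr", preload=True)

# Down-sample from 1000 Hz to 200 Hz, as in the reported analysis.
raw.resample(200)

# Recompute the average reference (the recording reference was TP9).
raw.set_eeg_reference("average")

# Epochs from -100 msec to +1000 msec around target onset; the 100-msec
# prestimulus interval serves as baseline. Event code 1 is an assumption.
events, _ = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id={"target": 1},
                    tmin=-0.1, tmax=1.0, baseline=(-0.1, 0.0), preload=True)

# NOTE: the study corrected blinks with the surrogate multiple source eye
# correction (Berg & Scherg, 1994) and rejected trials with nonocular
# artifacts, saccades, and incorrect responses; insert equivalent steps here.

# Low-pass filters: 30 Hz for P100/N170, 10 Hz for the later components.
epochs_early = epochs.copy().filter(l_freq=None, h_freq=30)
epochs_late = epochs.copy().filter(l_freq=None, h_freq=10)

# Average per condition (here collapsed over conditions for brevity).
evoked = epochs_late.average()
```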

Table 1. 

Average, Standard Deviation, and Minimum Number of Trials for Neurocognitive Indicators


                Block 1           Block 2           Block 3           Block 4           Sum
                M    SD   Min     M    SD   Min     M    SD   Min     M    SD   Min     M     SD    Min
P100/N170       28   5.2  16      29   5.6  14      26   6.3  15      28   6.2  13      110   20.0  65
Dm              45   1.5  35      45   0.9  41      45   0.9  41      44   1.4  36      178   3.6   156
ERE/LRE         35   4.3  20      36   4.1  21      37   3.7  20      36   3.3  22      144   14.2  86

Numbers of trials are given for each of the four blocks in which ERP components were measured and for the sum across all blocks.

The P100 and N170 were measured within the same trials, as were the ERE and LRE; the numbers of trials are therefore identical within each pair of ERP components.

Performance Measurement

Recognition performance in the Dm experiment was assessed by determining the area below the ROC curve [P(A); Sommer et al., 1991; Green & Swets, 1966], for which the faces from the study blocks constituted signals and the distracters noise.
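A minimal sketch of how P(A) can be obtained from the 4-point familiarity ratings, assuming ratings are coded 1 (“unfamiliar”) to 4 (“familiar”); this illustrates the logic of the area below the ROC curve and is not the authors' original computation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: 1 = studied face (signal), 0 = distracter (noise);
# ratings on the 4-point scale from 1 ("unfamiliar") to 4 ("familiar").
is_old = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])
ratings = np.array([4, 3, 2, 1, 2, 1, 4, 3, 3, 2])

# With the graded familiarity rating as the decision variable, the area under
# the ROC curve estimates P(A): the probability that a randomly chosen studied
# face receives a higher rating than a randomly chosen distracter (ties count
# as one half).
p_a = roc_auc_score(is_old, ratings)
print(f"P(A) = {p_a:.3f}")
```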

Responses in the priming experiment were considered correct if the appropriate key was pressed between 100 and 2000 msec after target onset. Mean reaction times were calculated for correct responses only. Hit rates were used as measures of accuracy.

Data Analysis

Data analysis for the behavioral test study is described in detail in Wilhelm et al. (in press) and Herzmann et al. (2008). The proportion of correct responses was used as the accuracy measure for all indicators of face perception, face memory, immediate and delayed memory, general cognitive ability, and object cognition. Only reaction times for correct responses were used as speed measures for all indicators of face cognition speed and mental speed. Reaction times were inverted to yield a measure that indicates the number of correct responses per second. Thus, good (i.e., fast) performance is reflected in high test values.
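As a small illustration of this speed scoring (the exact scaling used by Wilhelm et al., in press, may differ): inverting correct-response reaction times expressed in seconds gives the number of correct responses per second, so faster responding yields higher values.

```python
import numpy as np

# Hypothetical reaction times (msec) for correct trials of one speed indicator.
rt_ms = np.array([620, 540, 710, 480])

speed = 1.0 / (rt_ms / 1000.0)   # correct responses per second
print(speed.round(2))            # [1.61 1.85 1.41 2.08]
```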

Relationships between neurocognitive and behavioral indicators were estimated with latent variable techniques (Bollen, 1989). These techniques comprehensively test theories about linear relationships between multiple entities, and they explicitly account for measurement errors. They draw clear distinctions between constructs (i.e., latent factors), which, like mental abilities, are not directly observed, and indicators (i.e., manifest variables), which represent the observed task values. Latent factors and their mutual relationships are deduced and validated in measurement models using confirmatory factor analysis (e.g., Figures 2 and 4). Latent factors represent the common variance of their multiple indicators. Indicators that assess the same latent factor should correlate higher with one another than with indicators that assess different latent factors. Structural equation models (e.g., Figures 5 and 6) accurately estimate relationships between multiple measurement models and allow the reliability of a model to be assessed explicitly.

To model ERP components as latent factors, the experiments were divided into four blocks. ERP components were measured in each block for each participant, yielding four manifest variables. We established measurement models for each ERP component and calculated regressions of neurocognitive and behavioral indicators in structural equation models. The quality of these models was estimated using multiple formal statistical tests and fit indices: the chi-square test, Root Mean Square Error of Approximation (RMSEA), and Comparative Fit Index (CFI).
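To illustrate this step (the original models were estimated in Amos 7; the use of the Python package semopy and the column names below are assumptions), a measurement model with the four block-wise measures loading on one latent factor can be specified in lavaan-style syntax:

```python
import pandas as pd
import semopy

# One row per participant with the ERP measure from each of the four blocks
# (file and column names are hypothetical).
df = pd.read_csv("n170_latency_blocks.csv")  # columns: block1 ... block4

# A single latent factor accounts for the four block-wise indicators.
desc = "N170_latency =~ block1 + block2 + block3 + block4"

model = semopy.Model(desc)
model.fit(df)

# Global fit indices, including chi-square, RMSEA, and CFI.
print(semopy.calc_stats(model).T)
```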

EEG recording offers information about the activation of a neural process (amplitude), the spatial arrangement of this activation (topography), and the change in activation over time (latency). In order to exploit all these aspects, we used the topographic component recognition method (TCR, Herzmann & Sommer, 2007; Brandeis, Naylor, Halliday, Callaway, & Yano, 1992) to obtain individual test values of each participant's Dm, ERE, and LRE. TCR estimates the contribution of a specific ERP component to a given ERP. We selected from grand mean ERPs a standardized template map for the ERP component in question. We then calculated the covariance over time between this template map and nonstandardized maps of the ERPs in the dataset of every participant. This resulted in estimates of the amount of the respective component contained within the ERPs of every individual. Two measures were used as indicators of ERP components: (a) the peak amplitude and (b) the peak latency of the covariation between the template map and the individual ERP. The individual measure of the Dm was calculated as the covariation of the grand mean Dm with each trial in the study blocks of the Dm experiment (regardless of whether faces were later recognized or forgotten); the covariation was then averaged across single trials. We thus obtained an indicator of memory encoding independent of recognition performance. For the EREs and LREs, the covariation was calculated for learned and unfamiliar faces separately using the grand mean EREs and LREs (i.e., difference waves of primed minus unprimed trials) and individual difference waves.
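The following NumPy sketch illustrates the TCR logic as verbally described above; the array shapes, the standardization details, and the peak search are assumptions reconstructed from that description rather than the authors' implementation.

```python
import numpy as np

def tcr(template_map, erp, times):
    """Topographic component recognition (sketch).

    template_map : (n_channels,) grand-mean map of the component of interest
    erp          : (n_channels, n_times) individual ERP or difference wave
    times        : (n_times,) time axis in msec
    Returns the peak amplitude and peak latency of the spatial covariance.
    """
    # Standardize the template map across channels (zero mean, unit variance).
    t = (template_map - template_map.mean()) / template_map.std()

    # Spatial covariance between the template and the nonstandardized ERP map
    # at every time point (mean cross-product over channels).
    maps = erp - erp.mean(axis=0, keepdims=True)
    cov = (t[:, None] * maps).mean(axis=0)   # shape: (n_times,)

    # The peak of the covariance time course gives amplitude and latency.
    peak = int(np.argmax(cov))
    return cov[peak], times[peak]

# In practice the search would be restricted to the component's time window,
# e.g., 200-500 msec for the ERE difference wave or 600-900 msec for the Dm.
```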

RESULTS

Behavioral Data

The measurement model of face cognition was first applied to the subset of 85 participants (see Figure 2). The fit of this model was good (χ2 = 99, df = 71, RMSEA = .069, CFI = .951), but in contrast to the model obtained for all 209 participants, the correlation between accuracy of face perception and accuracy of face memory was considerably larger (r = .90 compared to r = .75). Constraining this correlation to 1 did not decrease the model fit (Δχ2 = 1, Δdf = 1, RMSEA = .068, CFI = .951); both factors could thus be taken to represent a single ability, which we called face cognition accuracy.
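The decision to merge the two accuracy factors rests on a nested-model comparison; with the reported difference values (the constrained χ2 of 100 with df = 72 is implied by them), the constrained model does not fit significantly worse:

```latex
\Delta\chi^{2} = \chi^{2}_{\text{constrained}} - \chi^{2}_{\text{free}} = 100 - 99 = 1,
\qquad \Delta df = 72 - 71 = 1,
\qquad p = P\!\left(\chi^{2}_{1} \ge 1\right) \approx .32
```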

All participants successfully completed both EEG sessions. In the Dm experiment, recognition performance was clearly above chance [t(83) = 28.1, p < .001, P(A): M = 0.695, SD = 0.06, SE = 0.007]. For learned faces, participants used the category of low stimulus familiarity less often than the other categories [F(3, 249) = 36.1, p < .001, Ms = 16.7%, 21.1%, 25.3%, and 36.9%, from unfamiliar to familiar]. The reverse pattern was found for the distracters [F(3, 249) = 55.8, p < .001, Ms = 37.1%, 31.2%, 21.6%, and 10.1%, from unfamiliar to familiar].

In the priming experiment, hit rates were generally high and showed an interaction of priming and familiarity [F(1, 84) = 24.8, p < .001]. For learned faces, hit rates were higher in the primed (M = 0.90, SD = 0.08, SE = 0.01) than in the unprimed condition (M = 0.88, SD = 0.08, SE = 0.01) [F(1, 84) = 12.7, p < .01, Cohen's d = 0.25]. The reverse pattern was found for unfamiliar faces, which showed lower hit rates in the primed (M = 0.89, SD = 0.12, SE = 0.01) than in the unprimed condition (M = 0.91, SD = 0.09, SE = 0.01) [F(1, 84) = 17.6, p < .001, Cohen's d = −0.19].

Reaction times revealed strong main effects of priming [F(1, 84) = 545.5, p < .001] and familiarity [F(1, 84) = 167.9, p < .001]. Because the priming effect was stronger for learned than for unfamiliar faces, there was also an interaction of both factors [F(1, 84) = 88.4, p < .001]. Priming speeded up reaction times for learned faces (for primed condition M = 573, SD = 109, SE = 11; for unprimed condition M = 748, SD = 101, SE = 11) [F(1, 84) = 1065.1, p < .001, Cohen's d = 1.66], and to a lesser extent, for unfamiliar faces (for primed condition M = 695, SD = 115, SE = 12; unprimed condition M = 791, SD = 120, SE = 13) [F(1, 84) = 187.9, p < .001, Cohen's d = 0.81].

Psychophysiological Data

Figure 3 depicts amplitudes and topographies of all ERP components. The P100 for subsequently recognized faces in the study blocks of the Dm experiment was measured as peak latency and peak amplitude between 75 and 135 msec at PO10. Mean peak latency was 103 msec (SD = 9.2, SE = 0.99) and mean peak amplitude was 8.5 μV (SD = 4.1, SE = 0.45). No differences due to memory encoding were observed in the P100, Fs < 1. The N170 for subsequently recognized faces in the study blocks of the Dm experiment was measured as peak latency and peak amplitude between 120 and 190 msec at P10. Mean peak latency was 155 msec (SD = 11.1, SE = 1.2) and mean peak amplitude was −7.0 μV (SD = 4.9, SE = 0.53). No differences due to memory encoding were found for the N170, Fs < 2.7.

Figure 3. 

Amplitudes and topographies of all ERP components. Amplitudes are depicted at the electrode sites where the ERP components were most pronounced (at PO10 for the P100, at P10 for the N170, Pz for the Dm, TP10 for early repetition effects, and Pz for late repetition effects). For P100 and N170, topographies show the distribution of amplitudes for a single time point. For all other ERP components, topographical distributions are shown across time segments used for statistical analyses and the topographic component recognition. The topography of the Dm depicts the difference wave between mean amplitudes of subsequently recognized minus subsequently forgotten faces. Topographies of priming effects show difference waves between mean amplitudes of primed minus unprimed faces. Spherical spline interpolation was used. Equipotential lines are separated by 0.5 μV; negativity is gray.


Figure 4. 

Measurement model for ERP components (here the N170 latency) (χ2 = 0.13, df = 2, SRMR = .003, CFI = 1.0, n = 85).


Figure 5. 

Structural equation model used to estimate relationships between face cognition abilities and ERP components (here the N170 latency) (χ2 = 184, df = 128, RMSEA = .072, CFI = .933, n = 85). The correlation between face cognition accuracy and the speed of face cognition was a priori set to zero.


Figure 6. 

Structural equation model used to test the independence of the relationships between face cognition abilities and ERP components (here the ERE amplitude) from established cognitive abilities (here immediate and delayed memory [I & D memory] and mental speed) (χ2 = 207, df = 160, CFI = .934, RMSEA = .059, n = 85).


The Dm was quantified by comparing average amplitudes for subsequently recognized to subsequently forgotten faces between 600 and 900 msec after stimulus onset in the study blocks of the Dm experiment. Subsequently recognized faces differed significantly from subsequently forgotten faces [F(31, 2573) = 13.5, p < .001, Cohen's d = 0.56]; their difference wave showed the characteristic topography (Sommer et al., 1991). For correlational analyses, peak amplitude and peak latency of the TCR measure for the Dm were determined for each participant between 600 and 900 msec. Mean peak latency was 746 msec (SD = 73.4, SE = 8.0) and mean peak amplitude of the covariation with the template map was 10.1 (SD = 7.2, SE = 0.78).

The ERE was quantified by comparing average amplitudes for primed and unprimed faces between 260 and 330 msec after target onset. Primed and unprimed stimuli differed significantly for learned [F(31, 2604) = 73.8, p < .001, Cohen's d = 1.32] and unfamiliar faces [F(31, 2604) = 5.3, p < .01, Cohen's d = 0.35]. For learned faces, the ERE (i.e., difference wave of primed minus unprimed trials) showed the same topography as in other studies (Herzmann & Sommer, 2007; Pfütze et al., 2002). Amplitudes and topographies of EREs differed significantly between learned and unfamiliar faces [Fs(31, 2604) = 58.7 and 39.4, ps < .001, Cohen's ds = 1.18 and 0.96, respectively]. Different amplitudes indicated a larger priming effect for learned than for unfamiliar faces. Different topographies indicated separate underlying generators. Peak amplitudes and latencies of the TCR measures for the EREs were calculated from the difference waves (primed minus unprimed) between 200 and 500 msec. For learned faces, mean peak latency was 345 msec (SD = 43.1, SE = 4.67) and mean peak amplitude of the covariation was 16.8 (SD = 6.2, SE = 0.67); for unfamiliar faces, mean peak latency was 321 msec (SD = 53.0, SE = 5.74) and mean peak amplitude was 5.2 (SD = 2.4, SE = 0.26).

The LRE was quantified by comparing average amplitudes for primed and unprimed faces between 330 and 480 msec after target onset. Primed and unprimed trials differed significantly for learned [F(31, 2604) = 93.8, p < .001, Cohen's d = 1.49] and unfamiliar faces [F(31, 2604) = 21.8, p < .001, Cohen's d = 0.71]. The LRE (i.e., difference wave of primed minus unprimed trials) for learned faces showed the same topography as previously reported (Herzmann & Sommer, 2007; Pfütze et al., 2002). Amplitudes and topographies of LREs for learned and unfamiliar faces differed significantly [Fs(31, 2604) = 66.0 and 48.5, ps < .001, Cohen's ds = 1.25 and 1.07, respectively], indicating a larger priming effect and different underlying sources for learned than for unfamiliar faces. Amplitudes and topographies of the ERE and LRE for learned faces also differed significantly [Fs(31, 2604) = 36.3 and 42.6, ps < .001, Cohen's ds = 0.92 and 1.00, respectively]. Peak amplitudes and peak latencies of the TCR measures for the LREs were determined in difference waves (primed minus unprimed) between 300 and 700 msec. For learned faces, mean peak latency was 393 msec (SD = 47.7, SE = 5.17) and mean peak amplitude was 17.6 (SD = 6.8, SE = 0.74); for unfamiliar faces, mean peak latency was 465 msec (SD = 66.9, SE = 7.26) and mean peak amplitude was 10.4 (SD = 4.7, SE = 0.51).

Correlational Analyses

We tested whether the ERP components met the prerequisites for use in latent variable techniques. Internal consistencies for all ERP components were high (Cronbach's αs > .50). Skewness (ranging from −1.04 to 1.09) and kurtosis (ranging from −0.87 to 1.92) were in the normal range. Kolmogorov–Smirnov tests indicated no deviation from the normal distribution in the ERPs [.53 < KSs(85) < 1.08, ps > .20]. The ERP components thus met the prerequisites.
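A sketch of these distributional checks in SciPy/NumPy (variable names and the exact aggregation are assumptions; Cronbach's alpha is computed over the four block-wise indicators with the standard formula):

```python
import numpy as np
from scipy import stats

# Hypothetical data: one ERP measure per participant and block, shape (85, 4).
blocks = np.random.default_rng(0).normal(size=(85, 4))
score = blocks.mean(axis=1)  # aggregate indicator per participant

# Cronbach's alpha over the four blocks.
k = blocks.shape[1]
alpha = k / (k - 1) * (1 - blocks.var(axis=0, ddof=1).sum()
                       / blocks.sum(axis=1).var(ddof=1))

# Skewness, excess kurtosis, and a Kolmogorov-Smirnov test against a normal
# distribution with the sample mean and standard deviation.
skew = stats.skew(score)
kurt = stats.kurtosis(score)
ks_stat, ks_p = stats.kstest(score, "norm",
                             args=(score.mean(), score.std(ddof=1)))
print(round(alpha, 2), round(skew, 2), round(kurt, 2),
      round(ks_stat, 2), round(ks_p, 2))
```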

All parameters in the correlational analyses were estimated via the maximum likelihood algorithm in the statistical software Amos 7 (Arbuckle, 2007).

Measurement Models of ERP Components

We estimated measurement models to test whether only one latent factor accounted for individual differences in each ERP, and thus, if the ERP components were unidimensional. Figure 4 shows an example of a measurement model for ERP components. Table 2 summarizes regression weights for all measurement models and their model fit indices. For amplitude measures of the P100, Dm, ERE, and LRE, RMSEA indices were rather high and CFI indices were very good. This pattern might show that a very small amount of systematic variation is left in the residuals, although almost all variance of the indicators is accounted for by the latent factors (indicated by a high CFI). Another fit index, the Standardized Root Mean Square Residual (SRMR), was calculated to ensure psychometric quality. The SRMR measures the average of standardized residuals between observed and model-implied covariance matrices. SRMRs indicated good model fits (SRMRs < .08). All measurement models of ERP components thus showed good model fits.
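For reference, the SRMR as commonly defined (the standard formula, not quoted from the paper) is the square root of the mean squared standardized residual between the observed covariances and the model-implied covariances over the p observed variables:

```latex
\mathrm{SRMR} = \sqrt{\frac{2}{p(p+1)} \sum_{i=1}^{p} \sum_{j=1}^{i}
\left( \frac{s_{ij} - \hat{\sigma}_{ij}}{\sqrt{s_{ii}\, s_{jj}}} \right)^{2}}
```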

Table 2. 

Regression Weights (γ1 to γ4) and Fit Indices of Measurement Models for ERP Components

Latent Factor                          γ1    γ2    γ3    γ4    CFI    RMSEA   SRMR   χ2(df = 2)
P100 latency                           .84   .81   .86   .82   1.0    .000    .007   0.39
P100 amplitude                         .97   .93   .91   .95   .994   .126    .008   4.68
N170 latency                           .83   .86   .86   .85   1.0    .000    .003   0.13
N170 amplitude                         .92   .95   .95   .96   1.0    .000    .005   1.80
Dm latency                             .57   .63   .59   .68   1.0    .000    .012   0.32
Dm amplitude                           .85   .93   .96   .95   .994   .117    .013   4.32
ERE for learned faces latency          .72   .50   .74   .61   .997   .037    .033   2.23
ERE for learned faces amplitude        .48   .52   .70   .71   .922   .16     .052   6.30
ERE for unfamiliar faces latency       .38   .67   .51   .42   1.0    .000    .038   1.92
ERE for unfamiliar faces amplitude     .63   .63   .75   .78   .987   .083    .030   3.17
LRE for learned faces latency          .51   .57   .60   .62   1.0    .000    .020   0.78
LRE for learned faces amplitude        .59   .54   .74   .68   .961   .123    .042   4.56
LRE for unfamiliar faces latency       .28   .78   .55   .29   1.0    .000    .036   1.77
LRE for unfamiliar faces amplitude     .66   .66   .57   .55   1.0    .000    .028   1.81

Brain–Behavior Relationships for Face Cognition Abilities

Latent regression analyses

Contributions of neurocognitive indicators to individual differences in face cognition were determined as latent regressions in structural equation models. In these models, the correlation of face cognition speed with face cognition accuracy was a priori set to zero. Figure 5 illustrates an example of a structural equation model. Table 3 depicts regression weights for the ERP components and shows the fit indices of the structural equation models.
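Continuing the semopy illustration from the Data Analysis section (again an assumption about tooling, variable names, and the lavaan-style 0* constraint; the original models were fit in Amos 7), the structural model of Figure 5 regresses the two ability factors on the ERP latent factor and fixes their correlation to zero a priori:

```python
import pandas as pd
import semopy

# One row per participant: four block-wise ERP measures plus the behavioral
# indicators of the two ability factors (all names are placeholders).
df = pd.read_csv("brain_behavior.csv")

desc = """
N170_latency =~ block1 + block2 + block3 + block4
accuracy =~ acc_task1 + acc_task2 + acc_task3
speed =~ spd_task1 + spd_task2 + spd_task3
accuracy ~ N170_latency
speed ~ N170_latency
accuracy ~~ 0*speed
"""

model = semopy.Model(desc)
model.fit(df)
print(model.inspect())  # parameter estimates, including the latent regressions
```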

Table 3. 

Standardized Regression Weights and Fit Indices of Structural Equation Models for Latent Regression Analyses between ERP Components and Face Cognition Abilities

Latent Factor                          Face Cognition Accuracy   Face Cognition Speed   CFI    RMSEA   χ2(df = 130)
P100 latency                           .07                       .14                    .943   .065    175
P100 amplitude                         .09                       −.01                   .961   .061    170
N170 latency                           −.30*                     .19                    .932   .072    187
N170 amplitude                         .08                       −.10                   .967   .056    164
Dm latency                             −.20                      .11                    .947   .055    163
Dm amplitude                           .19                       −.07                   .938   .075    192
ERE for learned faces latency          −.33*                     −.12                   .958   .050    157
ERE for learned faces amplitude        .41**                     .46***                 .943   .059    167
ERE for unfamiliar faces latency       −.11                      .11                    .900   .080    200
ERE for unfamiliar faces amplitude     .13                       −.11                   .938   .062    172
LRE for learned faces latency          −.48***                   −.23                   .965   .045    152
LRE for learned faces amplitude        .31*                      .35**                  .954   .052    160
LRE for unfamiliar faces latency       .14                       .22                    .923   .066    178
LRE for unfamiliar faces amplitude     .14                       −.15                   .932   .063    173

*p < .05.

**p < .01.

***p < .001.

The N170 latency and the latencies and amplitudes of the ERE and LRE for learned faces contributed moderately to face cognition abilities. The P100, the N170 amplitude, the Dm, and the ERE and LRE for unfamiliar faces did not contribute significantly to individual differences in face cognition.

Linear regression analyses

In order to estimate the amount of variance explained by several neurocognitive indicators, linear regression analyses were calculated separately for face cognition accuracy and for the speed of face cognition. Only ERP components that contributed significantly to face cognition abilities (i.e., N170 latency, ERE and LRE for learned faces) were considered. Because ERP components obtained in the same experiment (e.g., ERE and LRE) as well as latencies and amplitudes of a single ERP component are interdependent, we used measures of ERP components obtained in different, experimentally independent blocks within a given experiment.

The model [F(5, 84) = 4.3, p < .01] for face cognition accuracy that included the N170 latency (β = −.16), the latencies and amplitudes of the ERE for learned faces (βs = −.09 and .35, respectively), and the latencies and amplitudes of the LRE for learned faces (βs = −.16 and .05, respectively) yielded an R of .46 and accounted for 21% of the variance. The model [F(2, 84) = 6.5, p < .01] for the speed of face cognition that included amplitudes of the ERE for learned faces (β = .33) and the LRE for learned faces (β = .10) yielded an R of .37 and accounted for 14% of the variance.
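The regression step can be illustrated with statsmodels (an assumption about tooling; the column names are placeholders for the factor scores and block-wise ERP measures described in the text): the ERP measures predict face cognition accuracy, and R² gives the share of explained variance.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame: one row per participant with the accuracy factor
# score and the five ERP predictors used in the reported model.
df = pd.read_csv("regression_data.csv")
predictors = ["n170_latency", "ere_latency", "ere_amplitude",
              "lre_latency", "lre_amplitude"]

X = sm.add_constant(df[predictors])
ols = sm.OLS(df["face_cognition_accuracy"], X).fit()

print(ols.rsquared)  # proportion of variance accounted for (R squared)
print(ols.params)    # regression weights; z-score variables first for betas
```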

Brain–Behavior Relationships for Established Cognitive Abilities

Relationships between neurocognitive processes and the four established cognitive abilities were determined in structural equation models using latent variables for ERP components and factor scores for cognitive abilities. Factor scores were obtained from the measurement model of established cognitive abilities determined by Wilhelm et al. (in press). Correlations were calculated between one neurocognitive process and one cognitive ability at a time. Table 4 shows correlations between ERP components and established cognitive abilities.

Table 4. 

Correlations of ERP Components with Established Cognitive Abilities

Latent Factor                          Object Cognition   Immediate and Delayed Memory   General Cognitive Ability   Mental Speed
P100 latency                           −.03               −.02                           −.07                        −.01
P100 amplitude                         .01                −.04                           −.12                        −.15
N170 latency                           −.31**             −.11                           −.21                        .08
N170 amplitude                         −.09               .10                            .08                         −.09
Dm latency                             −.10               −.18                           −.16                        .03
Dm amplitude                           .10                −.02                           −.10                        .02
ERE for learned faces latency          −.31*              −.30*                          −.31*                       −.27*
ERE for learned faces amplitude        .07                .26*                           .16                         .28*
ERE for unfamiliar faces latency       −.28*              .01                            −.32*                       .15
ERE for unfamiliar faces amplitude     .08                −.11                           −.21                        .00
LRE for learned faces latency          −.31*              −.10                           −.29                        −.24
LRE for learned faces amplitude        −.08               .18                            .02                         .14
LRE for unfamiliar faces latency       .03                .18                            .13                         −.07
LRE for unfamiliar faces amplitude     −.02               .05                            −.15                        .09

All CFIs > .922, all RMSEA < .087, all SRMR < .066, and all χ2 < 11.29, all df = 5.

*p < .05.

**p < .01.

The N170 latency, the latency and amplitude of the ERE for learned faces, the latency of the ERE for unfamiliar faces, and the latency of the LRE for learned faces all showed small to moderate correlations with one or more cognitive abilities. The P100, the N170 amplitude, the Dm, the amplitude of the ERE for unfamiliar faces, and the LRE for unfamiliar faces did not show any significant correlations.

The Specificity of ERP Components for Face Cognition

We used structural equation models to test whether relationships between ERP components and face cognition abilities are specific to face cognition or are moderated by the cognitive abilities associated with the ERP components. We used factor scores for cognitive abilities. Figure 6 contains an example structural equation model. Table 5 shows correlations between the residuals of the latent factors for ERP components and for face cognition abilities after controlling for individual differences in cognitive abilities.

Table 5. 

Correlations between ERP Components and Face Cognition Abilities When Controlling for Those Established Cognitive Abilities Related to the ERP Components

Latent Factor                      Face Cognition Accuracy   Face Cognition Speed   Controlled for
N170 latency                       −.19                      .20                    object cognition
ERE for learned faces latency      −.13                      .05                    object cognition, I & D memory, general cognitive ability, mental speed
ERE for learned faces amplitude    .39*                      .42*                   I & D memory, mental speed
LRE for learned faces latency      −.39*                     −.23                   object cognition

All CFIs > .923, all RMSEA < .072 and all χ2 < 251, all df < 191.

*p < .05.

Correlations of face cognition abilities with the amplitude of the ERE for learned faces and with the amplitude and latency of the LRE for learned faces remained significant after controlling for the contributions of established cognitive abilities to face cognition abilities and to the ERP components. All other brain–behavior relationships became nonsignificant.

Correlations among Neurocognitive Indicators across Individuals

Product–moment correlations were calculated to specify relationships among neurocognitive indicators. We used ERP parameters measured in separate blocks within the same experiment to obtain independent estimates. Table 6 shows correlations for all ERP components between latency measures (below main diagonal), between amplitude measures (above main diagonal), and between latencies and amplitudes (main diagonal, shaded).

Table 6. 

Pearson Product–Moment Correlations among Latencies of ERP Components (below Main Diagonal), among Amplitudes of ERP Components (above Main Diagonal), and between Latencies and Amplitudes of the Same ERP Components (Main Diagonal, Shaded)


                               1        2        3        4        5        6        7
1. P100                        .04      −.09     .06      .18      .13      .06      .21
2. N170                        .45***   .09      −.48***  .03      −.29**   −.23*    −.25*
3. Dm                          .01      .15      −.45***  .19      .28*     .22*     .10
4. ERE for learned faces       .09      .20      .26*     .08      .25*     .31**    .12
5. ERE for unfamiliar faces    .04      .10      −.20     −.03     .04      .18      .35***
6. LRE for learned faces       .09      .37***   .20      .42***   .09      −.13     .23*
7. LRE for unfamiliar faces    .09      −.02     −.09     −.05     −.09     .14      −.19

*p < .05.

**p < .01.

***p < .001.

Latencies of ERP components showed fewer significant correlations than amplitudes. Correlations between latencies and amplitudes of the same ERP components were generally absent, apart from the moderate correlation for the Dm.

DISCUSSION

The present study aimed to estimate brain–behavior relationships between face cognition abilities and ERP components sensitive to subprocesses of vision and face cognition. Face cognition accuracy was moderately related to the N170 latency, which reflects structural face encoding, and to the latencies and amplitudes of the ERE and LRE, which reflect the access to representations of faces and of person knowledge, respectively. The speed of face cognition was moderately related to the amplitudes of the ERE and LRE. Linear regression analyses, calculated to estimate the combined contributions of several ERP components, revealed that the ERE amplitude for learned faces showed the highest contributions to individual differences in face cognition. The brain–behavior relationships between face cognition abilities and the latencies of the N170 and the ERE for learned faces were substantially moderated by mental speed, immediate and delayed memory, object cognition, and general cognitive ability. Relationships between face cognition abilities and the amplitude of the ERE for learned faces, and between face cognition abilities and the latency and amplitude of the LRE for learned faces, were not moderated by the measured cognitive abilities and can be assumed to be specific to face cognition.

Measurement Model of Face Cognition

Although this study was based on a subset of the participants with whom the measurement model of face cognition was initially established (Wilhelm et al., in press), one important aspect of the model was not replicated: in the subset, there was no distinction between face perception and face memory. These factors were therefore combined into a single latent factor called face cognition accuracy. One possible reason for this high correlation is the relatively small number of participants; the correlation was smaller in the sample as a whole (r = .75). Another possible reason is the participants' differential experience with experimental sessions. Only individuals in the subset had experience with similar experimental tasks prior to completing the behavioral study; 90% of these participants had taken part in the two EEG sessions with extensive experimental face processing tasks. They might have performed differently in the behavioral test study because of their “training” during the preceding EEG sessions.

The new ability of face cognition accuracy, like the speed of face cognition, subsumes perceptual and memory processes. It thus represents individual differences in the accuracy of perceiving, learning, and recognizing faces. In our opinion, this limitation in resolution between perception and memory processes at the behavioral level does not diminish the value of the present study, whose primary aim was to relate ability factors to specific neurocognitive subprocesses reflected in ERP parameters. Performance measures in the obtained ability factors are still more comprehensive and meaningful than performance in a single face task.

Face Cognition Accuracy

Matching, perceiving, learning, and recognizing faces at a high level of accuracy are related to shorter latencies of the N170 and to shorter latencies and larger amplitudes of the ERE and LRE for learned faces. Linear regression analyses showed that the amplitude of the ERE for learned faces explained the highest portion of variance in face cognition accuracy.

Individual differences in the latency of the N170 contributed moderately to individual differences in face cognition accuracy. Thus, people with faster structural encoding of faces showed better performance in face cognition accuracy. They were also quicker to activate brain regions necessary to encode faces configurally and holistically (Deffke et al., 2007). These results indicate that fast configural and holistic processing of faces is a foundation for accurately learning and recognizing faces.

Face cognition accuracy was also related to the latency and amplitude of the ERE for learned faces, indicating that individuals with superior accuracy in face cognition exhibit a faster and larger ERE. Assumptions made by facilitation models of repetition suppression, the neural phenomenon thought to generate priming processes (cf. Grill-Spector, Henson, & Martin, 2006), can help explain these findings. Facilitation models propose that priming (as opposed to single presentations of stimuli) leads to faster neural processing and shorter durations of neural firing. Processing of primed stimuli thus has shorter latencies and smaller amplitudes than processing of unprimed stimuli. Better performance is expected to be associated with a large facilitation effect. In difference waves of ERPs between primed and unprimed stimuli, a larger facilitation effect would be reflected in shorter latencies and larger amplitudes. This pattern can be seen in the present results (cf. Figure 3 for EREs and LREs of learned faces). The relationships of face cognition accuracy to the latency and to the amplitude of the ERE indicate that individuals with a larger facilitation effect possessed better face cognition. More accurate performance is associated with faster and more efficient activation of representations of faces, presumably localized in fusiform face-responsive regions (Eger et al., 2005; Schweinberger et al., 2002). Because latencies and amplitudes of the ERE were not correlated with one another, faster activation and shorter durations of neural firing contribute additively to better performance.

The same pattern of results was found for the LRE for learned faces as for the ERE. People with a faster and larger LRE are more accurate when perceiving, learning, and recognizing faces. Faster and more efficient activation of person-related knowledge contributes to better performance in face cognition because familiarity with a face is not only caused by recognizing a face but also by recalling information about the person. As in a previous study (Herzmann & Sommer, 2007), faces were learned without any biographical or name information. The present LRE was comparable to the one found for famous faces (Pfütze et al., 2002; Schweinberger et al., 2002). Biographical facts such as sex, age, or idiosyncratic resemblance to familiar people can be assumed to be derived during learning, stored as semantic knowledge, and activated when recognizing these faces, thus eliciting an LRE. Latencies and amplitudes of the LRE did not correlate. Thus, better performance in face cognition is the result of faster activation of person-related knowledge and of shorter durations of neural firing when accessing this knowledge in temporal brain regions (Gobbini & Haxby, 2007).

Speed of Face Cognition

Quicker performance when perceiving, learning, and recognizing faces is related to a larger amplitude of the ERE for learned faces and to a larger amplitude of the LRE for learned faces. Linear regression analyses showed that the ERE amplitude for learned faces explained the greatest portion of variance in face cognition speed.

Individuals who show larger EREs and LREs are not only more accurate in face cognition; they are also faster. Neural efficiency in accessing representations of faces in the fusiform gyrus (Eger et al., 2005; Schweinberger et al., 2002) and person-related knowledge in the temporal cortex (Gobbini & Haxby, 2007) facilitates both the accuracy and the speed of face cognition. The contributions of the amplitudes of the ERE and LRE are, however, only moderate. Other neurocognitive subprocesses not measured in this study, such as response selection, motor programming, decision strategies, declarative memory, emotion processing, or attention, might also contribute to the speed or accuracy of face cognition.

Negative Findings

Some of the ERP components contributed to neither the accuracy nor the speed of face cognition. The ERE and LRE for unfamiliar faces showed no contributions to individual differences in face cognition. It could be argued that the failure to distinguish between face perception and face memory on the behavioral level concealed a correlation between the ERE for unfamiliar faces and face perception. Previous studies have suggested such a relationship, linking the ERE for unfamiliar faces to the structural encoding, short-term storage, and activation of pictorial codes (Herzmann & Sommer, 2007; Schweinberger et al., 2002).

We found no evidence for contributions of the P100 to individual differences in face cognition. This result raises doubts that the P100 is face-sensitive (Doi et al., 2007) because we did not find a correlation between the P100 and face cognition accuracy, even though accuracy was assessed with indicators based on such face-specific paradigms as the part–whole and inversion paradigms.

Individual differences in the N170 amplitude did not contribute to face cognition. One might speculate that they are governed by performance-irrelevant variations in brain geometry, such as the orientation of certain gyri and sulci. According to our results, the N170 amplitude cannot be taken as an indicator of good or poor holistic processing in the normal variation of face cognition. Because our sample did not include people with prosopagnosia, for whom reduced N170 amplitudes have been linked to their severely impaired face cognition skills (Kress & Daum, 2003), no statements about brain–behavior relationships in the extremes of face cognition abilities can be made.

The Dm revealed no significant contribution to any face cognition ability. Individual differences in the speed and activation of the network underlying memory encoding are therefore not associated with individual variations in face cognition. This finding reflects the fact that encoding-related brain activation, as represented by the Dm, covers only a small part of the range of perceptual and memory processes subsumed under face cognition abilities. For instance, no behavioral task can measure individual differences in memory encoding of faces independently of individual differences in face recognition.

Finally, in contrast to our expectations, none of the individual differences in the latencies of neurocognitive indicators for early visual or memory-related processes of face cognition were associated with individual differences in face cognition speed. Individual differences in the speed of other neurocognitive subprocesses—such as response selection or motor programming, reflected in the lateralized readiness potential (Coles, 1989)—might make greater contributions to face cognition speed. This assumption is supported by the absence of correlations between the ERP components and mental speed and by the high correlation (r = .67) between face cognition speed and mental speed (Wilhelm et al., in press). The latter correlation shows that general speed processes substantially moderate individual differences in face cognition speed. The remaining face-specific portion of face cognition speed, however, has no determinant in the ERP components observed in this study.

The Specificity of ERP Components for Face Cognition

Established cognitive abilities showed only a few, small-to-moderate relationships with the ERP components. The correlations of ERP components with object cognition were comparable in magnitude to their correlations with face cognition abilities. This is in line with previous studies showing that these ERPs are also elicited by other visual stimuli (Herzmann & Sommer, 2007; Engst, Martín-Loeches, & Sommer, 2006; Bentin et al., 1996). Individual differences in domain-general visual processing can be assumed to be the common source of these comparable correlations. We therefore tested to what extent established cognitive abilities moderated the observed brain–behavior relationships.

Contributions from the latencies of the N170 and the ERE for learned faces to individual differences in face cognition on the behavioral level were governed primarily by cognitive abilities not specific to faces. We have already shown (Wilhelm et al., in press) that individual differences in established cognitive abilities moderate individual differences in face cognition abilities. The present study provides further evidence that these established cognitive abilities also moderate the relationships between face cognition abilities and the latencies of the N170 and ERE. These latencies therefore do not contribute specifically to individual differences in face cognition.

In contrast, the relationships of face cognition abilities to the amplitude of the ERE for learned faces and to the latency and amplitude of the LRE for learned faces were independent of individual differences in established cognitive abilities. The contribution of these ERP components is specific to face cognition. It is possible, however, that other cognitive abilities for which we did not control, such as attention, or personality traits such as extraversion could nonetheless contribute to individual differences in face cognition.

Further evidence that the amplitude of the ERE and the latency and amplitude of the LRE are specific to face cognition comes from relationships of face cognition abilities to classic ERP components like the P300. The P300 is related to a variety of cognitive functions (Polich, 2007; Beauchamp & Stelmack, 2006; Leuthold & Sommer, 1998; Fabiani, Karis, & Donchin, 1986) and was measured in the priming experiment for unprimed learned faces. Its latency contributed moderately to face cognition accuracy (−.36); its amplitude made a small contribution to face cognition speed (.25). Linear regression analysis showed that the contribution of the P300 is negligible in comparison to the ERE and LRE. Individual differences in established cognitive abilities also explained the relationships between the P300 and face cognition abilities. These results provide incremental validity for the specificity of the ERE and LRE to face cognition.

Correlations among Neurocognitive Indicators across Individuals

From the perspective of individual differences, this study also provided new evidence about how ERP components correlate with one another. Only moderate correlations among individual differences in neurocognitive indicators were found. ERP components can thus be taken as indicators of specific, independent neurocognitive processes. A relationship between the latencies of the P100 and the N170 demonstrates the temporal dependency of these perceptual processes. Correlations between the latencies of the ERE and LRE for learned faces and the Dm show the temporal dependency of these memory processes. A positive correlation between the latencies of the N170 and the LRE provides evidence for a serial recognition process for familiar faces (Bruce & Young, 1986).

ERP amplitudes revealed a less clear picture. The following ERP components representing memory processes showed moderate correlations with each other: Dm, ERE for learned faces, ERE for unfamiliar faces, and LRE for learned faces. It is not surprising that the ERE for unfamiliar faces belongs to this group because it represents the encoding, short-term storage, and activation of pictorial codes. A surprisingly large correlation was found between the N170 and the Dm. For global brain activation to account for this correlation, all ERP components would have to be related to one another, which was not the case. A possible explanation is that the same neural structures (e.g., those in face-responsive regions of the fusiform gyrus) were at least partially activated both for analyzing the structure of faces and for encoding faces into long-term memory (Deffke et al., 2007; Lehmann et al., 2004).

Correlations were small between priming effects for learned and unfamiliar faces and moderate between the ERE and the LRE for learned faces. These findings confirm previous results (Herzmann & Sommer, 2007; Herzmann et al., 2004; Pfütze et al., 2002; Schweinberger et al., 2002) suggesting that different neural generators are responsible for the priming effects for familiar and unfamiliar faces and for the ERE and LRE.

Methodological Reflections

In contrast to previous studies that elucidated brain–behavior relationships of single neurocognitive processes (Rotshtein et al., 2007; Schretlen et al., 2001; Alexander et al., 1999), the present study linked several neurocognitive processes, generated in specific brain systems, to variations in face cognition abilities on the behavioral level. We showed that ERP components possess the same psychometric qualities as behavioral ability measures: high internal consistency and unidimensionality. They can thus be used as neurocognitive indicators of individual differences in the timing and in the activation of neural subprocesses.

The present results go beyond previously established brain–behavior relationships by estimating relationships between neurocognitive and behavioral indicators as latent constructs measured in independent experimental tasks. Latent variable techniques, unlike bivariate correlations, account explicitly for measurement errors. All other things being equal, correlations between latent variables are higher than correlations between manifest variables, as can be seen in the present data. Modeled as latent variables, the N170 latency contributed to face cognition accuracy (−.30); the amplitude of the ERE for learned faces contributed to face cognition accuracy (.41) and to face cognition speed (.46). The size of these relationships was diminished when calculating correlations between manifest variables. The correlation between the N170 latency and face cognition accuracy dropped to −.24, and the correlations involving the amplitude of the ERE for learned faces diminished to .34 for face cognition accuracy and to .37 for the speed of face cognition.
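The gap between the latent and manifest coefficients follows the textbook logic of correction for attenuation. The sketch below states Spearman's classic formula in code; the reliabilities are hypothetical inputs and the calculation is an illustration, not part of the study's analyses.

```python
import math

def disattenuate(r_manifest, rel_x, rel_y):
    """Spearman's correction for attenuation: approximate the latent correlation
    from a manifest correlation and the reliabilities of the two measures."""
    return r_manifest / math.sqrt(rel_x * rel_y)

# Read in reverse, the reported drop from .46 (latent) to .37 (manifest) for the
# ERE amplitude and face cognition speed corresponds to a reliability product of
# roughly (.37 / .46) ** 2, i.e., about .65.
```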

Evidence that correlations between variables measured within the same experiment can be biased can also be found in the present study. Correlations between ERP components and memory performance in the same experiment exceeded the correlations between independently measured variables. For example, we found no evidence for a relationship of the Dm with face cognition, but the Dm latency and amplitude showed high correlations with memory performance within the Dm experiment (rs = −.51 and .50, respectively). A similar result was found for the latency and amplitude of the LRE for learned faces, which showed higher correlations with memory performance in the priming experiment (rs = −.62 and .45, respectively) than with the abilities of face cognition.

Conclusions

Individual differences in neurocognitive indicators of subprocesses in face cognition were moderately related to individual differences in face cognition abilities obtained in independent tasks. Although one might expect and sometimes find stronger relationships, these results are in the range of previously found correlations between neural processing and mental abilities when measured independently (Jolij et al., 2007; Schretlen et al., 2001). The finding that only a small portion of individual differences in face cognition could be explained by ERP-related cognitive subprocesses may be due to several factors. First, individual differences on the behavioral level can be taken as the consequence of multiple, interacting neural processes. Because ERP components represent individual differences in a single subprocess of face cognition, their contribution to individual differences on the behavioral level can only be partial. Expecting a very close relationship between ERP components and mental abilities would presuppose that a single neural process in a circumscribed brain area is responsible for these abilities. Such a view clearly neglects evidence of interacting neural networks for complex cognitive abilities (Gobbini & Haxby, 2007). Second, not all processes underlying the individual variation of face cognition on the behavioral level can be measured by ERP components. For an ERP to be recordable at the scalp, simultaneous activity of a vast number of suitably aligned neurons is required. Although this is often the case for cortical sources, it may not hold true for subcortical sources (Wood & Allison, 1981). Third, a considerable portion of variance in face cognition abilities was shown to be explained by general cognitive abilities (Wilhelm et al., in press). Because the amplitude of the ERE for learned faces and the latency and amplitude of the LRE for learned faces are specific to face cognition, the contribution of these components to face cognition abilities can only account for their face-specific portion.

The present study adds to face cognition research in several ways: It is based on a comprehensive model of face cognition, uses multiple indicators of performance, establishes measurement models of ERP components, draws on a large sample of participants, and measures behavioral and neurocognitive indicators in independent tasks. We are the first to use ERP components in structural equation models and to show that a substantial part of individual variation in face cognition on the behavioral level is explained by individual differences in the speed and efficiency of activating memory representations of faces and of person-related knowledge. Our approach can easily be transferred to other aspects of face processing, such as the ability to process facial expressions or the ability to retrieve person-related knowledge. It could also be applied to individual differences in certain personality traits or to any number of other cognitive abilities.

APPENDIX

The appendix briefly describes the behavioral test study and all tasks from it that were used in the present study. More detailed information can be found in Wilhelm et al. (in press) and Herzmann et al. (2008). The indicators for face perception and face memory, which were combined into the single ability of face cognition accuracy in the course of the study, are described separately. Abbreviations of face cognition abilities refer to Figure 2.

Face Perception

FP 1 and FP 2: Sequential Matching of Part–Whole Faces—Conditions Part (FP 1) and Whole (FP 2)

This task used the part–whole recognition effect (Tanaka & Farah, 1993). Participants saw a target face. A facial feature from the target (e.g., its eyes) was then presented along with the same feature from a different face. Facial features were presented either in isolation (part condition) or in the context of the whole target face (whole condition). In the whole condition, the task was to discern which face was the target. In the part condition, the feature belonging to the target had to be identified. Part and whole conditions were taken as separate indicators.

FP 3 and FP 4: Simultaneous Matching of Spatially Manipulated Faces—Conditions Upright (FP 3) and Inverted (FP 4)

This task used the inversion effect (Yin, 1969). Participants saw two simultaneously presented faces and had to indicate whether the faces were the same or different. The faces were always derived from the same portrait; in the case of “different” faces, however, one spatial relationship between features (eyes–nose or mouth–nose relation) was changed from the original. Faces were presented either upright or inverted (i.e., upside down). Upright and inverted conditions were taken as separate indicators.

FP 5: Facial Resemblance

A target face in three-quarter view was presented centered above two morphed faces depicted in frontal view. The morphed faces were derived from the original target face and a second, unfamiliar face. Participants had to decide which of the morphs resembled the target face more.

Face Memory

FM 1: Acquisition Curve

Participants were instructed to learn a total of 30 faces all shown together on the screen. The test phase began directly afterward and comprised five runs. Each run tested for recognition of all learned target faces, presented one at a time alongside an unfamiliar face. Immediately after the participants responded, the target face was highlighted by a green frame, regardless of the accuracy of the response. This ensured long-term learning for task FM 2.

FM 2: Decay Rate of Learned Faces

The faces learned during FM 1 were tested for recognition after approximately 2.5 hours. Participants saw each learned face alongside an unfamiliar one and had to indicate which face was the target.

FM 3: Eyewitness Testimony

This task measured implicit learning. Participants were shown two faces and asked to indicate which had been a distracter a few minutes earlier in task FCS 2.

Face Cognition Speed

FCS 1: Recognition Speed of Learned Faces

This task had four runs, each consisting of a learning phase, a delay (during which participants completed items of GCA 1), and the recognition phase. In each learning phase, four simultaneously shown faces had to be memorized. In each recognition phase, the four learned faces and four new ones were shown one at a time. Participants had to indicate whether the presented face was learned or new.

FCS 2: Delayed Nonmatching to Sample

Participants were shown a target face. After a 4-sec delay, the target face was presented together with an unfamiliar face. Participants had to indicate which face was new.

FCS 3: Simultaneous Matching of Faces from Different Viewpoints

Participants saw two faces, one in frontal view and the other in three-quarter view, and then had to indicate whether or not the faces depicted the same person.

FCS 4 and FCS 5: Simultaneous Matching of Upper Face-halves—Conditions Aligned (FCS 4) and Nonaligned (FCS 5)

This task used the composite-face effect (Young, Hellawell, & Hay, 1987). Faces were divided horizontally into upper and lower halves. In the aligned condition, face-halves were coupled with a complementary half from another face to form a new “face.” In the nonaligned condition, faces were joined so that the left or right face-edge of the top face-half was positioned above the nose of the bottom face-half. The task was to decide whether the top halves of two simultaneously presented faces were the same or different; lower halves were always different. Aligned and nonaligned conditions were taken as separate indicators.

FCS 6: Simultaneous Matching of Morphs

Participants were shown two nonidentical, morphed faces derived from the same parent faces and had to decide if the faces were similar or dissimilar. Faces in the similar trials were closer to each other on the morphing continuum than were dissimilar faces.

General Cognitive Ability

GCA 1: Raven Advanced Progressive Matrices

Sixteen odd-numbered items from the original full test (Raven, Court, & Raven, 1979) were included in this task.

GCA 2: Memory Updating

Participants had to remember a series of digits and mentally “update” (i.e., modify) the digits according to instructions (e.g., by adding or subtracting numbers).

GCA 3: Rotation Span

Participants performed a letter-rotation task while learning and recalling a sequence of short and long arrows that radiated out from the center of a circle in eight directions.

Immediate and Delayed Memory

Three tasks (Indicators IDM 1 to IDM 6), each with immediate (IDM 1, 3, and 5) and delayed recall (IDM 2, 4, and 6), were adapted from the Wechsler Memory Scale (Härting et al., 2000) and computerized.

Object Cognition

Five tasks (Indicators OC 1 to OC 5) measured object cognition. These tasks were procedurally identical to the corresponding face cognition tasks (Indicators OC 1 and 2 corresponded to Indicators FP 1 and 2; Indicators OC 3 and 4 to Indicators FP 3 and 4; and Indicator OC 5 to Indicator FM 3). Gray-scale pictures of houses were substituted for face portraits.

Mental Speed

MS 1: Finding As

Participants had to decide whether or not meaningful German words presented on the screen contained an “A.”

MS 2: Symbol Substitution

Participants saw one of four symbols (?, +, %, $) on the screen and had to press the response key assigned to that symbol.

MS 3: Number Comparison

Participants were shown two number strings of 3 to 13 digits and had to decide whether they were identical or not.

Acknowledgments

We thank Doreen Brendel, Dominika Dolzycka, Inga Matzdorf, and Kathrin Müsch for help with data acquisition; Timothy David Freeze for improvements on language; and four anonymous reviewers for helpful comments on earlier drafts of this manuscript. This research was supported by a grant from the State of Berlin (NaFöG) to G. H. and by a grant from the Deutsche Forschungsgemeinschaft (Wi 2667/2-2) to O. W. and W. S.

Reprint requests should be sent to Grit Herzmann, Department of Psychology and Neuroscience, University of Colorado, 345 UCD, Boulder, CO 80309, or via e-mail: grit.herzmann@googlemail.com.

Notes

1. A chi-square test is a function of sample size and of the difference between the observed and theoretical covariance matrices. The chi-square statistic is compared with the corresponding value of the chi-square distribution; for an acceptable model, the test should not be significant at the .05 level. One problem of the chi-square statistic is its sensitivity to sample size: statistical significance can be observed with large sample sizes even when the deviation between the theoretical and observed models is negligible with respect to content (Gigerenzer, Krauss, & Vitouch, 2004). All other things being equal, the lower the chi-square statistic, the better the model fit. RMSEA values should not exceed .05; they remain acceptable between .05 and .08 and become unacceptable above .08 (Hu & Bentler, 1999). CFI values of .95 or higher indicate good model fit, whereas CFI values below .90 usually lead to the rejection of the model.
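The indices named in this note can be approximated directly from chi-square output. The following is a minimal sketch using the conventional formulas; the baseline-model inputs and the function itself are illustrative assumptions, not code from the study's analyses.

```python
import math

def fit_indices(chi2, df, n, chi2_null, df_null):
    """Conventional approximations to RMSEA and CFI.

    chi2, df           : chi-square statistic and degrees of freedom of the target model
    n                  : sample size (85 participants in the present study)
    chi2_null, df_null : chi-square and df of the independence (baseline) model
    """
    d_model = max(chi2 - df, 0.0)                  # noncentrality of the target model
    d_null = max(chi2_null - df_null, 0.0)         # noncentrality of the baseline model
    rmsea = math.sqrt(d_model / (df * (n - 1)))    # some programs use n instead of n - 1
    cfi = 1.0 - d_model / max(d_null, d_model, 1e-12)
    return rmsea, cfi

# Interpreted against the cut-offs above: RMSEA <= .05 good, .05-.08 acceptable,
# > .08 unacceptable; CFI >= .95 good, < .90 usually leads to rejection of the model.
```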

2. The P100 or N170 could be obtained from study or recognition blocks of the Dm experiment or from the priming experiment. The criteria used to select the P100 and N170 amplitude and latency measures for correlational analyses were high psychometric quality (as indicated by high internal consistency) and good fit of their measurement models. Although the data were down-sampled to 200 Hz, the same results were obtained with a time resolution of 1000 Hz.

3. The Dm, ERE, and LRE were measured with an average reference, which sets the mean activity across all electrodes to zero. Because all electrodes were used in the ANOVAs, condition effects in these analyses are only meaningful in interaction with electrode sites. Therefore, we only report such interactions and, for brevity's sake, do not mention the electrode factor.
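As an illustration of what the average reference does to the data, the sketch below re-references a hypothetical epoch stored as an electrodes-by-samples array; the array layout is assumed for illustration and is not the data format used in the study.

```python
import numpy as np

def average_reference(eeg):
    """Re-reference one epoch (n_electrodes x n_samples) to the common average.

    After re-referencing, the mean across electrodes is zero at every sample,
    which is why condition effects are interpretable only in interaction
    with electrode site.
    """
    return eeg - eeg.mean(axis=0, keepdims=True)
```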

4. For EREs and LREs, differences between topographies were analyzed by scaling the difference waveforms of each participant to the same overall amplitude within each condition. The divisor was the average distance from the mean, derived from the individual mean ERPs (Haig, Gordon, & Hook, 1997).
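One possible reading of this scaling step is sketched below. The divisor implements "average distance from the mean" as the mean absolute deviation across electrodes; this choice is our assumption, and McCarthy and Wood's alternative would use the root mean square (vector length) instead.

```python
import numpy as np

def scale_topography(diff_topo):
    """Scale one participant's difference-wave topography to a common overall amplitude.

    diff_topo : array of shape (n_electrodes,), e.g., mean ERE or LRE difference
                amplitudes per electrode within one condition
    """
    # Mean absolute distance of each electrode's value from the across-electrode mean;
    # dividing by it equates overall amplitude across participants before comparing topographies.
    divisor = np.mean(np.abs(diff_topo - diff_topo.mean()))
    return diff_topo / divisor
```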

5. When these ERP components were determined in the priming experiment, correlational results for the P100 and the N170 were essentially the same, and the N170 latency revealed a moderate negative contribution of −.29 to face cognition accuracy. No other parameters showed any contribution.

REFERENCES

Alexander, G. E., Mentis, M. J., Van Horn, J. D., Grady, C. L., Berman, K. F., Furey, M. L., et al. (1999). Individual differences in PET activation of object perception and attention systems predict face matching accuracy. NeuroReport, 10, 1965–1971.
Arbuckle, J. L. (2007). Amos (Version 7.0) [Computer software]. Chicago, IL: SPSS.
Beauchamp, C. M., & Stelmack, R. M. (2006). The chronometry of mental ability: An event-related potential analysis of an auditory oddball discrimination task. Intelligence, 34, 571–586.
Bentin, S., Allison, T., Puce, A., Perez, E., & McCarthy, G. (1996). Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience, 8, 551–565.
Berg, P., & Scherg, M. (1994). A multiple source approach to the correction of eye artifacts. Electroencephalography and Clinical Neurophysiology, 90, 229–241.
Bollen, K. A. (1989). Structural equations with latent variables. New York: Wiley.
Brandeis, D., Naylor, H., Halliday, R., Callaway, E., & Yano, L. (1992). Scopolamine effects on visual information processing, attention, and event-related potential map latencies. Psychophysiology, 29, 315–336.
Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77, 305–327.
Clark, V. P., Keil, K., Maisog, J. Ma., Courtney, S., Ungerleider, L. G., & Haxby, J. V. (1996). Functional magnetic resonance imaging of human visual cortex during face matching: A comparison with positron emission tomography. Neuroimage, 4, 1–15.
Coles, M. G. H. (1989). Modern mind-brain reading: Psychophysiology, physiology, and cognition. Psychophysiology, 26, 251–269.
Deffke, I., Sander, T., Heidenreich, J., Sommer, W., Curio, G., Trahms, L., et al. (2007). MEG/EEG sources of the 170-ms response to faces are co-localized in the fusiform gyrus. Neuroimage, 35, 1495–1501.
Doi, H., Sawada, R., & Masataka, N. (2007). The effects of eye and face inversion on the early stages of gaze direction perception—An ERP study. Brain Research, 1183, 83–90.
Duchaine, B., & Nakayama, K. (2005). Dissociations of face and object recognition in developmental prosopagnosia. Journal of Cognitive Neuroscience, 17, 249–261.
Eger, E., Schweinberger, S. R., Dolan, R. J., & Henson, R. N. (2005). Familiarity enhances invariance of face representations in human ventral visual cortex: fMRI evidence. Neuroimage, 26, 1128–1139.
Endl, W., Walla, P., Lindinger, G., Lalouschek, W., Barth, F. G., Deecke, L., et al. (1998). Early cortical activation indicates preparation for retrieval of memory for faces: An event-related potential study. Neuroscience Letters, 240, 58–60.
Engst, F. M., Martín-Loeches, M., & Sommer, W. (2006). Memory systems for structural and semantic knowledge of faces and buildings. Brain Research, 1124, 70–80.
Fabiani, M., Karis, D., & Donchin, E. (1986). P300 and recall in an incidental memory paradigm. Psychophysiology, 23, 298–308.
Gigerenzer, G., Krauss, S., & Vitouch, O. (2004). The null ritual: What you always wanted to know about significance testing but were afraid to ask. In D. Kaplan (Ed.), The SAGE handbook of quantitative methodology for the social sciences (pp. 391–408). Thousand Oaks, CA: Sage.
Gobbini, M. I., & Haxby, J. V. (2007). Neural systems for recognition of familiar faces. Neuropsychologia, 45, 32–41.
Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: Wiley.
Grill-Spector, K., Henson, R., & Martin, A. (2006). Repetition and the brain: Neural models of stimulus-specific effects. Trends in Cognitive Sciences, 10, 14–23.
Haig, A. R., Gordon, E., & Hook, S. (1997). To scale or not to scale: McCarthy and Wood revisited. Electroencephalography and Clinical Neurophysiology, 103, 323–325.
Härting, C., Markowitsch, H. J., Neufeld, H., Calabrese, P., Deisinger, K., & Kessler, J. (2000). Die Wechsler-Memory-Scale—revised (German adaptation). Bern: Huber.
Herzmann, G., Danthiir, V., Schacht, A., Sommer, W., & Wilhelm, O. (2008). Towards a comprehensive test battery for face processing: Assessment of the tasks. Behavior Research Methods, 40, 840–857.
Herzmann, G., Schweinberger, S. R., Sommer, W., & Jentzsch, I. (2004). What's special about personally familiar faces? A multimodal approach. Psychophysiology, 41, 688–701.
Herzmann, G., & Sommer, W. (2007). Memory-related ERP components for experimentally learned faces and names: Characteristics and parallel-test reliabilities. Psychophysiology, 44, 262–276.
Hu, L. T., & Bentler, P. M. (1999). Cut-off criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6, 1–55.
Itier, R. J., & Taylor, M. J. (2004). Effects of repetition learning on upright, inverted and contrast-reversed face processing using ERPs. Neuroimage, 21, 1518–1532.
Jolij, J., Huisman, D., Scholte, S., Hamel, R., Kemner, C., & Lamme, V. A. F. (2007). Processing speed in recurrent visual networks correlates with general intelligence. NeuroReport, 18, 39–43.
Kress, T., & Daum, I. (2003). Developmental prosopagnosia: A review. Behavioural Neurology, 14, 109–121.
Lehmann, C., Mueller, T., Federspiel, A., Hubl, D., Schroth, G., Huber, O., et al. (2004). Dissociation between overt and unconscious face processing in fusiform face area. Neuroimage, 21, 75–83.
Leuthold, H., & Sommer, W. (1998). Postperceptual effects and P300 latency. Psychophysiology, 35, 34–46.
Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97–113.
Paller, K. A., & Wagner, A. D. (2002). Observing the transformation of experience into memory. Trends in Cognitive Sciences, 6, 93–102.
Pfütze, E. M., Sommer, W., & Schweinberger, S. R. (2002). Age-related slowing in face and name recognition: Evidence from event-related brain potentials. Psychology and Aging, 17, 140–160.
Pivik, R. T., Broughton, R. J., Coppola, R., Davidson, R. J., Fox, N., & Nuwer, M. R. (1993). Guidelines for the recording and quantitative analysis of electroencephalographic activity in research contexts. Psychophysiology, 30, 547–558.
Polich, J. (2007). Updating P300: An integrative theory of P3a and P3b. Clinical Neurophysiology, 118, 2128–2148.
Raven, J. C., Court, J. H., & Raven, J. (1979). Manual for Raven's Progressive Matrices and Vocabulary Scales. London: H. K. Lewis & Co.
Rotshtein, P., Geng, J. J., Driver, J., & Dolan, R. J. (2007). Role of features and second-order spatial relations in face discrimination, face recognition, and individual face skills: Behavioral and functional magnetic resonance imaging data. Journal of Cognitive Neuroscience, 19, 1435–1452.
Russell, R., Duchaine, B., & Nakayama, K. (2007). Extraordinary face recognition ability. Paper presented at the 48th Annual Meeting of the Psychonomic Society, Long Beach, CA, November 2007.
Schretlen, D. J., Pearlson, G. D., Anthony, J. C., & Yates, K. O. (2001). Determinants of Benton Facial Recognition Test performance in normal adults. Neuropsychology, 15, 405–410.
Schweinberger, S. R., Pickering, E. C., Jentzsch, I., Burton, A. M., & Kaufmann, J. M. (2002). Event-related brain potential evidence for a response of inferior temporal cortex to familiar face repetitions. Cognitive Brain Research, 14, 398–409.
Sommer, W., Schweinberger, S. R., & Matt, J. (1991). Human brain potential correlates of face encoding into memory. Electroencephalography and Clinical Neurophysiology, 79, 457–463.
Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology, 46A, 225–245.
Wilhelm, O., Herzmann, G., Kunina, O., Danthiir, V., Schacht, A., & Sommer, W. (in press). Individual differences in face cognition. Journal of Personality & Social Psychology.
Wood, C. C., & Allison, T. (1981). Interpretation of evoked potentials: A neurophysiological perspective. Canadian Journal of Psychology, 35, 113–135.
Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81, 141–145.
Young, A. W., Hellawell, D., & Hay, D. C. (1987). Configurational information in face perception. Perception, 16, 747–759.