Abstract

“Tip-of-the-tongue” (TOT) is the phenomenon associated with the inaccessibility of a known word from memory. It is universally experienced, increases in frequency with age, and is most common for proper nouns. It is a good model for the symptom of anomia experienced much more frequently by some aphasic patients following brain injury. Here, we induced the TOT state in older participants while they underwent brain scanning with magnetoencephalography to investigate the changes in oscillatory brain activity associated with failed retrieval of known words. Using confrontation naming of pictures of celebrities, we successfully induced the TOT state in 29% of trials and contrasted it with two other states: “Know,” where the participants both correctly recognized the celebrity's face and retrieved the name, and “Don't Know,” when the participants did not recognize the celebrity. We wished to test Levelt's influential model of speech output by carrying out two analyses, one epoching the data to the point in time when the picture was displayed and the other looking back in time from when the participants first articulated their responses. Our main findings supported the components of Levelt's model, but not their serial activation over time, as both semantic and motor areas were identified in both analyses. We also found enduring decreases in the alpha frequency band in the left ventral temporal region during the TOT state, suggesting ongoing semantic search. Finally, we identified reduced beta power in classical peri-sylvian language areas for the TOT condition, suggesting that brain regions that encode linguistic memories are also involved in their attempted retrieval.

INTRODUCTION

The tip-of-the-tongue (TOT) phenomenon was first described as a psychological phenomenon in the 19th century by William James in his textbook, The Principles of Psychology. There are more modern descriptions, but James (1890) captures the essence: “Suppose we try to recall a forgotten name. The state of our consciousness is peculiar. There is a gap therein, but no mere gap. It is a gap that is intensely active. A sort of wraith of the name is in it, beckoning us in a given direction, making us at moments tingle with the sense of our closeness, and then letting us sink back without the longed-for term” (James, 1890, p. 251). It is a fascinating phenomenon and one that is almost universally experienced, with increasing frequency as we get older (Brown, 1991); it also provides a window into what it must be like to have pathologically persistent and frequent word-finding difficulties, such as those experienced by patients with certain types of aphasia (Bruce & Howard, 1988).

There are several neuropsychological models of speech output. The most influential and perhaps simplest is Levelt's serial model (Levelt, 1999) where, to speak, one must pass through four key processes in order. First, one must activate the concept one wishes to express and then find the phonological form to express it. There is disagreement over whether these two processes are distinct (Caramazza, 1997), but a multilingual participant could express the same idea in two different languages, for instance, selecting two different phonological tokens to express the same concept. The last two processes take the most time: phonological assembly followed by the initiation of the articulatory process itself. There are other processes involved that “close the loop,” including self-monitoring and repair of speech errors, but these occur once speech output commences and are not examined by this study. For Levelt, the TOT state occurs as a hang-up between the first and second stages. There appear to be bridging elements present (which is evidence against a purely serial processing model), such as knowledge about the length or grammatical form of the missing item (e.g., in Romance languages, where nouns are gendered, people in a TOT state often know the gender of the noun even if they cannot access its exact form; Vigliocco, Antonini, & Garrett, 1997). We wished to investigate the TOT phenomenon using Levelt's model as a guide. We wanted to image neural activity both just after the stimulus is shown and also before articulation to try to capture the “where” of the TOT state. If Levelt's model is correct, we expected to find more activity in semantic association and word form retrieval areas in the first analysis and more activity in motor programming/articulation areas in the second. Having crudely split Levelt's model into halves, we need to add that, for the first half (semantic association and word retrieval), the evidence points toward several related processes occurring in parallel. For instance, an ERP study of picture naming from Levelt's group demonstrated that phonological and semantic retrieval occur in parallel (Rahman, van Turennout, & Levelt, 2003). Another important, related cognitive process co-occurring at this time is cognitive control, an executive network that includes the ACC and bilateral insulae and dorsolateral prefrontal cortices (Ham, Leff, de Boissezon, Joffe, & Sharp, 2013). Given that these processes are likely to interact (participants may use their cognitive control system to switch between a semantic and a phonological search in a TOT state) as well as overlap in time, we will not be able to resolve them from our experimental design alone; rather, we will rely on anatomical evidence from previous studies to help sort them.

Many studies, like ours, use people's faces to provoke a TOT state. Proper nouns in general, and names in particular, are especially susceptible to TOT, perhaps because they are rather arbitrarily linked with the person they are associated with (Burke, Mackay, Worthley, & Wade, 1991). Models of speech production and face naming have different origins in cognitive psychology but have been brought together in an integrative model that includes many of Levelt's assumptions (Valentine, Brennen, & Bredart, 1996). Most of the research on the TOT state so far has been done in the field of cognitive psychology (Shafto, Stamatakis, Tam, & Tyler, 2010; Gollan & Brown, 2006; Schwartz & Frazier, 2005; Brown & Nix, 1996; Brown, 1991), with only a few studies exploring the neural systems underlying it. Most have been EEG studies that provide useful information about the temporal characterization of the brain activity but are less able to provide spatial information about the possible sources of neural activation involved in successful or failed word retrieval (Lindin, Diaz, Capilla, Ortiz, & Maestu, 2010; Galdo-Alvarez, Lindin, & Diaz, 2009; Diaz, Lindin, Galdo-Alvarez, Facal, & Juncos-Rabadan, 2007). Several studies have used fMRI to examine TOT states, but the reduced temporal resolution has made it hard to differentiate linguistic retrieval processes from the (presumably later) processes mediating speech output and conflict control. The neural network most commonly identified in such studies is the ACC–pFC cognitive control circuit engaged in monitoring conflict. Activity in this network is unlikely to be the cause of the TOT phenomenon (Maril, Simons, Weaver, & Schacter, 2005; Kikyo, Ohki, & Sekihara, 2001; Maril, Wagner, & Schacter, 2001). Magnetoencephalography (MEG) offers the temporal resolution to dissociate semantic and phonemic search processes from those dedicated to motor output and cognitive control.

Lindin and colleagues published the first MEG study attempting to identify the network of brain areas that show significantly different activation during successful naming (“Know”) and failed naming (“TOT”) trials (Lindin et al., 2010). Significantly greater activation was found in prefrontal and temporal regions, predominantly in the left hemisphere, for “Know” than for “TOT” trials in the intervals between 210 and 520 msec poststimulus onset. The reverse contrast (TOT > Know) identified regions later in the epoch (580–820 msec) in left temporal and right frontal areas. This study, like previous EEG studies, employed an event-related analysis time-locked to stimulus onset. However, the sensation of TOT may not be time-locked to stimulus presentation and will vary between trials in intensity and, as such, may lead to an underestimate of neural activity (Tallon-Baudry & Bertrand, 1999). Because of this, we chose to employ an induced (time–frequency) analysis, epoching both to stimulus presentation (“picture up”) and also to speech output (when a participant announces that they are in a TOT state). Ours is thus the first attempt to study oscillatory phenomena in the TOT state.

We chose to study an older population for two reasons: first, because TOT rate increases with age (Brown & Nix, 1996) and, second, because we plan to carry out a similar experiment on patients with anomic aphasia and, as stroke risk increases with age (Rothwell et al., 2005), we want to have an age-matched population for this follow-on study.

Although a spectral analysis of TOT data has not been performed previously, there have been several analyses carried out on data from experiments probing memory including recall of semantic memories, which are most relevant here. A consistent finding in these studies is task-related reduction in both the alpha and beta bands with increases in both theta and gamma power associated with successful recall (Hanslmayr et al., 2011; Luo, Zhang, Feng, & Zhou, 2010; Hanslmayr, Spitzer, & Bauml, 2009). Oscillatory analyses of language paradigms have suggested band-specific differences between retrieval processes (clearly relevant here), associated with changes in the alpha band (Bastiaansen & Hagoort, 2006), whereas processes of unification (less relevant here, to do with combining semantic and syntactic information) produce increases in the beta and gamma bands. Simple motor tasks (such as a button press) are associated with a premovement reduction in power followed by a postmovement increase or “rebound” in power, usually in the upper alpha or beta bands (Pfurtscheller & da Silva, 1999). However, more complex sequences of movement also give rise to increases in gamma power (Kristevafeige, Feige, Makeig, Ross, & Elbert, 1993). Given this, we made the following predictions: (1) decreased alpha- and beta-band power associated with the three tasks in the following order, TOT (least power) < Know (middle) < DK (most power), and (2) more motor-related changes in power (reduced alpha or beta, increased gamma) before speaking in motor areas associated with speech production (bilateral sensorimotor cortex and rostral cerebellum).

METHODS

Participants

A group of 10 older participants, five men and five women, with a mean age of 70.2 years (SD = 4.9), participated in this study. All participants were right-handed and had normal or corrected-to-normal vision (using contact lenses). All participants were healthy and did not have any preexisting neurological or psychiatric disorder. Two participants were excluded from the analysis in which their MEG data were epoched to speech output because of the poor quality of their speech recordings. All provided written consent, and the study was approved by the Joint UCL/UCLH Ethics Committee.

Stimuli

Pictures of 300 celebrities were hand-picked from the Internet. The images were edited, cropped, converted to gray scale, and then resized using Matlab to achieve a consistent resolution. All images were the same height (400 pixels), but their width varied slightly to avoid distorting the face. The facial expressions were not controlled; instead, expressions characteristic of the famous person were encouraged. There was an emphasis on celebrities who had not been in the news recently to increase the number of TOT responses (Brown, 1991). Pilot testing indicated that, of the 300 pictures, 34 were more universally famous faces that were easier to name, for example, Elvis Presley, Marilyn Monroe, Winston Churchill, and Margaret Thatcher. To avoid participants being stuck in a perpetual TOT state, a stimulus from this “easy” category was presented to participants after each TOT trial until this list ran out.
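The interleaving rule for the “easy” stimuli can be made concrete with a short sketch. The Python fragment below is illustrative only (the original presentation script is not described), and the helper get_response is a hypothetical stand-in for presenting a picture and collecting the participant's verbal response.

```python
def build_presentation_order(main_stimuli, easy_stimuli, get_response):
    """Sketch of the stimulus-ordering rule described above.

    get_response(stimulus) is a hypothetical stand-in for showing a picture and
    collecting the response ('Know', 'DK' or 'TOT'). After every TOT response,
    the next item from the 'easy' list of very famous faces is shown, until
    that list runs out. No stimulus is ever repeated.
    """
    easy_remaining = list(easy_stimuli)
    order = []
    for stimulus in main_stimuli:
        order.append(stimulus)
        if get_response(stimulus) == 'TOT' and easy_remaining:
            easy_stimulus = easy_remaining.pop(0)
            order.append(easy_stimulus)
            get_response(easy_stimulus)
    return order
```

Because the injection points depend on each participant's own TOT responses, no two participants end up seeing the pictures in the same order, even though the initial list is fixed.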

MEG Scanning and Stimulus Presentation

Participants were comfortably seated in a VSM MedTech Omega 275 MEG scanner consisting of 275 third-order axial gradiometers arranged around the head. The sample rate was 600 Hz with an antialiasing filter of 150 Hz. Participants were asked to keep their heads as still as possible. They responded verbally to the pictures of 300 famous faces divided into five runs of approximately 8 min of continuous data acquisition, with short rest breaks in between. The ISI was in part controlled by the participant, so there was no upper limit for a response, but there was a lower limit of 8 sec. To ensure that participants understood the task, they were exposed to a practice data set, which was not included in the testing data set. The stimuli were not randomized across participants: the initial stimulus order was fixed, but the interjection of the “easy” stimuli meant that no two individuals saw the pictures in the same order. No stimuli were repeated.

Before every picture, a fixation cross appeared in the middle of the screen for 1000 msec. After this, a picture was displayed for 3 sec (Figure 1). The task was to respond to the picture with one of the three possible answers. If they recognized the celebrity in the picture and were able to retrieve his or her name, they were requested to say the name out loud as soon as they knew this (a “Know” trial). If they did not recognize the celebrity at all or they did recognize him or her but did not know his or her name, they were asked to reply with “No” as soon as they knew this (a “Don't Know” [DK] trial). Finally, if the participants recognized the celebrity and thought they knew his or her name but were unable to retrieve it, they were requested to respond with a “Yes” (a “TOT” trial). Their speech was recorded through an MEG-compatible microphone and stored digitally. Unfortunately, for two participants, these data were corrupted, and they had to be excluded from the second analysis (when MEG data were epoched to speech output).

Figure 1. 

Stimulus presentation and recording of participants' speech. A cross was displayed (lowest row) 1 sec before the picture of the to-be-named celebrity. The picture came on at time = 0. This time point, which was the same for all trials, was the epoching point for the first analysis. The picture was presented for 3 sec. Participants were asked to respond verbally in one of three ways: either with the name of the celebrity (a “Know” trial), say “No” if they did not recognize the celebrity or had no idea of his or her name (a “DK” trial), or say “Yes” if they experienced a TOT state (a TOT trial). The average RTs for each of the three trials are shown in black, gray, and light gray. These time points (different for each trial) were the epoching points for the second analysis.

After every TOT trial, participants were required to rate their confidence in their knowledge of the correct name of the given celebrity on a scale from 1 to 10, where 10 was “I am absolutely confident I know the name.” After the MEG data were collected, we performed a debrief, going through all the TOT pictures and eliminating all false TOT states, where the participant had mistakenly thought of another celebrity or actually did not know his or her name despite phonemic and semantic prompting. These trials (8%) were rejected from all further analyses. There was no formal trial rejection after this point as we were using a beamforming technique, which has good immunity to both external and physiological noise sources (Cheyne, Bostan, Gaetz, & Pang, 2007). The data passed onto the beamformer analysis were therefore broadband from 0 to 150 Hz.

Beamforming

First-level analysis was performed using Statistical Parametric Mapping (SPM8) software (Wellcome Trust Centre for Neuroimaging, London, UK; fil.ion.ucl.ac.uk/spm), where a linearly constrained minimum variance (LCMV) beamformer was used to test the effects of interest for each individual participant. For the second-level statistical analysis, nonparametric mapping was performed using the SnPM5 toolbox for SPM8 running under Matlab 7.11 (R2010b; Mathworks, Inc., Sherborn, MA). We used a single-sample pseudo-t test based on subtractions between pairs of conditions across participants. Beamforming does not require data to be phase- and time-locked to the stimulus onset and does not require a priori assumptions about the number of sources to model (Huang et al., 2004; Barnes & Hillebrand, 2003). We used the LCMV beamformer implemented in SPM8 with zero regularization to sample a volumetric grid with 5-mm spacing. The common data covariance was constructed over the two 1000-msec windows comprising the conditions to be compared, and a common spatial filter for both conditions was computed based on this covariance matrix. Orientation at each source voxel was computed using the method of Sekihara, Nagarajan, Poeppel, and Marantz (2004). At each grid location, a pseudo-t or normalized power difference between two conditions was calculated from the difference in projected power between conditions divided by the projected sensor noise (based on the lowest eigenvalue). In the current study, two between-condition source space analyses were performed using the LCMV beamformer. Given the average RTs for each condition (see Results section, but all >2.2 sec) and given that we opted for an induced analysis (as the cognitive events in question were not strictly time-locked to stimulus presentation), 1000 msec seemed a reasonable length of time over which to capture the cognitive events underlying TOT while avoiding changes in electrical activity because of articulatory motor preparation. First, we investigated the 1000-msec time window beginning at the time of stimulus presentation by epoching all events to when the picture was displayed (“picture up”; see Figure 1; the baseline was the 1000 msec before the picture was displayed, when the fixation cross was present). In the second analysis, we looked back in time 1000 msec from when the participants first enunciated their responses by epoching the same MEG data to the onset of speech output as judged from the spectrogram (the baseline was the 1000 msec after speech was initiated).
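For illustration, the sketch below shows how a common-filter LCMV contrast of one condition pair might look in MNE-Python. This is not the SPM8 pipeline actually used; the variable names (epochs, fwd) and condition labels are assumptions, and the simple power difference at the end is a simplified stand-in for the pseudo-t described above (with unit-noise-gain weights, the projected power is already noise-normalized).

```python
import mne
from mne.beamformer import make_lcmv, apply_lcmv_cov

# Assumed inputs: `epochs` (MEG epochs cut around picture onset, with condition
# labels 'TOT' and 'Know') and a forward model `fwd` on a 5-mm volumetric grid.

# Common data covariance over the two 1-s windows comprising the conditions to compare
cov_common = mne.compute_covariance(epochs[['TOT', 'Know']], tmin=0.0, tmax=1.0)

# Common spatial filter for both conditions (zero regularization, as in the paper)
filters = make_lcmv(epochs.info, fwd, cov_common, reg=0.0,
                    pick_ori='max-power', weight_norm='unit-noise-gain')

# Project each condition's 0-1 s covariance through the common filter to get source power
pow_tot = apply_lcmv_cov(mne.compute_covariance(epochs['TOT'], tmin=0.0, tmax=1.0), filters)
pow_know = apply_lcmv_cov(mne.compute_covariance(epochs['Know'], tmin=0.0, tmax=1.0), filters)

# Per-participant contrast image entered into the second-level (group) test
contrast = pow_tot.copy()
contrast.data = pow_tot.data - pow_know.data  # simplified stand-in for the pseudo-t
```

For the band-specific images reported below, the epochs would be band-pass filtered into the relevant band (e.g., 15–30 Hz) before computing the covariances.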

Statistical Nonparametric Mapping

Nonparametric statistics make fewer assumptions about the underlying distribution of the data than parametric ones. Rather than assuming that data are normally distributed, one can use the data themselves to generate the null distribution through permutation testing (Nichols & Holmes, 2002). In the case of the one-sample t test, this involves randomly permuting the sign of the difference over participants and constructing a histogram of maximal t values. Once all the permutations (typically 256) are calculated, an estimate of the statistical distribution under the null hypothesis can be created. We also used variance smoothing (30 mm FWHM) to give a more robust, locally pooled, variance estimate at each voxel (Nichols & Holmes, 2002). We used 256 permutations for both analyses, based on the size of the full permutation set for the analysis with the fewest participants (n = 8).
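The logic of the one-sample, maximum-statistic permutation test is easy to make concrete. The sketch below is a minimal NumPy version under stated assumptions: it enumerates the full set of sign flips over subjects (exactly 256 permutations when n = 8) and omits the variance smoothing that SnPM applies before forming the statistic.

```python
import numpy as np
from itertools import product

def one_sample_maxt(diff):
    """Max-statistic sign-flip permutation test for a one-sample design.

    diff : (n_subjects, n_voxels) array of per-subject condition differences,
           e.g. beamformer power for TOT minus Know at each voxel.
    Returns the observed statistic map and FWE-corrected p values.
    Variance smoothing (used in SnPM) is omitted here for brevity.
    """
    n_sub = diff.shape[0]
    stat = lambda x: x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n_sub) + 1e-12)

    t_obs = stat(diff)
    # Full set of sign flips: 2**n_sub permutations (256 when n_sub = 8)
    max_null = np.array([stat(np.array(signs)[:, None] * diff).max()
                         for signs in product([-1.0, 1.0], repeat=n_sub)])
    # FWE-corrected p: fraction of permutations whose maximum exceeds the observed value
    p_fwe = np.array([(max_null >= t).mean() for t in t_obs])
    return t_obs, p_fwe
```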

For all the analyses here, we compared all combinations (three) of the pairs of conditions: TOT versus Know, TOT versus DK, and Know versus DK. For each comparison, we looked separately at the four main frequency bands, alpha (5–15 Hz), beta (15–30 Hz), gamma (30–70 Hz), and theta (1–5 Hz), totaling 12 analyses for each of the two experimental settings, time-locked to the picture stimuli (n = 10) and time-locked to the speech output (n = 8). For each of these planned analyses, a single statistical image was entered for each participant into a one-sample pseudo-t test on differences for a single-condition design (second-level analysis). As we had no prior hypotheses on the spectral changes we might expect, we opted for a small number of bands of relatively large bandwidth. The idea here was to reduce the number of planned comparisons, remove any interindividual variability in alpha frequency, improve the signal-to-noise ratio, and still cover a continuous spectral range (Brookes et al., 2008). However, our designation of the alpha band in particular is broader than would normally be expected (5–15 rather than 8–12 Hz). To check, we recomputed the relative power change data in Figure 3 using classical (8–12 Hz) rather than (5–15 Hz) bands and found that the time series of one predicted 91% of the variance in the other.
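The check on the broader alpha definition amounts to asking how much variance one band-limited power time series explains in the other. A minimal sketch of that calculation is given below (squared Pearson correlation between, e.g., the 5–15 Hz and 8–12 Hz relative power traces from the same virtual electrode); the input arrays are assumed.

```python
import numpy as np

def variance_explained(power_ts_a, power_ts_b):
    """R^2 between two band-limited relative power time series
    (e.g., the 5-15 Hz and 8-12 Hz traces from the same virtual electrode)."""
    r = np.corrcoef(power_ts_a, power_ts_b)[0, 1]
    return r ** 2
```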

Time–Frequency Wavelet Plots

As mentioned above, all the statistical tests were group-level analyses performed on the beamformer data contrasting pairs of conditions. We also produced group grand-mean time–frequency plots from three of the key ROIs identified by the main analyses. These regions include the two classical peri-sylvian language regions (Broca's and Wernicke's areas) and an extra-sylvian region that many studies have identified as being important for verbal semantics (the left ventral temporal lobe). We did this by taking the per-subject (virtual electrode) time series estimates from the peak voxel identified in the group study. We then used a Morlet time–frequency decomposition (width, eight cycles) to create individual time–frequency spectrograms and averaged these to produce a grand mean. These are condition specific and display the spectral power over peristimulus time, which extends to the length of the whole trial (8000 msec) rather than segments that were used in statistical analyses (1000 msec; Figures 5 and 6).
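A sketch of this time–frequency step is given below. It assumes the per-subject virtual-electrode time series (ve, trials × samples) have already been extracted from the peak voxel by the beamformer, and it uses MNE-Python's Morlet routine rather than the implementation actually used; the baseline convention follows Figure 5 (the 1 sec before picture onset).

```python
import numpy as np
from mne.time_frequency import tfr_array_morlet

# Assumed input: `ve`, an (n_trials, n_times) array of virtual-electrode time
# series from the peak voxel, sampled at 600 Hz with 1 s of prestimulus baseline.
sfreq = 600.0
freqs = np.arange(5.0, 41.0)                    # 5-40 Hz, as plotted in Figures 5 and 6
power = tfr_array_morlet(ve[:, np.newaxis, :], sfreq,
                         freqs, n_cycles=8, output='power')
power = power.mean(axis=0)[0]                   # trial average -> (n_freqs, n_times)

# Express power as relative change from the 1 s prestimulus baseline
n_base = int(sfreq)                             # first second of each epoch
baseline = power[:, :n_base].mean(axis=1, keepdims=True)
rel_power = (power - baseline) / baseline

# Narrow-band power time series, as in Figure 5B and C: average over frequency bins
alpha_ts = rel_power[(freqs >= 5) & (freqs <= 15)].mean(axis=0)
beta_ts = rel_power[(freqs >= 15) & (freqs <= 30)].mean(axis=0)
```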

RESULTS

Behavioral Data

The average proportions of responses for the group (n = 10) were as follows (SD): Know 37% (13), DK 26% (13), TOT 29% (8), and rejected trials 8% (4). The average RT was fastest in the Know condition (2.23 sec, SD = 0.66) and slower in the DK condition (2.65 sec, SD = 0.63). In the TOT condition, participants tended to wait until the picture disappeared from the screen, and thus, the average RT was 3.1 sec (SD = 0.85). The participants were confident of their TOT state, producing the following central tendency values: mean = 7.48; median = 8; mode = 8 (26% of responses). See Figure 2.

Figure 2. 

Average percentage of the TOT confidence ratings. Participants most frequently rated their level of confidence with 8 out of 10 (25.6%), and they almost never rated the TOT state with 1 (0.2%).

MEG Responses

For the picture up condition, there were 865, 1090, and 774 valid events for TOT, Know, and DK conditions, respectively. For the speech output condition, the numbers of valid events were 705 (TOT), 709 (Know), and 671 (DK).

Picture-related Analysis (Epoched to Stimulus Presentation)

The brain regions where significant differences between pairs of conditions occurred in the alpha (5–15 Hz), beta (15–30 Hz), and gamma (30–70 Hz) frequency bands are shown in Table 1 and Figure 3. No significant differences were found in the theta (1–5 Hz) band. As mentioned in the Introduction, we were expecting reductions in both the alpha and beta bands as evidence of task-related spectral power change, whereas for the gamma band we were expecting increases.

Table 1. 

Regions of Significant Peak Activation by Frequency Band and by Contrast

Participants (n = 10)

Contrast and Band   Brain Region    MNI (x, y, z)    Pseudo-t   p (FWE)   Cluster (mm³)

1. TOT vs. Know
  5–15 Hz           –
  15–30 Hz          L Vent Temp     −42, −8, −34     6.16       .002      2098
  30–70 Hz          L cerebellum    −12, −32, −40    4.95       .001      1979

2. TOT vs. DK
  5–15 Hz           L Vent Temp     −32, −18, −42    5.13       .001      734
  15–30 Hz          –
  30–70 Hz          –

3. Know vs. DK
  5–15 Hz           L Vent Temp     −48, −22, −40    6.86       .001      685
  15–30 Hz          Insula          −42, −2, −26     5.15       .0049     1268
  30–70 Hz          –

Data epoched to “picture up” (see Figure 3).

Figure 3. 

Brain areas that showed significant differences in activation between conditions when epoched to picture up. The ordering of conditions is such that the first one is associated with the significant change in power (e.g., in A, the TOT condition has reduced beta compared with Know; in B, the Know condition has increased gamma compared with TOT). Color indicates the frequency bands: Alpha = green, Beta = red, and Gamma = Blue. See Table 1 for MNI peak coordinates and regional anatomical labels.

During the 1000 msec after stimulus presentation, there was significantly (p < .001, whole-brain FWE-corrected) less beta activity in the left ventral temporal region for TOT than for Know and greater gamma activity in the left cerebellum for Know than for TOT. There was significantly less alpha activity in the left ventral temporal region for TOT than for DK and also for Know compared with DK. Finally, there was less beta activity in the insula in the Know compared with the DK condition (see Table 1 and Figure 3).

Speech-related Analysis (Epoched to Speech Output)

In the 1000 msec time window before speech output, there was significantly less alpha activity in the left lateral posterior cerebellum for Know than for TOT (Table 2 and Figure 4) and greater gamma activity in the left insula for Know than for TOT. There was less alpha activity in left inferior temporal gyrus for TOT than for DK and less beta activation in peri-sylvian language areas (left pars opercularis, left insula, left and right posterior STS) for the same contrast. The left cerebellum and left middle temporal gyrus had less alpha in the Know compared with the DK condition.

Table 2. 

Regions of Significant Peak Activation for Each Contrast and Frequency Band for that Contrast When Looking 1000 msec Back in Time

Participants (n = 8)

Contrast and Band   Brain Region               MNI (x, y, z)    Pseudo-t   p (FWE)   Cluster (mm³)

1. Know–TOT
  5–15 Hz           L lateral cerebellum       −38, −66, −40    5.51       .016      75
  15–30 Hz          –
  30–70 Hz          L Insula                   −38, −6, 12      4.46       .031      364

2. TOT–DK
  5–15 Hz           L Inf Temp Gyrus           −54, −32, −30    4.92       .004      804
  15–30 Hz          Pars opercularis/Insula    −38               5.38       .004      998
                    Sup Temp Sulcus            36, −32, 18      5.14       .008      1130
                    pSTS/Angular Gyrus         −42, −44, 10     5.00       .008      384
  30–70 Hz          –

3. Know–DK
  5–15 Hz           L cerebellum               −54, −42, −26    6.86       .004      2085
                    L mid Temp Gyrus           −46, −74          4.98       .008      869
  15–30 Hz          –
  30–70 Hz          –

Epoched to the speech output (see Figure 4).

Figure 4. 

Brain areas that showed significant differences in activation between conditions 1000 msec before speech output for each frequency band. Ordering of conditions and color codes as for Figure 3: alpha = green, beta = red, and gamma = blue. See Table 2 for MNI peak coordinates and regional anatomical labels.

Time–Frequency Wavelet Plots

These plots show the strength of activation in three ROIs over time for single conditions rather than their comparisons (group grand average). The time period extends to the length of the whole trial (8000 msec) rather than the segments that were used in the statistical analyses (1000 msec). Note that the spectrograms consist of a transient burst of relatively broadband activity (corresponding to stimulus onset) and are then dominated by sustained alpha power, modulated to different degrees depending on the stimulus condition (Figures 5 and 6). The reduction in alpha power just after picture presentation in the TOT and Know conditions compared with the DK condition is clearly visible in Figure 5. In Figure 6, the reduced beta just before speech output for TOT versus DK is a little harder to appreciate (as the spectrogram is epoched to picture up) but can be seen in the 2–3 sec poststimulus period in both posterior Broca's area and Wernicke's area. Sustained left hemisphere spectral changes like this have been reported in previous studies on visual language (reading; Goto et al., 2011; Bastiaansen, Magyari, & Hagoort, 2010).

Figure 5. 

(A) Beamformer extracted time–frequency wavelet plots (5–40 Hz) for the region of the left ventral temporal area (MNI: −41 −16 −39 part of the basal language area), which overlapped and was activated in all pairs of conditions in statistical nonparametric mapping analysis 1000 msec after picture presentation. Darker red and blue areas indicate increases and decreases relative to baseline power (−1 to 0 sec). (B) and (C) show relative power change (from baseline −1 to 0 sec) in narrow band beamformer extracted time series for this same region in 5–15 and 15–30 Hz bands, respectively. Note the reduced 5–15 Hz power for TOT and Know over DK trials (Figure 3B and C) and reduced 15–30 Hz power for Know versus TOT (Figure 3A) in the 0–1 sec time window. It is also interesting to note that the beta (15–30 Hz) power decrease is transient in DK and Know conditions but is sustained until the end of the trial in the TOT condition (as participants presumably search for the name).

Figure 6. 

Time–frequency wavelet plots (5–40 Hz frequency band) for the region of posterior Broca's area (top row) and Wernicke's area (bottom row). Darker red areas show high power, and dark blue shows low power. There was significantly less beta power observed in TOT condition than in the DK condition just before the speech output (Figure 4E). NB: picture presented at time 0 but statistical test on data 1 sec before speech output (2–3 sec after picture up).

To get a picture of the time–frequency dynamics underlying the changes we observed in the statistical analysis, we took an ROI at the average (over the three L Vent Temp coordinates in Table 1, picture up condition) location of the peak in the ventral temporal area and computed beamformer weights based on a −1 to 7 sec time window around stimulus onset for both broadband (5–40 Hz) and narrow band (5–15 and 15–30 Hz) frequency windows. These estimates were baseline-corrected using the first second of the window (−1 to 0 sec relative to stimulus onset) and wavelet-transformed using a 5-cycle Morlet window. Figure 5A shows the spectrogram of the broadband power change over time. For the narrow band signals, we simply summed the wavelet power estimates across frequency bins (5–15 or 15–30 Hz) and plotted this power, relative to baseline, as a time series (Figure 5B, C).

DISCUSSION

Using a beamforming technique allowed us to detect changes in the power of cortical rhythms that were time-locked to the stimulus onset (picture up analysis), as well as those that were not time-locked (speech output analysis); the latter has not been reported before for TOT states. A potential downside of the technique is that data were averaged over longer time frames (a second), sacrificing some of the temporal resolution of MEG, although we used this information to divide task-related responses into spectral bins (Barnes & Hillebrand, 2003). By anchoring the two analyses to stimulus presentation and speech output, respectively, and by including conditions where knowledge was present and retrievable (Know), present and irretrievable (TOT), and not present (DK), we could investigate both early processes (concept generation and word finding) and later ones (motor plan generation and initiation of speech output) that appear in Levelt's speech production model. Interestingly, we identified some commonalities across both analyses, specifically an alpha decrease for TOT versus DK in the left ventral temporal lobe.

The epochs we employed were quite long (1000 msec). In standard experiments of cued retrieval that probe Levelt's model, articulation is generally underway by 600 msec (Indefrey & Levelt, 2004); however, by concentrating on an older population and by using stimuli designed to induce the TOT state, this time course was stretched out to >2000 msec on average for both the Know and DK trials and >3000 msec on average for the TOT trials. Although we cannot be certain that this elongation in time affected all four components of Levelt's model equally, we think it is reasonable to conjecture that it might have.

One could argue that the DK trials have limited value because trials in which the celebrity was not recognized cannot be distinguished from those in which the celebrity was recognized but the name was not known. Nevertheless, these trial types are commonly reported (Bujan et al., 2010; Shafto et al., 2010; Bujan, Lindin, & Diaz, 2009; Diaz et al., 2007; Maril et al., 2001, 2005; Kikyo et al., 2001), and we report them here for completeness.

We should note that, in this study, we characterized spectral change over a 1-sec window; although this might seem rather coarse, it is the price we have to pay for being able to characterize these changes in frequency. Indeed it is difficult to sustain the argument that all stimuli that give rise to a TOT state will do so in a strictly time-dependent fashion. Tallon-Baudry makes a strong case for induced response analyses in such cases (Tallon-Baudry & Bertrand, 1999). We think this characterization in terms of frequency is especially important given new theoretical and empirical insights that suggest feedforward connections to be characterized by generally higher frequency modulations than feedback (e.g., Bastos et al., 2012). Also, the beamformer formulation used throughout this manuscript is a data-dependent spatial filter, that is, the image structure is inhomogeneous and dependent on the time–frequency window used to construct the (beamformer) spatial filter (Barnes & Hillebrand, 2003). This necessarily means that the initial choice of time windows (and frequency bands) will affect the volumetric reconstruction. Too wide a window and one risks averaging over a number of nonstationary events; too narrow a window and one loses spatial resolution (Brookes et al., 2008).

Picture-related Analysis

Results of this analysis were dominated by differential activation of the left ventral temporal lobe. Consistent with our predictions, we saw a reduction in task-related spectral power in the alpha and beta bands for the TOT condition compared with the Know and DK conditions, with the least activation (least reduction in power) in the DK condition. When in the TOT state, participants were probably more actively searching for semantic information to help support the goal of the task proper, which was name retrieval. This is supported by the comparison of TOT with DK, where the cognitive difference occurs at the level of recognition. In the DK state, the participants are unable to access rich semantic information about the celebrity because they do not know who the celebrity is. In the TOT state, it is possible that the participants were using a phonemic strategy to aid word retrieval; however, given where these activations appeared, it is more likely that they were engaging semantic strategies, although we did not specifically ask this in the posttest interview. The involvement of the left ventral temporal lobe in semantic processing has been shown in many experiments, particularly when language tokens (word forms) are involved (Visser & Lambon Ralph, 2011). When comparing grammatically correct sentences with plausible versus implausible meanings, neural activation has been found in the anterior middle temporal gyrus, revealing differences in semantic representations (Obleser & Kotz, 2010; Rogalsky & Hickok, 2009). In addition, a meta-analysis of 120 fMRI studies of the semantic system associated the left ventral temporal lobe with semantic processing (Binder, Desai, Graves, & Conant, 2009). In terms of brain pathology, left temporal lobectomy (which includes parts of the ventral temporal lobe) is associated with anomia (Ralph, Ehsan, Baker, & Rogers, 2012). A recent study in patients with primary progressive aphasia demonstrated that face-naming impairments correlated with atrophy of the left anterior temporal lobe (Gefen et al., 2013). Conversely, activity in this region, also referred to as the “basal language area,” has been linked to semantic retrieval in response to phonological cues in both healthy controls and stroke patients (Sharp, Scott, & Wise, 2004).

Left ventral temporal lobe activation was similarly reported in the previous MEG study on TOT (Lindin et al., 2010). Lindin et al. only looked at two contrasts, whereas we examined six. We found differences in the left ventral temporal lobe for TOT versus DK as well as TOT versus Know. We replicated their result (Figure 3A, in the beta band) in our first analysis, which fits with their finding of TOT > Know later in the epoch. However, we also found differences in or near this region for our other contrasts (TOT vs. DK and Know vs. DK). Contrary to results from this study, Lindin et al. found more activation in the Know than in the TOT condition; however, they detected this change in the 210–310 msec time interval, which is only a proportion of the interval analyzed here. It is possible that, soon after the picture stimulus, more semantic retrieval is present in the Know condition than in the TOT condition, but that this fades with time for the Know condition as the retrieval process is successful. Therefore, we may have observed more activity in a similar region for TOT than Know because of our longer time frame, during which semantic search for the TOT condition continues. We also analyzed much later into the epoch than Lindin et al. in the second analysis, and this may explain why we identified classical peri-sylvian language areas.

The other activations identified by this analysis appeared in regions associated with motor speech output, which we were expecting to find in the later analysis. We found increased gamma activity in the left cerebellum for the Know versus TOT condition. The Know condition is the only one where the participants are able to produce a proper noun (the celebrity's name). Cerebellar activation is typically detected when speech is produced, underlying the motor aspects of speech production (Price, 2010). As the analysis was limited to the first second after the picture stimulus, articulation was yet to start, but it may be that preparation had begun. Thus, more activation was found in the Know condition, where the preparation for a complex multisyllabic spoken answer was probably already underway, rather than a simple “Yes” (for TOT, where the participants were probably still searching) or a simple “No,” which was probably being planned in the DK trials. Lastly, we identified less beta activity in the insula for the Know versus DK contrast, confirming findings from a recent TOT ERP study (Bujan et al., 2009). The insula is involved in articulation (Brown et al., 2009; Dronkers, 1996), but unlike the cerebellum, its activation does not depend on the number of syllables being produced (Papoutsi et al., 2009).

Finally, it is also worth mentioning that in this study we used 10 participants, sufficient to identify numerous significant volume-corrected effects of the task; however, it is certainly possible that with more statistical power we might have seen activity in other candidate regions or time windows. For example, although it is always difficult to make much of a null result using frequentist statistics, unlike other studies (Shafto, Burke, Stamatakis, Tam, & Tyler, 2007; Maril et al., 2001), we did not identify the cognitive control network in any of our TOT contrasts. This network is clearly involved in all three tasks but should be more evident in the TOT state as shown by Maril et al. (2001). This may be a mode-of-imaging issue as ERP studies have not reported activity in the cognitive control network in TOT studies.

Speech-related Analysis

Results from the second analysis highlighted differences in neural activity 1 sec before speech onset. We were expecting to identify neural processes underpinning the latter parts of Levelt's model (phonological assembly and initiation of the articulatory process) and possibly the continuing search component for the TOT state, as for the other two states (Know and DK), the search should have been largely over.

Dealing first with the activity identified in motor areas, as expected, there was reduced alpha activity in the left cerebellum in the Know versus TOT condition (Figure 4A) and a similar reduction in alpha for Know versus DK (Figure 4E). There was greater gamma band activity in the left insula for Know over TOT. These results are compatible with the planning and preparation of the more complex speech output required when participants answered with the celebrity's name rather than a monosyllabic word (“Yes” or “No”). The previously discussed assumption that activation in the left cerebellum decreases with familiarity (Moser et al., 2009) would explain why less activation was found when participants did not recognize or were not familiar with the person in the picture. The left insula activity is also consistent with evidence from several structural imaging studies, although its exact role in a Levelt-like model remains moot. These studies have been unable to definitively place the role of the insula on either side of the phonological retrieval versus articulatory planning of speech divide (Shafto et al., 2007; Hillis et al., 2004; Dronkers, 1996).

Unexpectedly, we found decreased activation in the beta band in the classical peri-sylvian language areas (pars opercularis, the posterior STS, and angular gyrus) for TOT over DK. Buchsbaum and D'Esposito (2009) have proposed that such activation reflects phonological retrieval processes, which automatically occur during successful recognition. Because both DK and TOT states are characterized by unsuccessful retrieval, more activation was expected in the Know state, with perhaps the TOT state having more activity than DK. This qualitatively appeared to be the case, with a similar beta decrease for both Know and TOT (see Figure 6), but statistically, the only contrast to pass threshold was TOT versus DK. These regions have many functions and are associated with both semantic and phonological retrieval as well as preparation to speak (Price, 2010). Given that the participants were still “hung up” in the TOT state at this point, these activations most likely represent a linguistic search strategy (phonological retrieval) rather than processes involved in speech output.

Lastly, we also identified activity common to the first analysis with reduced alpha in the left ventral temporal lobe for TOT versus DK (Figure 4C compared with Figure 3C). A previous MEG study of reading found reduced alpha activity here when sensory stimuli are linked to semantic memories (Goto et al., 2011). We interpret this enduring response as reflecting continued semantic search (given the location of the activity) related to the celebrity when suspended in the TOT state. Given this and the peri-sylvian beta deactivations just mentioned, our results are most compatible with models that posit parallel activation of semantic and phonological information during face processing (Bujan, Galdo-Alvarez, Lindin, & Diaz, 2012; Rahman, Sommer, & Schweinberger, 2002).

General Conclusions

If Levelt's serial model for speech production holds true, then this study should have identified brain regions associated solely with early components (concept activation and phonological retrieval) in the picture-related analysis and solely with late components (phonological assembly and initiation of the articulatory process) in the speech-related analysis, with the possible exception of a continued search in the TOT state. However, brain regions associated with motor output (cerebellum and insula) were identified by both analyses, as were regions identified as supporting verbal semantic processing. This suggests that, assuming the basic cognitive components of Levelt's model are correct, there is cross talk between them during the process of naming a famous face. An analysis of effective connectivity between the regions identified by this study would be one way to take this hypothesis further.

The interpretation of the enduring responses seen in the ventral temporal lobe (alpha band) across both analyses as being related to semantic search is consistent with the aphasia literature. Aphasic stroke patients, who usually have damage to the left middle cerebral artery territory and in whom the ventral temporal lobe is thus spared, have naming problems mainly associated with poor retrieval; conversely, patients with semantic dementia, which affects the ventral temporal lobe, appear to have lost the long-term conceptual representations themselves (Jefferies & Lambon Ralph, 2006). This places the left ventral temporal lobe as a node in the language network that is likely to play an important role in anomia recovery in aphasic stroke.

Perhaps surprisingly, classical peri-sylvian activity has not previously been identified in TOT studies. We did identify these regions in the second analysis (Figure 4E), probably because we epoched the data to speech output, which allowed us to follow the continuing “hung up” aspect of the TOT state further forward in time from stimulus presentation than previously. We identified reduced beta power in the left inferior frontal gyrus in the TOT state, a region that has recently been associated with semantic encoding (Hanslmayr et al., 2011). In our study, however, there was little in the way of encoding (the participants already knew the names of the TOT celebrities), which suggests that brain regions that encode memories are also involved in their attempted retrieval.

Future Directions

We would like to finish with a few suggestions for future work. First, we should note that we examined only induced responses. It would be interesting to further characterize the balance between evoked and induced changes, for instance, in the insula and cerebellum. Second, we have focused on a simple division of Levelt's model into halves, but within the first half there are two main routes to successful naming, semantic and phonological. An imaging study that used stimuli or a framing paradigm that manipulated the cognitive loading onto one compared with the other might be able to tease the two apart in both space and time. Finally, we are interested in therapies for acquired language disorders that lead to TOT-like states (e.g., poststroke anomia). It would be interesting to understand, from a mechanistic point of view, how and where therapies interact with the residual functional naming network by comparing connectivity patterns before and after successful rehabilitation.

Acknowledgments

This work was supported by the Wellcome Trust, UK (ME033459MES) and by an MRC UK MEG Partnership Grant (MR/K005464/1). We wish to acknowledge our dear friend and colleague, Dr. Thomas Schofield, who helped design, pilot, and collect data for this study. Tom tragically lost his life in a road-traffic accident in 2010.

Reprint requests should be sent to Alex P. Leff, Institute of Cognitive Neuroscience, 17 Queen Square, Alexandra House, London, WC1N 3AR, UK, or via e-mail: a.leff@ucl.ac.uk.

REFERENCES

Barnes, G. R., & Hillebrand, A. (2003). Statistical flattening of MEG beamformer images. Human Brain Mapping, 18, 1–12.
Bastiaansen, M., & Hagoort, P. (2006). Oscillatory neuronal dynamics during language comprehension. Progress in Brain Research, 159, 179–196.
Bastiaansen, M., Magyari, L., & Hagoort, P. (2010). Syntactic unification operations are reflected in oscillatory dynamics during on-line sentence comprehension. Journal of Cognitive Neuroscience, 22, 1333–1347.
Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., & Friston, K. J. (2012). Canonical microcircuits for predictive coding. Neuron, 76, 695–711.
Binder, J. R., Desai, R. H., Graves, W. W., & Conant, L. L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19, 2767–2796.
Brookes, M. J., Vrba, J., Robinson, S. E., Stevenson, C. M., Peters, A. M., Barnes, G. R., et al. (2008). Optimising experimental design for MEG beamformer imaging. Neuroimage, 39, 1788–1802.
Brown, A. S. (1991). A review of the tip-of-the-tongue experience. Psychological Bulletin, 109, 204–223.
Brown, A. S., & Nix, L. A. (1996). Age-related changes in the tip-of-the-tongue experience. American Journal of Psychology, 109, 79–91.
Brown, S., Laird, A. R., Pfordresher, P. Q., Thelen, S. M., Turkeltaub, P., & Liotti, M. (2009). The somatotopy of speech: Phonation and articulation in the human motor cortex. Brain and Cognition, 70, 31–41.
Bruce, C., & Howard, D. (1988). Why don't Broca's aphasics cue themselves? An investigation of phonemic cueing and tip of the tongue information. Neuropsychologia, 26, 253–264.
Bujan, A., Galdo-Alvarez, S., Lindin, M., & Diaz, F. (2012). An event-related potentials study of face naming: Evidence of phonological retrieval deficit in the tip-of-the-tongue state. Psychophysiology, 49, 980–990.
Bujan, A., Lindin, M., & Diaz, F. (2009). Movement related cortical potentials in a face naming task: Influence of the tip-of-the-tongue state. International Journal of Psychophysiology, 72, 235–245.
Burke, D. M., Mackay, D. G., Worthley, J. S., & Wade, E. (1991). On the tip of the tongue—What causes word finding failures in young and older adults. Journal of Memory and Language, 30, 542–579.
Caramazza, A. (1997). How many levels of processing are there in lexical access? Cognitive Neuropsychology, 14, 177–208.
Cheyne, D., Bostan, A. C., Gaetz, W., & Pang, E. W. (2007). Event-related beamforming: A robust method for presurgical functional mapping using MEG. Clinical Neurophysiology, 118, 1691–1704.
Diaz, F., Lindin, M., Galdo-Alvarez, S., Facal, D., & Juncos-Rabadan, O. (2007). An event-related potentials study of face identification and naming: The tip-of-the-tongue state. Psychophysiology, 44, 50–68.
Dronkers, N. F. (1996). A new brain region for coordinating speech articulation. Nature, 384, 159–161.
Galdo-Alvarez, S., Lindin, M., & Diaz, F. (2009). The effect of age on event-related potentials (ERP) associated with face naming and with the tip-of-the-tongue (TOT) state. Biological Psychology, 81, 14–23.
Gefen, T., Wieneke, C., Martersteck, A., Whitney, K., Weintraub, S., Mesulam, M. M., et al. (2013). Naming vs knowing faces in primary progressive aphasia: A tale of 2 hemispheres. Neurology, 81, 658–664.
Gollan, T. H., & Brown, A. S. (2006). From tip-of-the-tongue (TOT) data to theoretical implications in two steps: When more TOTs means better retrieval. Journal of Experimental Psychology: General, 135, 462–483.
Goto, T., Hirata, M., Umekawa, Y., Yanagisawa, T., Shayne, M., Saitoh, Y., et al. (2011). Frequency-dependent spatiotemporal distribution of cerebral oscillatory changes during silent reading: A magnetoencephalographic group analysis. Neuroimage, 54, 560–567.
Ham, T., Leff, A., de Boissezon, X., Joffe, A., & Sharp, D. J. (2013). Cognitive control and the salience network: An investigation of error processing and effective connectivity. Journal of Neuroscience, 33, 7091–7098.
Hanslmayr, S., Spitzer, B., & Bauml, K. H. (2009). Brain oscillations dissociate between semantic and nonsemantic encoding of episodic memories. Cerebral Cortex, 19, 1631–1640.
Hanslmayr, S., Volberg, G., Wimber, M., Raabe, M., Greenlee, M. W., & Bauml, K. H. T. (2011). The relationship between brain oscillations and BOLD signal during memory formation: A combined EEG-fMRI study. Journal of Neuroscience, 31, 15674–15680.
Huang, M. X., Shih, J. J., Lee, R. R., Harrington, D. L., Thoma, R. J., Weisend, M. P., et al. (2004). Commonalities and differences among vectorized beamformers in electromagnetic source imaging. Brain Topography, 16, 139–158.
Indefrey, P., & Levelt, W. J. (2004). The spatial and temporal signatures of word production components. Cognition, 92, 101–144.
James, W. (1890). The principles of psychology (Vol. 1). New York: Henry Holt & Co.
Jefferies, E., & Lambon Ralph, M. A. (2006). Semantic impairment in stroke aphasia versus semantic dementia: A case-series comparison. Brain, 129, 2132–2147.
Kikyo, H., Ohki, K., & Sekihara, K. (2001). Temporal characterization of memory retrieval processes: An fMRI study of the “tip of the tongue” phenomenon. European Journal of Neuroscience, 14, 887–892.
Kristevafeige, R., Feige, B., Makeig, S., Ross, B., & Elbert, T. (1993). Oscillatory brain activity during a motor task. NeuroReport, 4, 1291–1294.
Levelt, W. J. M. (1999). Models of word production. Trends in Cognitive Neuroscience, 3, 223–232.
Lindin, M., Diaz, F., Capilla, A., Ortiz, T., & Maestu, F. (2010). On the characterization of the spatio-temporal profiles of brain activity associated with face naming and the tip-of-the-tongue state: A magnetoencephalographic (MEG) study. Neuropsychologia, 48, 1757–1766.
Luo, Y., Zhang, Y., Feng, X., & Zhou, X. (2010). Electroencephalogram oscillations differentiate semantic and prosodic processes during sentence reading. Neuroscience, 169, 654–664.
Maril, A., Simons, J. S., Weaver, J. J., & Schacter, D. L. (2005). Graded recall success: An event-related fMRI comparison of tip of the tongue and feeling of knowing. Neuroimage, 24, 1130–1138.
Maril, A., Wagner, A. D., & Schacter, D. L. (2001). On the tip of the tongue: An event-related fMRI study of semantic retrieval failure and cognitive conflict. Neuron, 31, 653–660.
Nichols, T. E., & Holmes, A. P. (2002). Nonparametric permutation tests for functional neuroimaging: A primer with examples. Human Brain Mapping, 15, 1–25.
Obleser, J., & Kotz, S. A. (2010). Expectancy constraints in degraded speech modulate the language comprehension network. Cerebral Cortex, 20, 633–640.
Papoutsi, M., de Zwart, J. A., Jansma, J. M., Pickering, M. J., Bednar, J. A., & Horwitz, B. (2009). From phonemes to articulatory codes: An fMRI study of the role of Broca's area in speech production. Cerebral Cortex, 19, 2156–2165.
Pfurtscheller, G., & da Silva, F. H. L. (1999). Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clinical Neurophysiology, 110, 1842–1857.
Price, C. J. (2010). The anatomy of language: A review of 100 fMRI studies published in 2009. Annals of the New York Academy of Sciences, 1191, 62–88.
Rahman, R. A., Sommer, W., & Schweinberger, S. R. (2002). Brain-potential evidence for the time course of access to biographical facts and names of familiar persons. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 366–373.
Rahman, R. A., van Turennout, M., & Levelt, W. J. (2003). Phonological encoding is not contingent on semantic feature retrieval: An electrophysiological study on object naming. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 850–860.
Ralph, M. A. L., Ehsan, S., Baker, G. A., & Rogers, T. T. (2012). Semantic memory is impaired in patients with unilateral anterior temporal lobe resection for temporal lobe epilepsy. Brain, 135, 242–258.
Rogalsky, C., & Hickok, G. (2009). Selective attention to semantic and syntactic features modulates sentence processing networks in anterior temporal cortex. Cerebral Cortex, 19, 786–796.
Rothwell, P. M., Coull, A. J., Silver, L. E., Fairhead, J. F., Giles, M. F., Lovelock, C. E., et al. (2005). Population-based study of event-rate, incidence, case fatality, and mortality for all acute vascular events in all arterial territories (Oxford Vascular Study). Lancet, 366, 1773–1783.
Schwartz, B. L., & Frazier, L. D. (2005). Tip-of-the-tongue states and aging: Contrasting psycholinguistic and metacognitive perspectives. Journal of General Psychology, 132, 377–391.
Sekihara, K., Nagarajan, S. S., Poeppel, D., & Marantz, A. (2004). Asymptotic SNR of scalar and vector minimum-variance beamformers for neuromagnetic source reconstruction. IEEE Transactions on Biomedical Engineering, 51, 1726–1734.
Shafto, M. A., Stamatakis, E. A., Tam, P. P., & Tyler, L. K. (2010). Word retrieval failures in old age: The relationship between structure and function. Journal of Cognitive Neuroscience, 22, 1530–1540.
Sharp, D. J., Scott, S. K., & Wise, R. J. (2004). Retrieving meaning after temporal lobe infarction: The role of the basal language area. Annals of Neurology, 56, 836–846.
Tallon-Baudry, C., & Bertrand, O. (1999). Oscillatory gamma activity in humans and its role in object representation. Trends in Cognitive Sciences, 3, 151–162.
Valentine, T., Brennen, T., & Bredart, S. (1996). The cognitive psychology of proper names: On the importance of being Ernest. London: Routledge.
Vigliocco, G., Antonini, T., & Garrett, M. F. (1997). Grammatical gender is on the tip of Italian tongues. Psychological Science, 8, 314–317.
Visser, M., & Lambon Ralph, M. A. (2011). Differential contributions of bilateral ventral anterior temporal lobe and left anterior superior temporal gyrus to semantic processes. Journal of Cognitive Neuroscience, 23, 3121–3131.