Development of measures to preserve cognitive function or even reverse cognitive decline in the ever-growing elderly population is the focus of many research and commercial efforts. One such measure gaining in popularity is the development of computer-based interventions that “exercise” cognitive functions. Computer-based cognitive training has the potential to be specific and flexible, accommodates feedback, and is highly accessible. As in most budding fields, there are still considerable inconsistencies across methodologies and results, as well as a lack of consensus on a comprehensive assessment protocol. We propose that the success of training-based therapeutics will rely on targeting specific cognitive functions, informed by comprehensive and sensitive batteries that can provide a “fingerprint” of an individual's abilities. Instead of expecting a panacea from training regimens, focused and personalized training interventions that accommodate individual differences should be developed to redress specific patterns of deficits in cognitive rehabilitation, both in healthy aging and in disease.
Over the last century, the fields of medicine and health care have had tremendous success, increasing life expectancy to an average of approximately 80 years (www.cdc.gov/nchs/fastats/life-expectancy.htm). Although improvements in physical health have prolonged lifespan, preserving cognitive health has remained a fundamental challenge. Normal aging, as well as disorders associated with old age (e.g., dementias), has been associated with considerable decline in fundamental cognitive abilities, such as attention, working memory, long-term memory, decision-making, and task switching, to name just a few (e.g., Stark, Stevenson, Wu, Rutledge, & Stark, 2015; Brockmole & Logie, 2013; Chowdhury et al., 2013; Peich, Husain, & Bays, 2013; Stark, Yassa, Lacy, & Stark, 2013; Zanto, Sekuler, Dube, & Gazzaley, 2013; Chowdhury, Guitart-Masip, Bunzeck, Dolan, & Düzel, 2012; Smyth & Shanks, 2011; Zanto et al., 2011; Brockmole, Parra, Della Sala, & Logie, 2008; Gazzaley, Cooney, Rissman, & D'Esposito, 2005). Importantly, cognitive decline impacts quality of life by adversely affecting activities of daily living such as driving, shopping, or taking medication (Maki et al., 2014; Bárrios et al., 2013; Teng, Tassniyom, & Lu, 2012; Wadley, Okonkwo, Crowe, & Ross-Meadows, 2008), as well as by compromising social interactions and relations (e.g., Davies et al., 2010; Frank et al., 2006). Developing approaches to preserve healthy cognitive function and improve the quality of life of the elderly population is therefore a primary focus of research efforts.
In the last decade, interest has been growing in both the medical sector and the technology industry in developing computer-based cognitive training interventions that can preserve or even improve cognition. These provide an alternative to pharmacological treatments and could be used independently or in combination with medication. Cognitive training is characterized by several features that make it a highly promising lifestyle-based intervention, one that could dramatically improve mental life in older adults and in patients. Numerous enterprises are sprouting to develop brain training tasks that exercise cognitive abilities, analogous to using physical exercise to improve physical fitness and health. This review takes stock of the empirical research that supports the development of cognitive training in the elderly population. We comment on successes and limitations and propose key factors that need to be considered for the field to advance.
ADVANTAGES OF COMPUTER-BASED TRAINING
There are a number of key aspects that, if well implemented, can set computer-based cognitive training apart from other medical interventions. Should such interventions prove effective, they have the potential to transform our approach to preserving cognitive health during normal aging and in disease.
First, computer-based training can be directed to a specific cognitive function and thereby selectively trigger plasticity or changes in efficiency in the specific neural systems that support the trained cognitive function. Although arguably not to the same extent as in younger individuals, the aging brain retains the capacity for plasticity (Li et al., 2008; Craik & Bialystok, 2006; Li, Brehmer, Shing, Werkle-Bergner, & Lindenberger, 2006), and thus, it is possible to drive plasticity and improvements in efficiency in selective neural systems. The natural and inherent ability of cognitive training to target specific neural systems is in contrast to pharmacological medications, which tend to act in a diffuse manner in the brain to influence chemicals involved in widespread neurotransmitter or neuromodulator systems linked to a deficit or disorder. Typically, any given neurotransmitter or neuromodulator is involved in many neural systems and cognitive functions. As a consequence, treatment effects often entail other, unplanned and potentially undesirable side effects. Cognitive interventions therefore naturally provide a superior level of specificity and neural targeting, which pharmacological treatments struggle to achieve.
An example of neurally specific cognitive training comes from a recent study that trained the ability to suppress distraction in older human participants and rats, a function often reported to be impaired in the elderly (Mishra, de Villers-Sidani, Merzenich, & Gazzaley, 2014). The task involved identifying target tones presented infrequently among other distracting tones. In the distractor suppression training task, the adaptive element focused on recognizing and ignoring distractors: as training progressed, the distractors became progressively more similar to the target tone in frequency, requiring increasingly refined perceptual discrimination. As a stringent control, a different group of participants was also required to discriminate targets amidst distracting tones in a similar training environment, but here the adaptive aspect of the task focused on recognizing and selecting the target stimuli, which became progressively more similar to the distractors. Of these closely matched tasks, only the distractor suppression training redressed the intended deficit specifically, as measured by a decrease in the proportion of incorrect responses to distractors. Behavioral improvement was accompanied by changes at the neural level in both species: diminished neuronal firing to distractors in the rat auditory cortex and attenuated early auditory event-related potentials to distractor tones in humans.
A related benefit of brain training interventions is that they can be personalized to accommodate individual differences. It is possible to tailor a training regimen to target and train specific abilities based on the cognitive profile or “fingerprint” of the participant. This contrasts with pharmacological treatments, in which both the type and dosage of medication are typically based on normative data. Peretz and colleagues (2011) successfully implemented personalized cognitive training using the “CogFit” personal coach. The time spent playing the training tasks for each participant was determined by performance in a battery of baseline cognitive tests. Compared with a group of individuals playing conventional video games (e.g., Tetris, Puzzled, or Snake), personalized cognitive training yielded superior training benefits.
Similarly, one can incorporate immediate quantitative feedback, modifying task demands as the training evolves to optimize the regimen for each individual. Although in principle it is possible to regulate dosage in pharmacological treatments, obtaining accurate and quantitative feedback to guide dosing can be highly challenging. In other words, cognitive training tasks can change flexibly based on the participant's performance to ensure maximum gain. This homeostatic dynamic adjustment based on continual feedback is referred to as a closed-loop system (Mishra & Gazzaley, 2015). A few studies have indeed shown benefits of closed-loop training tasks that adjust parameters adaptively according to participants' performance (e.g., Mishra, de Villers-Sidani, Merzenich, & Gazzaley, 2014; Anguera et al., 2013; Smith et al., 2009; Mahncke et al., 2006). For example, Anguera and colleagues (2013) trained older participants on NeuroRacer, a custom-designed adaptive multitasking video game intended to enhance the ability to multitask effectively in the context of driving while responding appropriately to attention-grabbing transient events. In the adaptive group, the difficulty of the task was modified after each run according to the participant's performance, ensuring continuous challenge and a high level of engagement and motivation. The results showed a reduction in multitasking costs, as well as enhanced performance in tasks that measured attention and working memory following training. Training benefits were observed only in the adaptive multitasking group compared with a group of individuals performing each of the tasks in isolation. However, it is important to note that, because both groups trained for the same number of hours and the multitasking group performed both tasks concurrently, participants in the multitasking procedure completed twice as many trials per task as the control group. Considering this limitation, these findings require replication and extension.
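The closed-loop adjustment described above can be illustrated with a simple accuracy-based staircase. This is a minimal sketch, not the algorithm used in NeuroRacer or any published regimen; the window size, accuracy thresholds, and step size are hypothetical values chosen for illustration.

```python
from collections import deque

class ClosedLoopStaircase:
    """Minimal closed-loop difficulty controller (illustrative only).

    Difficulty rises when recent accuracy is high and falls when it is
    low, keeping the participant continuously challenged. All parameter
    values here are hypothetical.
    """

    def __init__(self, level=1, window=10, raise_at=0.8, lower_at=0.6,
                 min_level=1, max_level=20):
        self.level = level
        self.window = window          # number of trials per evaluation
        self.raise_at = raise_at      # accuracy at/above which difficulty increases
        self.lower_at = lower_at      # accuracy at/below which difficulty decreases
        self.min_level = min_level
        self.max_level = max_level
        self.recent = deque(maxlen=window)

    def record_trial(self, correct: bool) -> int:
        """Log one trial outcome; adjust difficulty once the window fills."""
        self.recent.append(correct)
        if len(self.recent) == self.window:
            accuracy = sum(self.recent) / self.window
            if accuracy >= self.raise_at:
                self.level = min(self.level + 1, self.max_level)
                self.recent.clear()   # restart evaluation at the new level
            elif accuracy <= self.lower_at:
                self.level = max(self.level - 1, self.min_level)
                self.recent.clear()
        return self.level
```

In a real training task, `record_trial` would be called after every response, so the difficulty the participant experiences is always a function of their own recent performance rather than a fixed schedule.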
Aside from the scientific benefits of computer-based cognitive training, there are also practical aspects that make such interventions highly desirable. Cognitive training tasks can be designed to be highly immersive and entertaining, so participants can remain motivated to engage regularly in the training regimen. Furthermore, training tasks are highly accessible, delivered on common and portable digital platforms—computers, tablets, smartphones—to an increasingly technology-savvy population of elderly individuals. This allows cognitive interventions to reach individuals in their homes or on the go, with minimal disruption to their daily lives. Lastly, computer-based training is inexpensive, especially when compared with long-term pharmaceutical treatments.
IS COMPUTER-BASED COGNITIVE TRAINING EFFECTIVE?
There are a number of studies pointing to the effectiveness of computer-based training interventions in the elderly. Studies commonly use existing video games that were not originally designed to improve cognition (e.g., Medal of Honor, Tetris, or Pac Man), a combination of cognitive tasks taken from commercial packages (e.g., Nintendo Brain Training or Brain Age; e.g., Clemenson & Stark, 2015; Bozoki, Radovanovic, Winn, Heeter, & Anthony, 2013; Maillot, Perrot, & Hartley, 2012; McDougall & House, 2012; Nouchi et al., 2012; Stern et al., 2011; Cassavaugh & Kramer, 2009; Basak, Boot, Voss, & Kramer, 2008), or tailored games designed specifically for training, such as NeuroRacer (e.g., Mishra et al., 2014; Anguera et al., 2013). Training regimens vary widely in the choice of tasks, as well as in the duration and schedule of training, the nature of control groups, and the pre- and posttraining assessment batteries. Furthermore, different studies measure the same cognitive function in their assessment battery (e.g., working memory) using different tasks. Together, these differences complicate the assessment of the effectiveness of cognitive training. Nevertheless, after controlling for some of these differences, a meta-analysis of 20 training studies in older participants revealed that training older adults with computer-based tasks does improve several cognitive functions that decline with aging, such as memory, attention, RT, and executive function, especially in the older age range (Toril, Reales, & Ballesteros, 2014).
The effectiveness of cognitive training is usually assessed by looking at changes in performance in untrained cognitive tasks, otherwise known as “transfer.” Transfer can occur to tasks that are close in function and tap into the same underlying cognitive processes as in the training, known as “near transfer.” For example, if participants were trained on an n-back working memory task, another type of working memory task such as operation span would be an example of near transfer. Alternatively, training can result in generalized improvement on tasks bearing little overlap to the task being trained, that is, “far transfer.” Examples of far transfer would be working memory training resulting in improvements on a language or reasoning task (for more information on the concept of transfer, please refer to Klahr & Chen, 2011; Zelinski, 2009).
There is some evidence supporting near transfer following cognitive interventions in older participants. Recent meta-analyses of working memory and executive function training reported small to medium effect sizes for near transfer, that is, transfer to other working memory and executive function measures (Karbach & Verhaeghen, 2014; Melby-Lervåg & Hulme, 2013). Similarly, Baniqued et al. (2014) found that 15 hr of training on video games that place high demands on attentional abilities (e.g., Sushi Go Round) resulted in enhanced divided attention compared with playing other forms of video games. RT improvements have also been reported following training on tasks that require fast-paced responses (Ballesteros et al., 2014; Edwards et al., 2005; Goldstein et al., 1997).
The evidence for far transfer, however, is much scarcer, with, at best, small effect sizes (Karbach & Verhaeghen, 2014). In fact, evidence for far transfer in older participants is so rare that some have described it as "almost entirely absent" (Noack, Lövdén, Schmiedek, & Lindenberger, 2009). One of the few studies reporting evidence for far transfer comes from the NeuroRacer multitasking training, which resulted in improved RTs in a sustained attention task as well as improved performance in a working memory task with or without distraction (Anguera et al., 2013).
But why is there such limited evidence of far transfer? Theoretically, far transfer refers to improvement in tasks that have no overlap with the trained cognitive function. Considering that cognitive training drives plasticity in neural systems that support a specific function, it would not be expected that cognitive functions that rely primarily on other neural systems should benefit. If we take physical exercise as a simple analogy, muscles that are not involved in a specific exercise will not benefit from that specific activity. Thus, measuring the effectiveness of training in terms of far transfer may not be theoretically sensible.
Furthermore, what gets labeled as far transfer in any given study is often based on arbitrary or crude criteria. For example, one may label a long-term memory task as far transfer following attention training, assuming that these two cognitive functions have very little in common in terms of both function and neural substrate. However, as our understanding of the neural mechanisms supporting various cognitive functions evolves, it is becoming clear that the boundaries between tasks do not mirror a similar distinction between their neuronal underpinnings. For example, training on NeuroRacer resulted in an increase in midline frontal theta power, which has also been implicated in other cognitive functions and may reflect a general change in processes related to cognitive control that can play a role within many task contexts (e.g., Mitchell, McNaughton, Flanagan, & Kirk, 2008). Therefore, what a few studies have labeled as far transfer may reflect some functional and neural overlap with the training regimen.
Instead of focusing on the generality of training regimens, measured through far transfer, we propose that future assessments of training effectiveness should emphasize specificity. Training interventions have the potential to accommodate individual differences in baseline cognitive abilities; one can optimally target the abilities that require improvement based on an individual's cognitive fingerprint. In fact, a few studies have shown that cognitive training may be more effective in individuals with low baseline ability (e.g., Whitlock, McLaughlin, & Allaire, 2012). Flexibility in personalized training interventions is thus the future of cognitive rehabilitation, both in healthy aging and in disease.
In addition, it may be possible to enhance the effectiveness of training by considering other factors that may interact with training, such as personality traits or motivation levels. For example, it has been shown in younger participants that those who believe in the malleability of intelligence and cognition show greater improvement following training than those who do not (Jaeggi, Buschkuehl, Shah, & Jonides, 2014). Although not measures of motivation per se, such beliefs can influence participants' motivation levels when undergoing cognitive training. Furthermore, a few studies have examined the effect of different personality traits on cognitive training. For example, openness to experience has been positively related to increased training gains (Double & Birney, 2016), with less consistent findings for other personality traits (Thompson et al., 2013; Studer-Luethi, Jaeggi, Buschkuehl, & Perrig, 2012). Future studies should thus not only focus on baseline individual cognitive abilities but also examine the effects of other factors that have been shown, to some extent, to interact with training gains. This should also include studies that use noninvasive neuroimaging techniques, such as fMRI, which have shown changes in resting-state functional connectivity in various neural networks following training (e.g., Cao et al., 2016; Strenziok et al., 2014; Kirchhoff, Anderson, Barch, & Jacoby, 2012). Baseline differences in connectivity and anatomy may also contribute to both the effectiveness and the magnitude of cognitive training gains.
Most published studies suggest some positive improvements after training. However, progress in our understanding of brain training tools has been constrained by the lack of a standardized methodological protocol. Inconsistent methodologies employed by different studies make comparison of different training tasks challenging and prevent us from identifying the most reliably effective components of a training regimen. There are a few methodological inconsistencies, explained below, which, once addressed, will enhance future advancement of this field.
Control Groups
The inclusion of well-matched control groups is an essential methodological component of training studies, because most cognitive training studies involve between-group designs. Control groups serve as a benchmark for measuring the effectiveness of the training and allow us to isolate and interpret the causes of any improvement produced by the intervention. The importance of a well-matched control group is underscored by the demonstration that apparent training gains can be produced by a placebo effect, using only suggestive recruitment flyers, in young individuals (Foroughi, Monfort, Paczynski, McKnight, & Greenwood, 2016). Unfortunately, however, many studies in the elderly have not employed well-matched control groups, hindering interpretation of their findings. A well-matched control group should minimally match the experimental training group in demographics, expectations about the treatment, motivation levels, time of engagement, challenge, and level of progression.
At one extreme, a few studies have neglected to include any control group (e.g., Ackerman, Kanfer, & Calderwood, 2010; Cassavaugh & Kramer, 2009). The absence of a control group makes it impossible to determine whether the observed effects are specific to the training regimen, a product of test–retest effects on the outcome measures, or a result of motivational or social factors, to name just a few possibilities.
To control for test–retest effects, many studies have employed a passive or no-contact control group (e.g., Belchior et al., 2013; Sosa, 2012; see Boot, Simons, Stothart, & Stutts, 2013, for discussion). This involves a group of participants who perform the pre- and postassessment battery but do not participate in any training or have any contact with the experimenter. Although these designs control for practice effects on the assessment battery, the level of social interaction with the investigators is not matched between the groups. This is problematic specifically because social interactions have been shown to have a protective effect against age-related cognitive decline (Glei et al., 2005; Zunzunegui, Alvarado, Del Ser, & Otero, 2003). Moreover, any change in cognitive abilities following training in this design does not control for the Hawthorne effect—the change in behavior due to participants' awareness of being observed (Noland, 1959; Roethlisberger, Dickson, Wright, & Pforzheimer, 1939). Furthermore, differences in motivational levels may be at play, and it is well recognized that heightened motivation can significantly enhance performance in multiple domains (e.g., Pessoa, 2009).
More recently, the importance of employing active control groups has been recognized. Individuals in active control groups undergo a training regimen that is well matched to the experimental training intervention in as many features as possible, for example, in duration, intensity, and frequency of training sessions, as well as interaction with researchers, except for the critical factor being manipulated. Although more studies are using active control groups, they often are not well matched to the training regimen. For example, Ballesteros et al. (2014) compared performance between a group of individuals trained on “Lumosity” brain training tasks against an active control group who met in the lab to discuss topics of general interest. The computer-based training regimen resulted in improved attention, memory, and processing speed compared with the control group—not surprising considering that the control group engaged in activities unrelated to these cognitive processes. One promising approach is to assess the effects of adaptive cognitive training against performance of individuals training on a nonadaptive version of the same training task.
Another, possibly optimal, approach is to use closely related adaptive tasks that specifically target different cognitive functions (A and B), so that the "training" and "control" groups become relative to one another. The two groups should share training environments that are maximally similar. The efficacy of training can then be verified using a double dissociation methodology: participants whose adaptive training targeted function A should show transfer to tasks that share function A, but not function B, whereas participants whose adaptive training targeted function B should show transfer to tasks that share function B, but not A. The study by Mishra et al. (2014) approximated this approach, targeting either distractor suppression or target discrimination by focusing the adaptive aspects of the training to challenge each process independently. In such a design, each training regimen acts as an active control for the other group while controlling for all the desired factors mentioned above.
Standardized Assessment Battery
A comprehensive, standardized, and sensitive battery of pre- and posttraining tasks is another essential methodological component of training studies. So far, a wide and unsystematic range of cognitive and neuropsychological tasks has been used to assess the efficacy of training tasks. Some studies have adopted standardized neuropsychological tests, which are often used to diagnose impairments in patient populations (e.g., Lee et al., 2013; Smith et al., 2009). Other studies have used a combination of publicly available assessment batteries, including CogState (Bozoki et al., 2013), NexAde (Peretz et al., 2011), or elements of the WAIS (McDougall & House, 2012; Drew & Waters, 1986). Still others have quantified the effectiveness of training according to functional outcome measures of daily living, such as the Everyday Problems Test, Activities of Daily Living (Willis et al., 2006; Ball et al., 2002), and Tests of Everyday Coping and Independent Living (Oswald, Gunzelmann, Rupprecht, & Hagen, 2006).
These types of tasks have different degrees of sensitivity and specificity. Some may be too blunt to detect changes in healthy populations (Elkana et al., 2015). Standard neuropsychological tasks and tasks of daily living also often combine many cognitive functions (Wasserman & Wasserman, 2012), making it hard to pinpoint the specific pattern of functions that have been modified. Moreover, the number and variety of tests used in assessment batteries differ dramatically across studies. At one extreme, some studies use a very limited set of tests and thus may miss training effects that are actually present. At the other, some studies evaluate training using a large number of tasks, increasing the probability of a false positive. Instead, the assessment battery should include sensitive tasks that are specific to the trained cognitive functions as well as tasks that are hypothesized to have minimal overlap with the trained cognitive functions. Developing a common, comprehensive, and sensitive core cognitive battery for assessing cognitive training across studies would be of great utility.
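The false-positive concern with large assessment batteries can be made concrete. Under the simplifying assumption of independent outcome tests each run at a significance level of α = .05 (real outcome measures are often correlated, so this is an upper-bound sketch), the chance of at least one spurious "transfer effect" grows quickly with the number of tasks:

```python
def familywise_error_rate(n_tests: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive across n independent tests."""
    return 1.0 - (1.0 - alpha) ** n_tests

def bonferroni_alpha(n_tests: int, alpha: float = 0.05) -> float:
    """Per-test threshold keeping the familywise error rate at or below alpha."""
    return alpha / n_tests

# With a 20-task battery and an uncorrected .05 threshold, there is
# roughly a 64% chance of at least one spurious "improvement".
print(round(familywise_error_rate(20), 2))  # 0.64
print(bonferroni_alpha(20))                 # per-test threshold of .0025
```

Correction procedures such as Bonferroni restore the familywise error rate but cost statistical power, which is another argument for compact, hypothesis-driven batteries rather than indiscriminately large ones.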
In addition to scientific methodological limitations, a few other practical factors can undermine the effective application of brain training. One such factor is low compliance: training studies in the elderly have been plagued by poor adherence to training schedules. For example, in a large-scale study by Owen and colleagues (2010), only 11,430 of the initial 52,617 participants completed a minimum of two 10-min training sessions during a 6-week training period. To ensure high compliance, future studies should also focus on improving the quality of the tasks, creating more engaging and easily accessible training regimens. For example, although action-based and first-person shooter games have yielded promising training effects on various cognitive functions in younger participants (e.g., Bejjanki et al., 2014; Green & Bavelier, 2012), 10 of 16 elderly individuals reported that they did not wish to engage in such a training task, and 50% of those who did reported that they did not enjoy the experience (McKay & Maki, 2010). This highlights the importance of tailoring training regimens to the preferences of older individuals. For example, a nonviolent strategy video game such as Rise of Nations has been employed successfully for training in older individuals (Basak et al., 2008).
Moreover, another factor that may improve compliance is employing personalized training regimens that target the cognitive abilities an individual needs to improve. This will likely motivate participants to engage in training, as it will be beneficial to each participant, especially in studies that adopt a double dissociation methodology (see Control Groups section).
Although these are possible solutions to the compliance problem, in some cases large sample sizes are required, for example, to examine the effect of personality traits on the effectiveness of training. To tackle such questions and overcome poor adherence to training schedules, studies will benefit from collaborations with the technology industry, which can help both with access to a large number of willing participants and with designing engaging and accessible training tasks.
Upgrading Cognitive Training
Going forward, the field of cognitive training should focus first on addressing the aforementioned methodological shortcomings. This includes the use of well-matched control groups and training procedures as well as the development of a core assessment battery. The assessment battery should involve sensitive and specific tasks that tap into functions both related and unrelated to the training, going beyond what is afforded by standard neuropsychological tasks. Developing such a battery will require the creation of novel measures and benchmarking against standardized tests. Furthermore, such a battery should ideally contain separate subgroups, each targeting a set of closely related cognitive functions, similar to commonly used neuropsychological measures (e.g., Addenbrooke's Cognitive Examination, ACE). This will allow researchers to select tasks that are closely related to the question at hand, minimizing the negative effects of fatigue and lack of motivation that can follow long assessment sessions. Moreover, gamification of cognitive tests will appeal to the ever-growing technology-savvy older population, allowing for more accessible and enjoyable cognitive fingerprinting of participants. Although there have been many attempts at creating such a battery (e.g., CANTAB or UFOV), the commercialized nature of these attempts limits their accessibility and flexibility.
Moreover, alongside in-depth cognitive assessment, measures of daily living should also be included, ideally in conjunction with low-friction measures of cognitive performance in real life. These measures will become increasingly available as scientific research forges links with the digital technology industry to use wearable devices to gain performance measures on important and relevant variables such as mobility and the extent of social interactions, as well as physiological measures, such as heart rate variability and sleep quality.
The nature of the cognitive training tasks should also be considered carefully. Often, tasks that have been used successfully to investigate the psychological and neural mechanisms of cognition are adapted or deployed for cognitive training. These, however, rarely provide effective tools for training, resulting in weak transfer effects at best, whether single tasks are used in isolation or multiple tasks are used in parallel (Ballesteros et al., 2014; Boot et al., 2013; Bozoki et al., 2013; McDougall & House, 2012; van Muijden, Band, & Hommel, 2012; Peretz et al., 2011; Dahlin, Neely, Larsson, Bäckman, & Nyberg, 2008).
An alternative approach has been to use training tasks that engage and coordinate multiple cognitive functions simultaneously. These tasks tend to come from more immersive gaming contexts rather than experimental task designs. This was demonstrated in one study that trained participants on a highly multifactorial game, which taxed divided attention, executive function, and, importantly, their effective coordination (Basak et al., 2008). This study and others employing similar training tasks that combine many cognitive domains (e.g., navigation in a rich virtual reality environment, tracking of multiple objects, attention switching, or goal-directed planning) provide effective training (e.g., Belchior et al., 2013; Basak et al., 2008). Although such training regimens are not easily adaptable to accommodate individual differences, they have demonstrated consistent training benefits. The secret to the success of these regimens may lie not only in their taxing of multiple cognitive functions but also in the specific type of functions that adapt with successful task performance, namely attention and cognitive control.
Cognitive functions related to attention and other aspects of cognitive control are not unitary, isolated domains but rather support and interact heavily with other cognitive domains (e.g., Mok, Myers, Wallis, & Nobre, 2016; Kuo, Stokes, & Nobre, 2012; Rohenkohl, Cravo, Wyart, & Nobre, 2012; Stokes, Atherton, Patai, & Nobre, 2012). As such, they can be thought of as providing an important scaffold for organizing and optimizing cognition across the board. Hence, a training regimen that focuses on these functions will not only have a specific effect on these functions alone but also have an impact on many other cognitive tasks.
Therefore, a complementary approach to training single cognitive functions in isolation is to train these scaffolding functions, such as sustaining and focusing attention, resisting distraction, and task switching. Taking this even further, training could also target the cooperation between cognitive functions to allow for a synergistic enhancement of cognition. Thus, to bolster a specific function, a training task should first target that function directly, through engaging, motivating, dynamic, and adaptive training. In complement, and equally importantly, it should promote cooperation of that function with other domains. For example, working memory training should not only focus on increasing the capacity to encode and maintain a greater number of items; it should also train the ability to protect memory representations from distractors and to prioritize flexibly the items held in memory.
In addition, it may be possible to boost the effects of cognitive training by combining it with physical training (see Hötting & Röder, 2013, for a review). Physical exercise influences synaptic plasticity, cell proliferation, and vascularization and is related to gray and white matter volumes in older adults (e.g., Sexton et al., 2016; Erickson, Gildengers, & Butters, 2013). Furthermore, a meta-analysis of 18 physical training studies with older adults revealed that physical fitness has a “robust” effect on cognition in the elderly (Colcombe & Kramer, 2003). Physical and cognitive training may therefore interact, so that together they are more beneficial than the summed effects of each type of training in isolation. A flurry of studies has started to investigate whether combined cognitive and physical training confers an additional advantage to cognitive functions in the elderly (e.g., Zheng et al., 2015; Shah et al., 2014; Maillot et al., 2012). However, only a few have directly compared combined cognitive-physical training with cognitive-only training interventions in the elderly (e.g., Rahe et al., 2015; Smiley-Oyen, Lowry, Francois, Kohut, & Ekkekakis, 2008; Oswald et al., 2006). With the availability of home-based gaming consoles such as the Nintendo Wii and Xbox Kinect, future studies can more easily test the effectiveness of combined cognitive and physical training.
As our understanding of the mechanisms of cognitive training grows and our methodology to apply and assess interventions is refined, it will become possible to apply tailored cognitive training interventions in patient populations. Although many studies have examined the effect of training in patients with, for example, mild cognitive impairment (e.g., Vermeij et al., 2016; Reijnders, van Heugten, & van Boxtel, 2013; Gates, Sachdev, Fiatarone Singh, & Valenzuela, 2011; Li et al., 2011), brain injury (e.g., Dymowski, Ponsford, & Willmott, 2016; Lindeløv et al., 2016), stroke (Wentink et al., 2016; Gamito et al., 2015), or Alzheimer's disease (Kim et al., 2016; Sitzer, Twamley, & Jeste, 2006), very few have been able to demonstrate clear training benefits. At present, the methodological limitations that plague training tasks in studies of healthy elderly volunteers also extend to studies involving patient populations. Often, problems are further exacerbated by the choice of training tasks, because not all tasks used with healthy elderly individuals can be performed by individuals with clinical symptoms, such as motor problems in the case of Parkinson's disease or severe memory problems in the case of Alzheimer's disease.
Computer-based training is still a young and emerging field. Unsurprisingly, there are many lessons to be learned before the field matures. The importance of, and costs posed by, cognitive decline in the elderly; the enormous impact that a simple lifestyle-based intervention can have on the lives of individuals as well as on the burden to society; and the inherent features that enable cognitive training to be readily accessible, targeted, personalized, and flexible all argue for continued investment in developing this line of research. The enterprise will benefit significantly from building collaborative partnerships between academic research and the information and technology industry to enable rapid innovation of platforms and applications that are accessible to and engage the general public. We are optimistic that methodological and conceptual breakthroughs on the horizon will enable cognitive training to realize its full potential to transform cognitive rehabilitation in both normal and abnormal aging.
This work was supported by the Wellcome Trust (A. C. N., 104571/Z/14/Z), the British Academy (N. Z., pf150057), and the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre based at Oxford University Hospitals NHS Foundation Trust and the University of Oxford.
Reprint requests should be sent to Nahid Zokaei, Oxford Centre for Human Brain Activity, Department of Psychiatry, University of Oxford, Oxford, OX3 7JX, United Kingdom, or via e-mail: firstname.lastname@example.org.