The fields of neuroscience and psychology are currently in the midst of a so-called reproducibility crisis, with growing concerns regarding a history of weak effect sizes and low statistical power in much of the research published in these fields over the last few decades. Whilst the traditional approach for addressing this criticism has been to increase participant sample sizes, there are many research contexts in which the number of trials per participant may be of equal importance. The present study aimed to compare the relative importance of participants and trials in the detection of phase-dependent phenomena, which are measured across a range of neuroscientific contexts (e.g., neural oscillations, non-invasive brain stimulation). This was achieved within a simulated environment in which we could manipulate the strength of this phase dependency in two types of outcome variables: one with normally distributed residuals (idealistic) and one comparable with motor-evoked potentials (an MEP-like variable). We compared the statistical power across thousands of experiments with the same total number of sessions per experiment but different proportions of participants and sessions per participant (30 participants × 1 session, 15 participants × 2 sessions, and 10 participants × 3 sessions), with the trials pooled across sessions for each participant. These simulations were performed for both outcome variables (idealistic and MEP-like) and four different effect sizes (0.075—“weak,” 0.1—“moderate,” 0.125—“strong,” 0.15—“very strong”), as well as separate control scenarios with no true effect. Across all scenarios with (true) discoverable effects, and for both outcome types, there was a statistical benefit for experiments maximising the number of trials rather than the number of participants (i.e., it was always beneficial to recruit fewer participants but have them complete more trials).
These findings emphasise the importance of obtaining sufficient individual-level data rather than simply increasing the number of participants.

Neuroscience and psychology studies often yield statistical powers well below the typically desired level of 80%, with one review reporting a range of only 8–30% (Button et al., 2013). This has led to concern regarding the reproducibility of findings reported in these fields, with the most damning evidence coming from the Open Science Collaboration (2015), which involved 100 replications of published psychology experiments and found that only 36% of those replications were successful. Whilst there are many factors that can influence replicability, in terms of both experimental design and statistical analysis, one of the most commonly raised concerns is the small sample sizes that have traditionally been employed by many studies in these fields (Turner et al., 2018). Conventional considerations for maximising statistical power focus on increasing the participant sample size, since recruiting a greater number of participants will invariably aid the detection of an experimental effect (Minarik et al., 2016; Mitra et al., 2019). In many research contexts, however, there may be value in considering the number of trials within an experimental paradigm (Baker et al., 2021; Normand, 2016; Smith & Little, 2018; Zoefel et al., 2019).

The relative importance of each of these factors for any given experiment depends on the amount of variability that is expected both within and between participants (Baker et al., 2021; Grice et al., 2017; Rouder & Haaf, 2018; Xu et al., 2018). For example, if a lot of variability is expected across trials for a given individual (i.e., when the measured variable is dynamic/unstable), there is obvious value in ensuring that sufficient data are collected from each individual so that the individual-level measures are accurate, rather than simply recruiting more individuals. In this scenario, it is important to remember that even with a sufficiently large participant sample size, the group-level analyses are unlikely to be meaningful if the measures at the individual level are inaccurate (Normand, 2016). However, if little variability is expected across trials (i.e., when the measured variable is static/stable), then there is less value in collecting a large number of trials for each participant, and resources can instead be allocated towards increasing the participant sample size.

There are many branches of neuroscience research that are particularly prone to considerable within- and between-subjects variability, one example of which is the field of non-invasive brain stimulation (NIBS). NIBS methods make promising tools for both studying and modulating brain function (Begemann et al., 2020; de Boer et al., 2021; Lewis et al., 2016; Vosskuhl et al., 2018) that have received increasing interest in recent years for their potential uses in both psychiatry (Elyamany et al., 2021; Piccoli et al., 2022; Vicario et al., 2019) and neurorehabilitation (Evancho et al., 2023; Qi et al., 2023; Yang et al., 2024). Unfortunately, however, studies involving NIBS have traditionally employed relatively small sample sizes (Button et al., 2013; Minarik et al., 2016; Mitra et al., 2019), which when combined with the aforementioned within- and between-subjects variability can lead to suboptimal analyses, inconsistent findings across studies, and ultimately criticism regarding the efficacy of these stimulation techniques (Lafon et al., 2017; Vöröslakos et al., 2018).

Most of this criticism has historically been directed towards low participant sample sizes (Mitra et al., 2019); however, as mentioned earlier, it has recently been suggested that in many cases, the trial sample size may be of equal, if not greater, importance (Baker et al., 2021; Normand, 2016; Rouder & Haaf, 2018; Smith & Little, 2018; Xu et al., 2018). For example, a recent simulation study by Zoefel et al. (2019) found that the most important of their experimental parameters for detecting phasic modulation of brain activity was indeed the number of trials per participant, and they, therefore, suggested that future studies should employ experimental designs with a relatively high number of trials. The authors also found that the most important of their neural parameters for detecting phasic effects was the hypothesised effect size. Optimising the trial sample size would, therefore, be of particular importance for studies investigating subtle (weak) phasic effects.

This brings us to transcranial alternating current stimulation (tACS), which is a form of NIBS that involves the application of a weak alternating electric current across the scalp at the same frequency as a particular neural oscillation in order to influence the underlying oscillatory activity (Bland & Sale, 2019). Converging evidence suggests that tACS can effectively modulate oscillatory brain activity in a frequency- and phase-dependent manner by entraining endogenous oscillations to match the frequency and phase of the exogenous stimulation (for review, see Wischnewski et al. (2023)). However, because tACS only probabilistically influences the spike timing of neuronal populations rather than directly causing those neurons to depolarise (Elyamany et al., 2021; Huang et al., 2017; Opitz et al., 2016; Vosskuhl et al., 2018), this phasic entrainment is often reported to be relatively weak (Gundlach et al., 2016; Neuling et al., 2012; Riecke, Formisano, et al., 2015; Riecke et al., 2018; Riecke, Sack, et al., 2015; Riecke & Zoefel, 2018; Wilsch et al., 2018; Zoefel et al., 2018).

One approach for investigating phasic entrainment by tACS is to collect transcranial magnetic stimulation (TMS)-induced motor-evoked potentials (MEPs) at different phases of tACS, referred to as phase-dependent TMS (Fehér et al., 2017, 2022; Nakazono et al., 2021; Raco et al., 2016; Schaworonkow et al., 2018, 2019; Schilberg et al., 2018; Zrenner et al., 2018, 2022). This allows corticospinal excitability to be probed across tACS phase without the interference of tACS artefacts, which are a major contaminant of neurophysiological recordings from electro-/magnetoencephalography (Kasten & Herrmann, 2019; Noury et al., 2016; Noury & Siegel, 2017). This technical advantage does come at a cost, however, as MEP amplitudes often exhibit considerable within-participant variability across trials (Capaday, 2021; Janssens & Sack, 2021) due to a complex combination of physiological (Gandevia & Rothwell, 1987; Hashimoto & Rothwell, 1999; Niyazov et al., 2005; Vidaurre et al., 2017; Zalesky et al., 2014) and experimental factors (Grey & van de Ruit, 2017). The combination of small effect sizes for tACS and high inter-trial variability for TMS, therefore, means that the MEP sample size needs to be sufficiently large in order to detect any modulation of the MEP amplitudes with respect to tACS phase. Crucially, however, the maximum number of MEPs that can be obtained in a single experiment session is constrained by several practical factors, such as the session length, the charge time between TMS pulses, and the gradual build-up of the TMS device’s temperature (which can eventually cause the device to overheat).

In a recent human study using slow-wave tACS (Geffen et al., 2021), we acquired a total of 240 MEPs within the tACS protocol of each session. Since these MEPs were acquired across 4 different epochs of the tACS protocol (early online, late online, early offline, and late offline), this gave us only 60 MEPs per epoch, per participant. Despite employing a participant sample size of 30, which is nearly 50% greater than the mean participant sample size from a recent meta-analysis of NIBS studies (~22; Mitra et al., 2019), the trial sample size could be considered low (e.g., the lowest number of trials was 192 in Zoefel et al.’s simulations in 2019). This low trial sample size may ultimately limit the interpretation of these results since it cannot be confirmed whether the lack of phasic effects was due to insufficient statistical power. Might these effects have been detectable had we instead recruited just 10 participants and had them return for three sessions?

One possible solution to this limit on MEP acquisition is to perform multiple sessions for each participant and then pool the trial data across sessions. However, given that most studies have resource and/or time constraints that limit their experimental hours, this approach would come at the cost of participant sample size. Although the simulations performed by Zoefel et al. (2019) demonstrated that trial sample size is a crucial experimental parameter for detecting phasic effects, those simulations kept the participant sample size constant across all experiments. The relationship between participant and trial sample size for detecting phasic effects thus remains to be established, and it is unclear whether the gain in statistical power from increasing the trial sample size would outweigh the loss of statistical power from decreasing the participant sample size. We therefore performed a simulation study to quantify the statistical power of experiments that have the same total number of sessions per experiment but different proportions of participants and sessions per participant. In this manner, the theoretical burden to the researcher (in terms of experimental hours) is matched in each scenario: either a greater number of individuals contribute fewer experimental sessions each, or fewer individuals contribute multiple sessions (i.e., a greater number of trials per participant, pooled across sessions).

2.1 Simulation scenarios

All simulations were performed in MATLAB (R2020b). For each scenario, we defined the type of data (“idealistic” or “MEP-like”), mean effect size (0.075—“weak,” 0.1—“moderate,” 0.125—“strong,” or 0.15—“very strong”), number of participants (30, 15, or 10), sessions per participant (1, 2, or 3 sessions for 30, 15, and 10 participants, respectively), trials per session (60), number of experiments (1000), and the relative degree of between- and within-subjects variability (low, medium, or high).

The total number of sessions and the trials per session were chosen based on the participant and trial sample sizes used in our recent slow-wave tACS study (see above; Geffen et al., 2021). The effect sizes do not reflect a traditional effect size value such as a Cohen’s d value, but rather determine either the amplitude of the base sine function (in the case of the idealistic data) or the variance of a normal distribution around the base sine function (in the case of the MEP-like data). The mean values for each effect size category were chosen from pilot testing to provide sinusoidal data with a good spread of statistical power: from weak effects (e.g., 20% power) to strong effects (e.g., 90% power). As a negative control, we performed a separate test with no effect of phase for any of the sessions.
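
For concreteness, the full scenario grid described above can be sketched as follows. This is an illustrative Python enumeration (variable names are hypothetical); the authors' actual simulations were run in MATLAB.

```python
from itertools import product

# Illustrative enumeration of the simulation scenario grid
data_types = ["idealistic", "MEP-like"]
effect_sizes = {"weak": 0.075, "moderate": 0.1,
                "strong": 0.125, "very strong": 0.15}
designs = [(30, 1), (15, 2), (10, 3)]          # (participants, sessions each)
variability_levels = ["low", "medium", "high"]

scenarios = list(product(data_types, effect_sizes.items(), designs,
                         variability_levels,   # between-subjects variability
                         variability_levels))  # within-subjects variability

# 2 data types x 4 effect sizes x 3 designs x 3 x 3 variability levels
# = 216 scenarios, plus the separate no-effect controls
```

Note that every design contributes the same 30 total sessions (30 × 1, 15 × 2, 10 × 3), which is what matches the experimental burden across scenarios.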

2.2 Determining effect sizes for each session

For each scenario, separate effect sizes were first generated for each participant that ranged around the mean effect size for the chosen effect size category (i.e., 0.075—“weak,” 0.1—“moderate,” 0.125—“strong,” or 0.15—“very strong”). The range for the participant effect sizes was set to either ±20%, 60%, or 100% around this mean value for low, medium, and high between-subjects variability, respectively. The effect sizes for each participant were then jittered slightly between their individual sessions by either ±10%, 20%, or 30% around the value from their first session for low, medium, and high within-subjects variability, respectively.
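
A minimal Python sketch of this two-stage sampling is given below. The use of uniform distributions for both the between-subjects spread and the within-subjects jitter is our assumption; the paper specifies only the ranges.

```python
import numpy as np

def session_effect_sizes(mean_es, n_participants, n_sessions,
                         between_range, within_range, rng):
    """Two-stage sampling of per-session effect sizes: participant-level
    values spread around the category mean, then jittered across sessions.
    Uniform sampling is an assumption; the paper specifies only the ranges."""
    # Between-subjects spread, e.g. between_range = 0.6 for +/-60% (medium)
    participant_es = mean_es * (1 + rng.uniform(-between_range, between_range,
                                                n_participants))
    # Sessions 2+ are jittered around each participant's session-1 value,
    # e.g. within_range = 0.2 for +/-20% (medium)
    es = np.tile(participant_es[:, None], (1, n_sessions))
    jitter = rng.uniform(-within_range, within_range,
                         (n_participants, n_sessions - 1))
    es[:, 1:] = participant_es[:, None] * (1 + jitter)
    return es

rng = np.random.default_rng(0)
es = session_effect_sizes(0.1, 10, 3, between_range=0.6,
                          within_range=0.2, rng=rng)   # 10 x 3 array
```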

2.3 Generating sinusoidal data

The effect sizes for each session were then used to generate sinusoidal data with continuous (i.e., randomly sampled) phase values (see Fig. 1). For the idealistic data (Fig. 1A), the effect size value scales the sine function to peak at the specified amplitude, with the data points being normally distributed around the base sine function. For the MEP-like data (Fig. 1B), however, the effect size scales the variance of a normal distribution around the base sine function rather than scaling the amplitude of the sine function itself, since the MEP-like data explicitly violate the assumptions of homoscedasticity and normality. The distribution for the MEP-like data is then folded to create a positive skew. Like MEP values, these data points, therefore, could not be negative and instead became more variable and positive around the “peak” of tACS.
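
The two generative models can be sketched as follows. The scaling constants (`noise_sd`, `base_sd`) are illustrative assumptions, as is the specific choice of a folded normal for the MEP-like data; the paper describes the folding qualitatively but does not give exact constants.

```python
import numpy as np

def simulate_session(effect_size, n_trials, rng, kind="idealistic",
                     noise_sd=0.1, base_sd=0.2):
    """One session of simulated trials with continuous (random) phases.
    noise_sd and base_sd are illustrative constants, not the paper's."""
    phase = rng.uniform(0, 2 * np.pi, n_trials)   # continuous phase values
    base = np.sin(phase)
    if kind == "idealistic":
        # Effect size scales the sine amplitude; Gaussian residuals around it
        y = effect_size * base + rng.normal(0, noise_sd, n_trials)
    else:
        # MEP-like: effect size scales the spread of a normal distribution
        # around the base sine; folding (absolute value) makes the values
        # positive and positively skewed, most variable near the sine peak
        sd = base_sd * (1 + effect_size * base)   # sd > 0 for all es <= 0.15
        y = np.abs(rng.normal(0.0, sd))
    return phase, y
```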

Fig. 1.

Examples of Simulated Idealistic (A) and MEP-like (B) data with Fitted Sinusoidal Models for the No Effect (Top), Weak Effect (Middle), and Very Strong Effect (Bottom) Scenarios. Individual data points are shown in black, whilst fitted sinusoidal models are shown in red. The idealistic data are normally distributed around the base sine function, with the effect size scaling the sine function to peak at the specified amplitude. The MEP-like data are not normally distributed around the base sine function, and so the effect size scales the variance of a normal distribution around the base sine function rather than scaling the base sine function itself.


2.4 Analysing the simulated data

The idealistic/MEP-like data and the phase values were then pooled across the individual sessions for each participant, and a linear regression permutation analysis was performed to calculate the individual p-values for each participant (Bland & Sale, 2019; Zoefel et al., 2019). The linear regression approach outlined in Bland and Sale (2019) was found to be the most sensitive when paired with permutation analysis (Zoefel et al., 2019). Baker et al. (2021) also found that many paradigms in which the dependent variable is derived by model fitting show continual improvements in power as the number of trials increases, rather than reaching an asymptote beyond which further trials provide no additional statistical power, as was the case for paradigms in which the dependent variable was derived by other means. Furthermore, by model fitting at the individual level, the individual becomes the replication unit instead of the group (i.e., each participant can be viewed as an independent replication of the experiment; Smith & Little, 2018).

For this linear regression analysis, an ideal (best-fitting) sinusoidal model is first fitted to each simulated participant’s data points based on their corresponding phase value as described in Bland and Sale (2019). The data points are then shuffled with respect to their phases for a total of 10000 permutations per participant and new sinusoidal models are fitted to the shuffled data. The true and shuffled sinusoidal model amplitudes are then compared, with the individual p-values representing the proportion of shuffled model amplitudes that exceeded the true model amplitudes for each participant. Because the permutation procedure disrupts any phasic effects that may be present, the shuffled model amplitudes should be small (i.e., closer to zero) and thus, the shuffled data act as a negative control for the “true” data. The group p-value for the experiment was then obtained by combining the individual p-values using Fisher’s method (Fisher, 1992; Zoefel et al., 2019).
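
The analysis pipeline above can be sketched in Python as follows. Parameterising the sinusoid as a linear regression on sin/cos predictors (amplitude = sqrt(a² + b²)) is one standard way to obtain the best-fitting sinusoid; the exact fitting procedure of Bland and Sale (2019) may differ in detail.

```python
import numpy as np
from scipy.stats import chi2

def sine_amplitude(phase, y):
    """Amplitude of the best-fitting sinusoid, obtained by linear
    regression on sin/cos predictors (one standard parameterisation)."""
    X = np.column_stack([np.sin(phase), np.cos(phase), np.ones_like(phase)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.hypot(coef[0], coef[1]))

def participant_pvalue(phase, y, n_perm=10000, rng=None):
    """Permutation p-value: the proportion of shuffled-data model
    amplitudes that reach or exceed the true model amplitude."""
    if rng is None:
        rng = np.random.default_rng()
    true_amp = sine_amplitude(phase, y)
    shuffled = np.array([sine_amplitude(phase, rng.permutation(y))
                         for _ in range(n_perm)])
    return float(np.mean(shuffled >= true_amp))

def fisher_group_p(pvals):
    """Combine individual p-values into a group p-value (Fisher's method)."""
    pvals = np.clip(np.asarray(pvals), 1e-12, 1.0)   # guard log(0)
    stat = -2.0 * np.sum(np.log(pvals))
    return float(chi2.sf(stat, df=2 * len(pvals)))
```

Under the null hypothesis, -2 * sum(ln p) over k participants follows a chi-square distribution with 2k degrees of freedom, which is what `fisher_group_p` evaluates.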

2.5 Calculating statistical power for each scenario

This process was repeated for a total of 1000 experiments per scenario, and the statistical power for the scenario was calculated by dividing the number of experiments with a significant group p-value (i.e., p < .05) by the total number of experiments. For each combination of effect size and data type, the powers were averaged across the different degrees of between-/within-subjects variability to form the power values for the primary analyses, whilst the complete power values before averaging have been included as Supplementary Material.

2.6 Comparing statistical powers between scenarios

To determine whether the proportion of participants vs. trials significantly influenced the predicted powers, chi-square goodness-of-fit tests were performed for each combination of data type (idealistic or MEP-like) and effect size (no effect, weak, moderate, strong, or very strong).

To determine whether any combination of parameters was susceptible to over- or under-sensitivity (i.e., whether any of the powers for the no-effect condition were significantly greater or lower than the conventional expected false-positive rate of .05), a binomial test was performed to establish the minimum and maximum thresholds for false-positive experiments in the no-effect simulations.
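
The threshold search can be sketched as below. Treating the thresholds as the most extreme counts whose exact two-sided binomial p-value remains above .05 is our assumption about how they were derived.

```python
from scipy.stats import binomtest

n_experiments, alpha = 1000, 0.05
# Counts of "significant" experiments still consistent with a true
# false-positive rate of .05 under an exact two-sided binomial test
consistent = [k for k in range(n_experiments + 1)
              if binomtest(k, n_experiments, alpha).pvalue > .05]
lower, upper = min(consistent), max(consistent)
# lower/upper bracket the expected count of 50 out of 1000
```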

The power values for each scenario (i.e., the proportion of significant experiments for each combination of effect size and number of participants/trials) are summarised in Table 1 and Figure 2. The different proportions of number of participants and trials (i.e., 30 participants × 1 session, 15 participants × 2 sessions, and 10 participants × 3 sessions) were then compared using chi-square goodness-of-fit tests (separately for each effect size and data type), which revealed significant improvements in statistical power as the number of trials increased (i.e., despite the corresponding decreases in the number of participants). These improvements in power were present across all effect sizes, except for the control simulations with no effect, for both the idealistic (χ2 = 0.94, 23.57, 51.76, 73.88, and 54.18 for no effect, weak, moderate, strong, and very strong, respectively; p = .624 for no effect and p < .001 for all other effect sizes) and MEP-like data (χ2 = 0.05, 62.25, 86.53, 42.81, and 11.58, respectively; p = .974 for no effect, p < .001 for weak, moderate, and strong effects, and p = .003 for very strong effect).
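
As a consistency check, one of the reported chi-square statistics can be reproduced from the mean counts implied by Table 1 (mean power × 1000 experiments), shown here for the idealistic data with a weak effect:

```python
from scipy.stats import chisquare

# Mean counts of significant experiments (out of 1000) per design,
# read off Table 1 for the idealistic data, weak effect (.13, .176, .221)
observed = [130, 176, 221]     # 30 x 1, 15 x 2, 10 x 3 sessions
stat, p = chisquare(observed)  # H0: equal counts across the three designs
```

This yields the reported χ2 of 23.57 with p < .001, confirming that the goodness-of-fit tests compared counts of significant experiments against a uniform expectation across the three designs.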

Table 1.

Mean estimated statistical powers for linear regression analyses using simulated idealistic or MEP-like sinusoidal data.

Idealistic                      Effect size
                                No effect   Weak (0.075)   Moderate (0.1)   Strong (0.125)   Very strong (0.15)
30 participants × 1 session     .048        .13            .225             .358             .541
15 participants × 2 sessions    .053        .176           .338             .542             .731
10 participants × 3 sessions    .058        .221           .406             .626             .807

MEP-like                        Effect size
                                No effect   Weak (0.075)   Moderate (0.1)   Strong (0.125)   Very strong (0.15)
30 participants × 1 session     .049        .196           .381             .634             .836
15 participants × 2 sessions    .051        .317           .587             .823             .948
10 participants × 3 sessions    .049        .387           .683             .881             .973

Each value represents the mean statistical power (i.e., the proportion of simulated experiments with a significant group p-value) from nine simulations with varying degrees of between- and within-subjects variability, each simulation comprising 1000 simulated experiments. Each experiment consisted of 30 sessions (60 trials each), which were divided into either 1, 2, or 3 sessions per participant, with the trials then being pooled across sessions for each participant. For all effect sizes (except for the control simulations with no effect), simulations with fewer participants but more sessions per participant showed significantly greater statistical powers compared with experiments with more participants but fewer sessions per participant. These improvements in power were present for both the idealistic (chi-square goodness-of-fit tests; χ2 = 0.94, 23.57, 51.76, 73.88, and 54.18 for no effect, weak, moderate, strong, and very strong, respectively; p = .624 for no effect and p < .001 for all other effect sizes) and MEP-like data (χ2 = 0.05, 62.25, 86.53, 42.81, and 11.58, respectively; p = .974 for no effect, p < .001 for weak, moderate, and strong effects, and p = .003 for very strong effect).

Fig. 2.

Estimated Statistical Powers for Linear Regression Analyses Using Simulated Idealistic or MEP-like Sinusoidal Data. Simulations using idealistic sinusoidal data are shown on the left, whilst simulations using MEP-like data are shown on the right. Each value represents the mean statistical power (i.e., the proportion of simulated experiments with a significant group p-value) from nine simulations with varying degrees of between- and within-subjects variability, each simulation comprising 1000 simulated experiments. Each experiment consisted of 30 sessions (60 trials each), which were divided into either 1, 2, or 3 sessions per participant, with the trials then being pooled across sessions for each participant. Each colour represents a different minimum effect size (blue = no effect, orange = weak, yellow = moderate, purple = strong, green = very strong). For all effect sizes (except for the control simulations with no effect), simulations with fewer participants but more sessions per participant showed significantly greater statistical powers compared with experiments with more participants but fewer sessions per participant. These improvements in power were present for both the idealistic (chi-square goodness-of-fit tests; χ2 = 0.94, 23.57, 51.76, 73.88, and 54.18 for no effect, weak, moderate, strong, and very strong, respectively; p = .624 for no effect and p < .001 for all other effect sizes) and MEP-like data (χ2 = 0.05, 62.25, 86.53, 42.81, and 11.58, respectively; p = .974 for no effect, p < .001 for weak, moderate, and strong effects, and p = .003 for very strong effect).


The sensitivity of each combination of parameters to false positives was assessed using a binomial test, which found that the minimum threshold for under-sensitivity was 39 out of 1000 experiments (.039), with p = .064. The maximum threshold for over-sensitivity was 61 out of 1000 experiments (.061), again with p = .064. All of the powers from the no effect simulations fell within these thresholds, and thus, no combination of parameters was deemed to be over- or under-sensitive.

Neuroscience and, in particular, NIBS studies are often criticised for their relatively low statistical powers (Button et al., 2013), which have traditionally been attributed to low participant sample sizes (Lafon et al., 2017; Minarik et al., 2016; Mitra et al., 2019; Vöröslakos et al., 2018). However, it has recently been suggested that in many cases, such as when detecting phasic effects on brain activity, the trial sample size for each participant may be of equal, if not greater, importance than the participant sample size itself (Baker et al., 2021; Grice et al., 2017; Normand, 2016; Rouder & Haaf, 2018; Smith & Little, 2018; Xu et al., 2018; Zoefel et al., 2019). The present simulation study aimed to directly compare the relative importance of participant sample size and trial sample size per participant for detecting phasic effects of NIBS via a linear regression permutation analysis. To this end, we compared the statistical powers of simulated experiments with the same number of total experiment sessions (30) but different proportions of participants and number of sessions per participant (30 participants × 1 session, 15 participants × 2 sessions, and 10 participants × 3 sessions), with the trials being pooled across sessions for each participant. These simulations were performed for two types of outcome variables (idealistic and MEP-like) and four different effect sizes (0.075—weak, 0.1—moderate, 0.125—strong, 0.15—very strong), as well as a separate control with no true effect. The chi-square goodness-of-fit tests revealed that for both data types and all effect sizes (except for the control simulations with no effect), experiments with fewer participants but more sessions (i.e., more trials) per participant showed significantly greater statistical powers compared with experiments with more participants but fewer sessions per participant, supporting our initial hypothesis.
Further, the binomial test confirmed that none of the simulated experiments were susceptible to over- or under-sensitivity.

In the case of the idealistic data, the benefit of trials over participants appears to increase as the effect size increases. This is particularly relevant in the context of NIBS research, which has a history of weak effect sizes that may still be prone to type II errors even with optimised numbers of trials and participants (Button et al., 2013; Gundlach et al., 2016; Neuling et al., 2012; Riecke, Formisano, et al., 2015; Riecke et al., 2018; Riecke, Sack, et al., 2015; Riecke & Zoefel, 2018; Wilsch et al., 2018; Zoefel et al., 2018). Despite this, our results clearly suggest that the benefits of trials over participants remain significant even at weaker effect sizes. For the MEP-like data, however, the benefit appears to be of a similar magnitude irrespective of effect size. Therefore, whatever capacity tACS has to modulate MEP amplitudes in a phasic manner, there is a benefit to sampling more trials rather than more participants.

There are several factors to consider when designing an experiment involving multiple sessions per participant, since this design can introduce some practical issues. The first, and perhaps most impactful, of these issues is dropout/attrition: partially completed datasets have to be abandoned if participants withdraw from the study before completing all of their sessions. It is also important to consider the length of time between consecutive sessions, both for the participants’ safety and to minimise any carryover effects between sessions (Alharbi et al., 2017; Brunoni & Fregni, 2011). On a similar note, if the experiment involves a task where performance is quantitatively assessed, it is important to consider the possibility of training effects that may occur as the participant gains more practice with the task over repeated sessions. Finally, researchers should strive to minimise any other controllable variables between each participant’s sessions (time of day, caffeine intake, etc.). The severity of these challenges only worsens as the number of sessions per participant increases, and so researchers should consider the ideal number of sessions per participant that minimises these practical issues whilst still achieving the desired trial sample size. In some cases, it is possible to achieve a larger number of trials by increasing the length of each session, thus reducing the number of sessions needed to achieve the desired trial sample size. However, this too involves practical issues that need to be considered, such as participant and/or experimenter fatigue resulting in low-quality data.

Despite the challenges associated with performing multiple sessions per participant, the results of these simulations suggest that the increases in statistical power are worth the small cost of these additional challenges. Furthermore, performing multiple sessions per participant also offers some practical advantages. For example, the setup time at the start of each session is generally reduced after a participant has completed at least one session, since the participant is already familiar with the procedure and equipment. Another advantage in subsequent sessions that is more specific to TMS is that it is easier to determine both the location of the participant’s “hot-spot” (Rossini et al., 1994) for the targeted muscle and the stimulation intensity required to consistently induce MEPs in that muscle that are around the desired baseline amplitude (e.g., 1 mV; Cuypers et al., 2014; Ogata et al., 2019; Thies et al., 2018).

It is important to note that the simulations performed in this study assume that each participant's effect size is approximately similar across their sessions (though this variability was manipulated across several plausible ranges for completeness). There may also be differences in data quality between sessions, the impact of which on statistical power was not directly assessed in the current study. Note, however, that whilst consistency across sessions is relevant for assessing the amplitude of a sinusoidal effect, it is less relevant for the frequency of the effect, since phase is defined equivalently at any frequency. This means that individual sessions can still be pooled by phase even if the frequency of the sinusoidal effect differs slightly between sessions.
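The pooling-by-phase argument above can be illustrated with a minimal sketch (in Python rather than the study's MATLAB; the amplitudes, frequencies, and trial counts below are arbitrary illustrative values, not those of the study). Each trial's time stamp is converted to phase using its own session's frequency, after which the sessions share a common phase axis and a single sinusoid fit can be applied to the pooled trials:

```python
import numpy as np

rng = np.random.default_rng(0)

def trial_phases(times, freq):
    """Phase (radians, in [0, 2*pi)) of each trial time at a given frequency."""
    return (2 * np.pi * freq * times) % (2 * np.pi)

amp, noise_sd = 0.5, 1.0           # illustrative effect amplitude and trial noise
pooled_phase, pooled_y = [], []
for freq in (1.00, 1.05):          # slight frequency difference between sessions
    t = rng.uniform(0, 60, size=300)   # 300 trial times (s) within the session
    phase = trial_phases(t, freq)
    y = amp * np.sin(phase) + rng.normal(0, noise_sd, size=phase.size)
    pooled_phase.append(phase)
    pooled_y.append(y)

phase = np.concatenate(pooled_phase)
y = np.concatenate(pooled_y)

# A single least-squares sinusoid fit on the pooled phases recovers the
# common amplitude despite the between-session frequency difference.
X = np.column_stack([np.sin(phase), np.cos(phase), np.ones_like(phase)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
est_amp = np.hypot(beta[0], beta[1])
```

Because both sessions are expressed in phase rather than time, the frequency mismatch never enters the fit; only a between-session difference in *amplitude* (or preferred phase) would bias the pooled estimate.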

The results of this simulation study highlight the importance of trial sample size for detecting phasic effects on brain activity. Our findings suggest that if there are limitations to the number of trials that can be obtained in a single experiment session (such as for phase-dependent TMS), conducting repeated experiment sessions on a smaller number of participants can be a useful strategy that allows researchers to obtain sufficiently large trial sample sizes and ensure accurate estimation and model fitting at the individual level. Although our simulation was originally designed in the context of our recent human tACS study (Geffen et al., 2021), effects of oscillatory phase can be found across a wide range of biomedical research fields, including but not limited to chronobiology (Kuhlman et al., 2018), biochemistry (Uhlén & Fritz, 2010), and cardiology (Tiwari et al., 2021). Further, the considerations regarding trial sample size vs. participant sample size are applicable to a wide variety of non-oscillatory experimental approaches in human neuroscience, psychology, and physiology. We, therefore, invite researchers to utilise the simulation code provided to aid in estimating statistical power for their own experimental designs using different proportions of participants and trials/sessions.
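As a rough illustration of this participant-vs-trial trade-off, the toy power simulation below (a hypothetical Python sketch, not the published MATLAB pipeline; effect size, noise, and trial counts are arbitrary) fits a sinusoid per participant, sums the per-participant 2-df chi-square statistics across participants (for a 2-df test this is essentially Fisher's combined probability method), and estimates power for a 30 × 1 versus a 10 × 3 design with the same total number of trials:

```python
import numpy as np

rng = np.random.default_rng(7)

def chi2_crit95(df):
    """Upper 5% chi-square quantile via the Wilson-Hilferty approximation."""
    z = 1.6449  # standard-normal 95th percentile
    return df * (1 - 2 / (9 * df) + z * np.sqrt(2 / (9 * df))) ** 3

def experiment_significant(n_participants, n_sessions, trials_per_session=100,
                           amp=0.1, noise_sd=1.0):
    """One simulated experiment: trials are pooled across each participant's
    sessions, a sinusoid is fit per participant, and the per-participant
    chi-square statistics are summed for the group-level test."""
    total = 0.0
    for _ in range(n_participants):
        n = n_sessions * trials_per_session       # trials pooled across sessions
        phase = rng.uniform(0, 2 * np.pi, n)
        y = amp * np.sin(phase) + rng.normal(0, noise_sd, n)
        X = np.column_stack([np.sin(phase), np.cos(phase), np.ones(n)])
        beta, sse, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid_var = sse[0] / (n - 3)
        ss_model = np.sum((X @ beta - y.mean()) ** 2)
        total += ss_model / resid_var             # ~ chi-square(2) under the null
    return total > chi2_crit95(2 * n_participants)

n_sim = 300  # simulated experiments per design; both designs use 3000 trials
power_30x1 = np.mean([experiment_significant(30, 1) for _ in range(n_sim)])
power_10x3 = np.mean([experiment_significant(10, 3) for _ in range(n_sim)])
```

Under these assumptions the 10 × 3 design reaches noticeably higher power than the 30 × 1 design despite the identical trial total, because the combined test spends fewer degrees of freedom on participants whose individual fits are better constrained.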

The MATLAB code used to perform the simulations in this study is available at https://osf.io/vwysk/ (https://doi.org/10.17605/OSF.IO/VWYSK).

A.G.: Conceptualisation, Methodology, Software, Formal Analysis, Investigation, Data Curation, Writing—Original Draft, Visualisation. N.B.: Conceptualisation, Methodology, Software, Data Curation, Writing—Review & Editing, Visualisation, Supervision. M.V.S.: Conceptualisation, Methodology, Validation, Resources, Writing—Review & Editing, Project Administration, Funding Acquisition, Supervision.

This work was supported by the US Office of Naval Research Global [grant number N62909-17-1-2139] awarded to M.V.S. The funding body had no involvement in the study design; the collection, analysis, and interpretation of data; the writing of the report; or the decision to submit the article for publication.

The authors have no relevant financial or non-financial interests to disclose.

We would like to thank Simoné Reinders, as her honours thesis was a major source of inspiration for the design of this study.

Supplementary material for this article is available with the online version here: https://doi.org/10.1162/imag_a_00345.

Alharbi, M. F., Armijo-Olivo, S., & Kim, E. S. (2017). Transcranial direct current stimulation (tDCS) to improve naming ability in post-stroke aphasia: A critical review. Behavioural Brain Research, 332, 7–15. https://doi.org/10.1016/j.bbr.2017.05.050

Baker, D. H., Vilidaite, G., Lygo, F. A., Smith, A. K., Flack, T. R., Gouws, A. D., & Andrews, T. J. (2021). Power contours: Optimising sample size and precision in experimental psychology and human neuroscience. Psychological Methods, 26(3), 295–314. https://doi.org/10.1037/met0000337

Begemann, M. J., Brand, B. A., Ćurčić-Blake, B., Aleman, A., & Sommer, I. E. (2020). Efficacy of non-invasive brain stimulation on cognitive functioning in brain disorders: A meta-analysis. Psychological Medicine, 50(15), 2465–2486. https://doi.org/10.1017/S0033291720003670

Bland, N. S., & Sale, M. V. (2019). Current challenges: The ups and downs of tACS. Experimental Brain Research, 237(12), 3071–3088. https://doi.org/10.1007/s00221-019-05666-0

Brunoni, A. R., & Fregni, F. (2011). Clinical trial design in non-invasive brain stimulation psychiatric research. International Journal of Methods in Psychiatric Research, 20(2), e19–30. https://doi.org/10.1002/mpr.338

Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. https://doi.org/10.1038/nrn3475

Capaday, C. (2021). On the variability of motor-evoked potentials: Experimental results and mathematical model. Experimental Brain Research, 239(10), 2979–2995. https://doi.org/10.1007/s00221-021-06169-7

Cuypers, K., Thijs, H., & Meesen, R. L. J. (2014). Optimization of the transcranial magnetic stimulation protocol by defining a reliable estimate for corticospinal excitability. PLoS One, 9(1), e86380. https://doi.org/10.1371/journal.pone.0086380

de Boer, N. S., Schluter, R. S., Daams, J. G., van der Werf, Y. D., Goudriaan, A. E., & van Holst, R. J. (2021). The effect of non-invasive brain stimulation on executive functioning in healthy controls: A systematic review and meta-analysis. Neuroscience & Biobehavioral Reviews, 125, 122–147. https://doi.org/10.1016/j.neubiorev.2021.01.013

Elyamany, O., Leicht, G., Herrmann, C. S., & Mulert, C. (2021). Transcranial alternating current stimulation (tACS): From basic mechanisms towards first applications in psychiatry. European Archives of Psychiatry and Clinical Neuroscience, 271(1), 135–156. https://doi.org/10.1007/s00406-020-01209-9

Evancho, A., Tyler, W. J., & McGregor, K. (2023). A review of combined neuromodulation and physical therapy interventions for enhanced neurorehabilitation. Frontiers in Human Neuroscience, 17, 1151218. https://doi.org/10.3389/fnhum.2023.1151218

Fehér, K. D., Nakataki, M., & Morishima, Y. (2017). Phase-dependent modulation of signal transmission in cortical networks through tACS-induced neural oscillations. Frontiers in Human Neuroscience, 11, 471. https://doi.org/10.3389/fnhum.2017.00471

Fehér, K. D., Nakataki, M., & Morishima, Y. (2022). Phase-synchronized transcranial alternating current stimulation-induced neural oscillations modulate cortico-cortical signaling efficacy. Brain Connectivity, 12(5), 443–453. https://doi.org/10.1089/brain.2021.0006

Fisher, R. A. (1992). Statistical methods for research workers. In S. Kotz & N. L. Johnson (Eds.), Breakthroughs in statistics: Methodology and distribution (pp. 66–70). Springer New York. https://doi.org/10.1007/978-1-4612-4380-9_6

Gandevia, S. C., & Rothwell, J. C. (1987). Activation of the human diaphragm from the motor cortex. The Journal of Physiology, 384, 109–118. https://doi.org/10.1113/jphysiol.1987.sp016445

Geffen, A., Bland, N., & Sale, M. V. (2021). Effects of slow oscillatory transcranial alternating current stimulation on motor cortical excitability assessed by transcranial magnetic stimulation. Frontiers in Human Neuroscience, 15, 726604. https://doi.org/10.3389/fnhum.2021.726604

Grey, M., & van de Ruit, M. (2017). P085 MEP variability associated with coil pitch and roll using single-pulse TMS. Clinical Neurophysiology, 128(3), e50. https://doi.org/10.1016/j.clinph.2016.10.210

Grice, J., Barrett, P., Cota, L., Felix, C., Taylor, Z., Garner, S., Medellin, E., & Vest, A. (2017). Four bad habits of modern psychologists. Behavioral Sciences, 7(3), 53. https://doi.org/10.3390/bs7030053

Gundlach, C., Müller, M. M., Nierhaus, T., Villringer, A., & Sehm, B. (2016). Phasic modulation of human somatosensory perception by transcranially applied oscillating currents. Brain Stimulation, 9(5), 712–719. https://doi.org/10.1016/j.brs.2016.04.014

Hashimoto, R., & Rothwell, J. C. (1999). Dynamic changes in corticospinal excitability during motor imagery. Experimental Brain Research, 125(1), 75–81. https://doi.org/10.1007/s002210050660

Huang, Y., Liu, A. A., Lafon, B., Friedman, D., Dayan, M., Wang, X., Bikson, M., Doyle, W. K., Devinsky, O., & Parra, L. C. (2017). Measurements and models of electric fields in the in vivo human brain during transcranial electric stimulation. eLife, 6, e18834. https://doi.org/10.7554/eLife.18834

Janssens, S. E. W., & Sack, A. T. (2021). Spontaneous fluctuations in oscillatory brain state cause differences in transcranial magnetic stimulation effects within and between individuals. Frontiers in Human Neuroscience, 15, 802244. https://doi.org/10.3389/fnhum.2021.802244

Kasten, F. H., & Herrmann, C. S. (2019). Recovering brain dynamics during concurrent tACS-M/EEG: An overview of analysis approaches and their methodological and interpretational pitfalls. Brain Topography, 32(6), 1013–1019. https://doi.org/10.1007/s10548-019-00727-7

Kuhlman, S. J., Craig, L. M., & Duffy, J. F. (2018). Introduction to chronobiology. Cold Spring Harbor Perspectives in Biology, 10(9), a033613. https://doi.org/10.1101/cshperspect.a033613

Lafon, B., Henin, S., Huang, Y., Friedman, D., Melloni, L., Thesen, T., Doyle, W., Buzsaki, G., Devinsky, O., Parra, L. C., & Liu, A. A. (2017). Low frequency transcranial electrical stimulation does not entrain sleep rhythms measured by human intracranial recordings. Nature Communications, 8(1), 1199. https://doi.org/10.1038/s41467-017-01045-x

Lewis, P. M., Thomson, R. H., Rosenfeld, J. V., & Fitzgerald, P. B. (2016). Brain neuromodulation techniques: A review. The Neuroscientist, 22(4), 406–421. https://doi.org/10.1177/1073858416646707

Minarik, T., Berger, B., Althaus, L., Bader, V., Biebl, B., Brotzeller, F., Fusban, T., Hegemann, J., Jesteadt, L., Kalweit, L., Leitner, M., Linke, F., Nabielska, N., Reiter, T., Schmitt, D., Spraetz, A., & Sauseng, P. (2016). The importance of sample size for reproducibility of tDCS effects. Frontiers in Human Neuroscience, 10, 453. https://doi.org/10.3389/fnhum.2016.00453

Mitra, S., Mehta, U. M., Binukumar, B., Venkatasubramanian, G., & Thirthalli, J. (2019). Statistical power estimation in non-invasive brain stimulation studies and its clinical implications: An exploratory study of the meta-analyses. Asian Journal of Psychiatry, 44, 29–34. https://doi.org/10.1016/j.ajp.2019.07.006

Nakazono, H., Ogata, K., Takeda, A., Yamada, E., Oka, S., & Tobimatsu, S. (2021). A specific phase of transcranial alternating current stimulation at the β frequency boosts repetitive paired-pulse TMS-induced plasticity. Scientific Reports, 11(1), 13179. https://doi.org/10.1038/s41598-021-92768-x

Neuling, T., Rach, S., Wagner, S., Wolters, C. H., & Herrmann, C. S. (2012). Good vibrations: Oscillatory phase shapes perception. NeuroImage, 63(2), 771–778. https://doi.org/10.1016/j.neuroimage.2012.07.024

Niyazov, D. M., Butler, A. J., Kadah, Y. M., Epstein, C. M., & Hu, X. P. (2005). Functional magnetic resonance imaging and transcranial magnetic stimulation: Effects of motor imagery, movement and coil orientation. Clinical Neurophysiology, 116(7), 1601–1610. https://doi.org/10.1016/j.clinph.2005.02.028

Normand, M. P. (2016). Less is more: Psychologists can learn more by studying fewer people. Frontiers in Psychology, 7, 934. https://doi.org/10.3389/fpsyg.2016.00934

Noury, N., Hipp, J. F., & Siegel, M. (2016). Physiological processes non-linearly affect electrophysiological recordings during transcranial electric stimulation. NeuroImage, 140, 99–109. https://doi.org/10.1016/j.neuroimage.2016.03.065

Noury, N., & Siegel, M. (2017). Phase properties of transcranial electrical stimulation artifacts in electrophysiological recordings. NeuroImage, 158, 406–416. https://doi.org/10.1016/j.neuroimage.2017.07.010

Ogata, K., Nakazono, H., Uehara, T., & Tobimatsu, S. (2019). Prestimulus cortical EEG oscillations can predict the excitability of the primary motor cortex. Brain Stimulation, 12(6), 1508–1516. https://doi.org/10.1016/j.brs.2019.06.013

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716

Opitz, A., Falchier, A., Yan, C.-G., Yeagle, E. M., Linn, G. S., Megevand, P., Thielscher, A., Deborah, A. R., Milham, M. P., Mehta, A. D., & Schroeder, C. E. (2016). Spatiotemporal structure of intracranial electric fields induced by transcranial electric stimulation in humans and nonhuman primates. Scientific Reports, 6(1), 31236. https://doi.org/10.1038/srep31236

Piccoli, E., Cerioli, M., Castiglioni, M., Larini, L., Scarpa, C., & Dell'Osso, B. (2022). Recent innovations in non-invasive brain stimulation (NIBS) for the treatment of unipolar and bipolar depression: A narrative review. International Review of Psychiatry, 34(7–8), 715–726. https://doi.org/10.1080/09540261.2022.2132137

Qi, F., Nitsche, M. A., Ren, X., Wang, D., & Wang, L. (2023). Top-down and bottom-up stimulation techniques combined with action observation treatment in stroke rehabilitation: A perspective. Frontiers in Neurology, 14, 1156987. https://doi.org/10.3389/fneur.2023.1156987

Raco, V., Bauer, R., Tharsan, S., & Gharabaghi, A. (2016). Combining TMS and tACS for closed-loop phase-dependent modulation of corticospinal excitability: A feasibility study. Frontiers in Cellular Neuroscience, 10, 143. https://doi.org/10.3389/fncel.2016.00143

Riecke, L., Formisano, E., Herrmann, C. S., & Sack, A. T. (2015). 4-Hz transcranial alternating current stimulation phase modulates hearing. Brain Stimulation, 8(4), 777–783. https://doi.org/10.1016/j.brs.2015.04.004

Riecke, L., Formisano, E., Sorger, B., Başkent, D., & Gaudrain, E. (2018). Neural entrainment to speech modulates speech intelligibility. Current Biology, 28(2), 161–169.e165. https://doi.org/10.1016/j.cub.2017.11.033

Riecke, L., Sack, A. T., & Schroeder, C. E. (2015). Endogenous delta/theta sound-brain phase entrainment accelerates the buildup of auditory streaming. Current Biology, 25(24), 3196–3201. https://doi.org/10.1016/j.cub.2015.10.045

Riecke, L., & Zoefel, B. (2018). Conveying temporal information to the auditory system via transcranial current stimulation. Acta Acustica United with Acustica, 104(5), 883–886. https://doi.org/10.3813/AAA.919235

Rossini, P. M., Barker, A. T., Berardelli, A., Caramia, M. D., Caruso, G., Cracco, R. Q., Dimitrijević, M. R., Hallett, M., Katayama, Y., Lücking, C. H., Maertens de Noordhout, A. L., Marsden, C. D., Murray, N. M. F., Rothwell, J. C., Swash, M., & Tomberg, C. (1994). Non-invasive electrical and magnetic stimulation of the brain, spinal cord and roots: Basic principles and procedures for routine clinical application. Report of an IFCN committee. Electroencephalography and Clinical Neurophysiology, 91(2), 79–92. https://doi.org/10.1016/0013-4694(94)90029-9

Rouder, J. N., & Haaf, J. M. (2018). Power, dominance, and constraint: A note on the appeal of different design traditions. Advances in Methods and Practices in Psychological Science, 1(1), 19–26. https://doi.org/10.1177/2515245917745058

Schaworonkow, N., Gordon, P. C., Belardinelli, P., Ziemann, U., Bergmann, T. O., & Zrenner, C. (2018). μ-Rhythm extracted with personalized EEG filters correlates with corticospinal excitability in real-time phase-triggered EEG-TMS. Frontiers in Neuroscience, 12, 954. https://doi.org/10.3389/fnins.2018.00954

Schaworonkow, N., Triesch, J., Ziemann, U., & Zrenner, C. (2019). EEG-triggered TMS reveals stronger brain state-dependent modulation of motor evoked potentials at weaker stimulation intensities. Brain Stimulation, 12(1), 110–118. https://doi.org/10.1016/j.brs.2018.09.009

Schilberg, L., Engelen, T., ten Oever, S., Schuhmann, T., de Gelder, B., de Graaf, T. A., & Sack, A. T. (2018). Phase of beta-frequency tACS over primary motor cortex modulates corticospinal excitability. Cortex, 103, 142–152. https://doi.org/10.1016/j.cortex.2018.03.001

Smith, P. L., & Little, D. R. (2018). Small is beautiful: In defense of the small-N design. Psychonomic Bulletin & Review, 25(6), 2083–2101. https://doi.org/10.3758/s13423-018-1451-8

Thies, M., Zrenner, C., Ziemann, U., & Bergmann, T. O. (2018). Sensorimotor mu-alpha power is positively related to corticospinal excitability. Brain Stimulation, 11(5), 1119–1122. https://doi.org/10.1016/j.brs.2018.06.006

Tiwari, R., Kumar, R., Malik, S., Raj, T., & Kumar, P. (2021). Analysis of heart rate variability and implication of different factors on heart rate variability. Current Cardiology Reviews, 17(5), e160721189770. https://doi.org/10.2174/1573403x16999201231203854

Turner, B. O., Paul, E. J., Miller, M. B., & Barbey, A. K. (2018). Small sample sizes reduce the replicability of task-based fMRI studies. Communications Biology, 1, 62. https://doi.org/10.1038/s42003-018-0073-z

Uhlén, P., & Fritz, N. (2010). Biochemistry of calcium oscillations. Biochemical and Biophysical Research Communications, 396(1), 28–32. https://doi.org/10.1016/j.bbrc.2010.02.117

Vicario, C. M., Salehinejad, M. A., Felmingham, K., Martino, G., & Nitsche, M. A. (2019). A systematic review on the therapeutic effectiveness of non-invasive brain stimulation for the treatment of anxiety disorders. Neuroscience & Biobehavioral Reviews, 96, 219–231. https://doi.org/10.1016/j.neubiorev.2018.12.012

Vidaurre, D., Smith, S. M., & Woolrich, M. W. (2017). Brain network dynamics are hierarchically organized in time. Proceedings of the National Academy of Sciences of the United States of America, 114(48), 12827–12832. https://doi.org/10.1073/pnas.1705120114

Vöröslakos, M., Takeuchi, Y., Brinyiczki, K., Zombori, T., Oliva, A., Fernández-Ruiz, A., Kozák, G., Kincses, Z. T., Iványi, B., Buzsáki, G., & Berényi, A. (2018). Direct effects of transcranial electric stimulation on brain circuits in rats and humans. Nature Communications, 9(1), 483. https://doi.org/10.1038/s41467-018-02928-3

Vosskuhl, J., Strüber, D., & Herrmann, C. S. (2018). Non-invasive brain stimulation: A paradigm shift in understanding brain oscillations. Frontiers in Human Neuroscience, 12, 211. https://doi.org/10.3389/fnhum.2018.00211

Wilsch, A., Neuling, T., Obleser, J., & Herrmann, C. S. (2018). Transcranial alternating current stimulation with speech envelopes modulates speech comprehension. NeuroImage, 172, 766–774. https://doi.org/10.1016/j.neuroimage.2018.01.038

Wischnewski, M., Alekseichuk, I., & Opitz, A. (2023). Neurocognitive, physiological, and biophysical effects of transcranial alternating current stimulation. Trends in Cognitive Sciences, 27(2), 189–205. https://doi.org/10.1016/j.tics.2022.11.013

Xu, Z., Adam, K. C. S., Fang, X., & Vogel, E. K. (2018). The reliability and stability of visual working memory capacity. Behavior Research Methods, 50(2), 576–588. https://doi.org/10.3758/s13428-017-0886-6

Yang, G., Guo, L., Zhang, Y., & Li, S. (2024). Network meta-analysis of non-pharmacological interventions for cognitive impairment after an ischemic stroke. Frontiers in Neurology, 15, 1327065. https://doi.org/10.3389/fneur.2024.1327065

Zalesky, A., Fornito, A., Cocchi, L., Gollo, L. L., & Breakspear, M. (2014). Time-resolved resting-state brain networks. Proceedings of the National Academy of Sciences of the United States of America, 111(28), 10341–10346. https://doi.org/10.1073/pnas.1400181111

Zoefel, B., Archer-Boyd, A., & Davis, M. H. (2018). Phase entrainment of brain oscillations causally modulates neural responses to intelligible speech. Current Biology, 28(3), 401–408.e405. https://doi.org/10.1016/j.cub.2017.11.071

Zoefel, B., Davis, M. H., Valente, G., & Riecke, L. (2019). How to test for phasic modulation of neural and behavioural responses. bioRxiv, 517243. https://doi.org/10.1101/517243

Zrenner, C., Belardinelli, P., Ermolova, M., Gordon, P. C., Stenroos, M., Zrenner, B., & Ziemann, U. (2022). µ-rhythm phase from somatosensory but not motor cortex correlates with corticospinal excitability in EEG-triggered TMS. Journal of Neuroscience Methods, 379, 109662. https://doi.org/10.1016/j.jneumeth.2022.109662

Zrenner, C., Desideri, D., Belardinelli, P., & Ziemann, U. (2018). Real-time EEG-defined excitability states determine efficacy of TMS-induced plasticity in human motor cortex. Brain Stimulation, 11(2), 374–389. https://doi.org/10.1016/j.brs.2017.11.016
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.
