Web-based experiments are gaining momentum in motor learning research because of the desire to increase statistical power, decrease overhead for human participant experiments, and utilize a more demographically inclusive sample population. However, there is a vital need to understand the general feasibility and considerations necessary to shift tightly controlled human participant experiments to an online setting. We developed and deployed an online experimental platform modeled after established in-laboratory visuomotor rotation experiments to serve as a case study examining remotely collected data quality for an 80-min experiment. Current online motor learning experiments have thus far not exceeded 60 min, and current online crowdsourced studies have a median duration of approximately 10 min. Thus, the impact of a longer-duration, web-based experiment is unknown. We used our online platform to evaluate perturbation-driven motor adaptation behavior under three rotation sizes (±10°, ±35°, and ±65°) and two sensory uncertainty conditions. We hypothesized that our results would follow predictions by the relevance estimation hypothesis. Remote execution allowed us to double (n = 49) the typical participant population size from similar studies. Subsequently, we performed an in-depth examination of data quality by analyzing single-trial data quality, participant variability, and potential temporal effects across trials. Results replicated in-laboratory findings and provided insight on the effect of induced sensory uncertainty on the relevance estimation hypothesis. Our experiment also highlighted several specific challenges associated with online data collection including potentially smaller effect sizes, higher data variability, and lower recommended experiment duration thresholds. Overall, online paradigms present both opportunities and challenges for future motor learning research.

Online experiments have risen in prevalence as a way to recruit participants and collect data quickly, with low resource expenditure, primarily in the social sciences. These studies have replicated well-known research in both psychology and economics (Palan & Schitter, 2018; Paolacci & Chandler, 2014) and have included both behavioral surveys and cognitive tasks based on RT (Crump, McDonnell, & Gureckis, 2013). In addition, the use of crowdsourcing platforms for recruitment, such as Amazon's Mechanical Turk (MTurk) and Prolific Academic (ProA), has enabled access to larger sample populations and a wider range of population demographics than that of traditional in-laboratory studies (Palan & Schitter, 2018).

Current efforts have expanded online experiments into motor learning research (Kim, Forrence, & McDougle, 2022; Cesanek, Zhang, Ingram, Wolpert, & Flanagan, 2021; Bönstrup, Iturrate, Hebart, Censor, & Cohen, 2020; Kahn, Karuza, Vettel, & Bassett, 2018) and have demonstrated promise in terms of generalizability across experimental contexts (Kim et al., 2022; Wang, Avraham, Tsay, Thummala, & Ivry, 2022; Tsay, Kim, Saxena, et al., 2022a; Cesanek et al., 2021; Bönstrup et al., 2020). Furthermore, they have enabled increases in statistical power and provided insight on the distribution of data collected through these means. The shift toward online platforms potentially offers a way to circumvent the time- and resource-consuming nature of human participant testing by eliminating the use of high-cost equipment including robot manipulandums, sensors, and virtual reality systems. It also alleviates the necessity of onsite researchers for experiment administration. However, there is a vital need to understand the considerations required to shift tightly controlled human participant experiments to an online setting and the potential effect of that shift on experimental outcomes. Our central aims are to use a web-based experimental paradigm to (1) replicate in-laboratory findings, (2) evaluate the effect of induced sensory uncertainty on adaptation behavior to a range of perturbation sizes, and (3) examine the data quality and experimental considerations for an 80-min online visuomotor rotation (VMR) task. We propose that effectively utilizing online platforms can lead to rapid expansion and acceleration of motor learning research but requires thorough vetting and characterization of differences between in-laboratory and online experimental paradigms before utilization.

Background

Researchers have long theorized that humans learn motor skills by iteratively recalibrating, or adapting, internal (brain–body) models of specific movement patterns in response to perceived error (Körding & Wolpert, 2004). This process is often modeled from a Bayesian perspective because researchers postulate that humans predict and execute movements by optimally integrating knowledge from past movement outcomes (i.e., internal model) with incoming error information (i.e., sensory feedback; Körding & Wolpert, 2004; Wolpert & Ghahramani, 2000). The resulting movement then produces the outcome with the least variance. The Bayesian framework yields two interesting implications (Wei & Körding, 2010; Körding & Wolpert, 2004). The first implication is that an increase in sensory feedback uncertainty increases reliance on the internal model and decreases the adaptation rate in response to perceived errors. Conversely, the second implication is that an increase in internal model uncertainty decreases trust in the internal model, thus increasing reliance on sensory feedback and increasing the adaptation rate in response to perceived error.
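This optimal integration can be sketched as a precision-weighted average of the internal model's prediction and the noisy sensory observation. The function below is a minimal illustration of the two implications under an assumed Gaussian prior and Gaussian feedback; the variable names are ours, not from the cited literature:

```javascript
// Precision-weighted (Bayesian) combination of a prior (internal model)
// with a noisy sensory observation. Illustrative sketch only.
function bayesEstimate(priorMean, priorVar, obs, obsVar) {
  const wPrior = obsVar / (priorVar + obsVar); // weight on internal model
  const wObs = priorVar / (priorVar + obsVar); // weight on sensory feedback
  return wPrior * priorMean + wObs * obs;
}
```

Raising the feedback variance (`obsVar`) pulls the estimate toward the prior, capturing the first implication (slower response to perceived error); raising the prior variance pulls it toward the observation, capturing the second.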

Körding and Wolpert (2004) assessed the first implication with a hand-to-target task by training participants to reach their right index finger to a target in a virtual-reality environment. Cursor position feedback was displayed as a single dot, a medium blur, a large blur, or withheld entirely and was only available midway through each trial. The purpose of the four levels of feedback displays was to manipulate sensory (visual) feedback uncertainty from low (single dot) to high (withheld). The display of midway feedback was randomized from trial to trial, and endpoint feedback was only provided for low uncertainty (single dot) trials. A lateral shift, or perturbation, was added to the cursor position unknown to the participant and was reflected in the midway feedback. The subsequent lateral displacement of the final cursor position from the target on each trial was taken as the participant's estimate of the lateral shift on the previous trial.

Results verified that the influence of sensory feedback uncertainty on the final deviation from the target matched the behavior predicted by the first implication of the Bayesian framework (Körding & Wolpert, 2004). Additional support for the first implication is abundant (He et al., 2016; Johnson, Kording, Hargrove, & Sensinger, 2014; Burge, Ernst, & Banks, 2008). Our recent work substantiated the first implication for our remote, web-based experimental paradigm by verifying that when sensory feedback uncertainty was increased (small distribution vs. large distribution of a five-cursor cloud) in a cursor-to-target task, adaptation rate decreased (Shyr & Joshi, 2021), matching the results reported in Wei and Körding (2010).
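The trial-by-trial effect described above can be sketched with a simple state-space model in which each trial's correction is a fixed fraction of the perceived error, with that fraction (the adaptation rate) set by a Kalman-style ratio of model uncertainty to total uncertainty. This is our illustrative formulation, not the exact model fit in these studies:

```javascript
// Adaptation rate as the fraction of total uncertainty attributable to
// the internal model (Kalman-gain-style ratio). Illustrative only.
function adaptationRate(modelVar, feedbackVar) {
  return modelVar / (modelVar + feedbackVar);
}

// Simulate trial-by-trial error under a constant perturbation: the motor
// estimate is nudged by a fraction B of each trial's perceived error.
function simulate(perturbation, trials, modelVar, feedbackVar) {
  const B = adaptationRate(modelVar, feedbackVar);
  let estimate = 0;
  const errors = [];
  for (let k = 0; k < trials; k++) {
    const error = perturbation - estimate; // perceived error on trial k
    errors.push(error);
    estimate += B * error;                 // correction applied on trial k + 1
  }
  return errors;
}
```

With noisier feedback (larger `feedbackVar`), B shrinks and the error decays more slowly across trials, matching the reported slowing of adaptation under increased sensory uncertainty.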

Research for the second implication of the Bayesian framework is comparatively sparse and has continued to prove difficult to study as experimentally inducing internal model uncertainty often relies on altering external parameters such as consistency in the task environment (Izawa, Rane, Donchin, & Shadmehr, 2008) and manipulation of visual feedback (Wei & Körding, 2010). However, indirect evidence exists in support of the second implication. Robinson, Soetedjo, and Noto (2006) increased model uncertainty in monkeys by having them perform eye saccades in darkness. Thus, any new motor observations were perceived to be more precise than old information, which led to faster learning. Similarly, a handful of human studies also yielded an increase in adaptation rate by having participants undergo randomized blocks of perturbation-driven adaptation trials with veridical feedback, no feedback, or eyes closed and stationary (Wei & Körding, 2010), imposing a random walk between the cursor endpoint and visual feedback (Burge et al., 2008), or having participants perform a cursor-to-target task with targets of varying distances (He et al., 2016). In another study, Lyons and Joshi (2018) were able to show initial support for the second implication with an EMG-based control system in which noise was injected directly into the mapping between six EMG channels and a two-dimensional cursor position to simulate model uncertainty. As such, it was concluded that increasing model uncertainty increased adaptation rate.

A caveat in the majority of these studies is that they often consider only one perturbation size with a magnitude less than 30°. Table 1 provides maximum perturbation ratios, the ratio of the largest perturbation evaluated in a given study to the target distance (unitless), for representative motor adaptation studies examining the Bayesian framework. These studies seldom pass a maximum perturbation ratio of 0.250, indicating that their perturbations are, at most, less than a quarter of the movement distance. Two studies that substantially surpassed the 0.250 mark are highlighted in Table 1. The first study (Wei & Körding, 2009) evaluated a wider range of perturbations (maximum perturbation ratio of 0.533) and demonstrated that adaptation behavior follows a sigmoidal shape, with larger perturbations resulting in decreased error correction and, as a result, a diminished adaptation rate. Thus, they set forth the relevance estimation hypothesis (REH), which suggests that error relevance is a primary factor in driving adaptive behavior.
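For angular perturbations, the ratio can be computed as the Euclidean distance between the unperturbed endpoint and the rotated feedback (the chord subtended by the rotation angle) divided by the target distance; this chord formula is our reading of the conversion noted in the table footnote, not an explicit equation from the cited studies:

```javascript
// Maximum perturbation ratio for an angular rotation: chord length
// between endpoint and rotated feedback, divided by the target distance
// (the radius). Unitless. Our reading of the footnote conversion.
function perturbationRatio(rotationDeg) {
  const rad = (rotationDeg * Math.PI) / 180;
  return 2 * Math.sin(rad / 2); // chord / radius
}
```

Under this reading, a 65° rotation gives a ratio of about 1.075 (this study's largest) and a 90° rotation gives about 1.414.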

Table 1.

Overview of Motor Learning Research in Reference to Bayesian Motor Learning Models


The REH draws from the idea of causal inference in perception from cognitive science and suggests that the nervous system adapts nonlinearly with respect to error magnitude because the relevance of the error not only considers sensory cues from different modalities but interprets them in terms of their causes (Wei & Körding, 2009). This model relies on a Bayesian framework in which the central nervous system solves a credit-assignment problem to determine the relevance of sensory feedback in the presence of uncertainty in both the feedback and the internal model. In their study, Wei and Körding (2009) demonstrated that in a VMR task, (1) adaptation behavior scaled linearly with small perturbations but became sublinear for larger perturbations and (2) reducing motor variability (decreasing internal model uncertainty) by having participants reach to a closer target enabled them to determine relevance more easily. Thus, the range of errors that yielded a linear adaptive response was smaller for participants conducting reaches to a closer target.

The second study (Tsay, Avraham, Morehead, Kim, & Ivry, 2021) recently investigated the validity of the REH model (maximum perturbation ratio of 1.414) using a novel experimental paradigm aimed at isolating implicit motor adaptation. Their results showed nonlinearity between error correction (hand deviation between Trial k and Trial k + 1) and the size of experimental perturbations (in Trial k), thus supporting the REH. Furthermore, they hypothesized that if they were to test a range of perturbations under two sensory uncertainty conditions (single cursor and cursor cloud), there should be a point at which the cursor cloud data showed a greater corrective response than that seen for the single cursor data because of a lower likelihood that the visual feedback would be deemed irrelevant. This greater corrective response would result in a "crossover" between the single cursor and cursor cloud conditions, as illustrated for perturbations ±p2 in Figure 1, and would lead to an extended linear relationship between hand deviation and perturbation for the cursor cloud compared with the single cursor. Tsay, Avraham, and colleagues (2021) did not observe any evidence of a crossover in their experiment. However, they noted limited statistical power and their novel methodology as potential explanations. Their experiment included 24 participants, and they used a less intuitive motor learning task than that traditionally used in VMR studies: Participants were informed of the nature of the target-clamped perturbations and instructed to ignore the provided feedback.
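The crossover logic can be made concrete with a toy mixture model: the corrective response equals the raw error scaled by the inferred probability that the error is task-relevant. Added sensory uncertainty broadens the "relevant" likelihood, which extends the near-linear range and produces a crossover at large errors. All distributions and parameter values below are illustrative assumptions, not fitted values from these studies:

```javascript
// Probability that an error of a given size is task-relevant, from a
// two-component mixture: a "relevant" Gaussian (width relevantSd) vs. a
// broad "irrelevant" Gaussian. Parameters are illustrative only.
function pRelevant(error, relevantSd, pIrrelevant = 0.5, irrelevantSd = 60) {
  const like = (x, sd) => Math.exp(-(x * x) / (2 * sd * sd)) / sd;
  const rel = (1 - pIrrelevant) * like(error, relevantSd);
  const irr = pIrrelevant * like(error, irrelevantSd);
  return rel / (rel + irr);
}

// Relevance-weighted correction: near-linear for small errors, sublinear
// (eventually shrinking) for large errors deemed likely irrelevant.
function correction(error, B, relevantSd) {
  return B * error * pRelevant(error, relevantSd);
}
```

With a narrow "relevant" distribution (single cursor), corrections collapse for large errors; broadening it (cursor cloud) keeps large errors relevant, so the cloud condition overtakes the single cursor condition at some perturbation size, as in Figure 1.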

Figure 1.

Hypothesis for the REH: The single cursor condition represents feedback without any added sensory uncertainty. The cursor cloud condition represents feedback where uncertainty is added by displaying feedback as a distribution of multiple cursors (cursor cloud). The hand deviation for the cursor cloud condition displays an extended linear relationship as well as a "crossover" at ±p2, where it is hypothesized that larger perturbations are less likely to be seen as irrelevant because of the added sensory uncertainty.


Because of the time-intensive nature of in-laboratory experiments, the sample size in Tsay, Avraham, and colleagues (2021) is common and is actually larger than that found in many motor learning studies. Table 1 provides the paradigm descriptions, experiment durations, sample sizes, demographic information, and recruitment methods for several motor adaptation studies examining Bayesian motor learning models. Of the 12 studies that included participant testing, only two between-subjects and two within-subject studies had more than 20 participants per experiment (and per group for the between-subjects experiments). In addition, although it is unknown how participants were recruited for these studies, studies generally recruit from the undergraduate population (Palan & Schitter, 2018). Thus, we suggest that the implementation of an online experimental platform would not only enable a larger sample size but also provide access to a more diverse sample population and strengthen statistical conclusions. This is an important consideration, as the central nervous system and resulting motor skills are certainly impacted by demographic factors like age, gender, and physical activity level (Jiménez-Jiménez et al., 2011; Rikli & Busch, 1986).

Table 2 summarizes recent online motor learning literature. The initial effort aimed at studying motor learning online was led by Kahn and colleagues (2018), who studied how humans encode statistical regularities from complex, temporally structured stimuli using a self-paced RT task with participants recruited through MTurk. Subsequent efforts include a VMR experiment examining how the brain computes and integrates sensory prediction errors into the motor learning process in the absence of movement (Kim et al., 2022), an object lifting task evaluating how humans determine object dynamics based on motor memories (Cesanek et al., 2021), a procedural motor skill task demonstrating the learning effects of short rest periods taken during practice (Bönstrup et al., 2020), and a web-based (Tsay, Kim, Saxena, et al., 2022b) and in-laboratory (Tsay, Kim, Saxena, et al., 2022a) reaching experiment evaluating two sources of use-dependent biases in motor execution. Work by Tsay, Kim, Saxena, and colleagues (2022a, 2022b) indicated comparable results between web-based and in-laboratory paradigms, and work by Wang and colleagues (2022) replicated the well-established effect that adaptation is greater for continuous feedback than for standard endpoint feedback. Last, our recent work (Shyr & Joshi, 2021) validated results from Wei and Körding (2010) utilizing a web-based cursor-to-target paradigm and participants recruited through university pipelines. This work assessed how induced visual feedback uncertainty affected trial-by-trial motor adaptation behavior (first implication of the Bayesian framework).

Table 2.

Summary of Online Motor Learning Literature

Publication | Research Focus | Paradigm Description (Laboratory or Online) | Experiment Duration (Single Session) | Maximum Perturbation Ratio (Perturbation Size / Target Distance) | Sample Size | Demographic Information | Recruitment Method & Compensation
Bönstrup et al. (2020) | Demonstrates the effect of short rest periods during practice of a procedural motor skill task on consolidation for early skill learning | Online: procedural motor skill task with keyboard | Exp 1: 12 min; Exp 2: 8 min; Exp 3: 12 min; Exp 4: 12 min | N/A | Exp 1: 389; Exp 2: 3 groups (118, 126, 129 participants/group); Exp 3: 118; Exp 4: 71 | Exp 1: 39.6 ± 0.56 years old, 224 F; Exp 2: 38.8 ± 0.65 years old, 239 F; Exp 3: 35.4 ± 1.00 years old, 67 F; Exp 4: 35.63 ± 1.14 years old, 37 F; all: naive, right-handed, >95% approval rate on MTurk, located in the United States | Amazon's Mechanical Turk; Exp 1, 3, 4: $2; Exp 2: $1.50 (equivalent to $8/hr)
Cesanek et al. (2021) | Examines how participants learn weights of objects when repeatedly lifted | Online: object lifting task with mouse or trackpad; Laboratory: object lifting task with robotic manipulandum, virtual reality headset, USB response pad | Online: unknown; Laboratory: 45 min | N/A | Exp (online): 4 groups (37, 36, 37, 39 participants/group); Exp 1 (laboratory): 2 groups, 15 participants/group; Exp 2 (laboratory): 3 groups, 9 participants/group; Exp 3 (laboratory): 2 groups, 11 participants/group | Online (1 Exp): 19–70 years old (median 31.5), 60 F, 135 M, 1 nonbinary; Laboratory (3 Exps): 18–45 years old (median 24 years old), 38 F, 42 M, right-handed, normal or corrected-to-normal vision, no history of movement disorders | Amazon's Mechanical Turk; Online Exp: paid $1.50 upon successful completion, bonus based on performance (max of $1.60); Laboratory Exp: $17/hr
Kahn et al. (2018) | Investigates the organizational principles between stimuli and associated dependencies that are encoded by the brain | Online: self-paced serial RT task with keyboard | Exp 1: 35 min; Exp 2: 40 min; Exp 3: 60 min | N/A | Exp 1: 109; Exp 2: 59; Exp 3: 213 | Unknown | Amazon's Mechanical Turk; Exp 1 & 2: up to $7; Exp 3: up to $11 (dependent on completion and performance)
Kim et al. (2022) | Examines how the brain computes sensory prediction errors in the absence of movements and how these prediction errors drive adaptation of planned movements | Online: reaching task with input device of choice; Laboratory: plane reaching task with robot manipulandum | Unknown | Online & Laboratory: 0.262 | Exp 1 (laboratory): 20; Exp 2 (online): 40; Exp 3 (online): 37; Exp 4 (online): 24; Exp 5 (online): 37 | 18–35 years old, 126 F (of 233 total participants; not all were included in the analysis) | Prolific, university pipeline; compensation unknown
Shyr and Joshi (2021) | Determines the validity of web-based experimental results for the effect of sensory uncertainty on trial-by-trial adaptation rate | Online: cursor-to-target task using a laptop trackpad | 35 min | 0.261Δ | 18 (15 included in analysis, 12 F) | (of 18 total participants) 19.8 ± 1.5 years old; 15 F; 17 participants right-handed; normal or corrected-to-normal vision | University pipeline for undergraduate psychology students; course credit
Tsay, Kim, et al. (2022) | Evaluates two sources of use-dependent biases in motor execution for reaching movements | Online: cursor-to-target task using a laptop trackpad; Laboratory: plane reaching task with modified air hockey paddle (embedded stylus) on a tablet | Unknown | N/A | Exp 1 (laboratory): 10; Exp 2 (laboratory): 32; Exp 1 (online, with modifications to laboratory Exp 1): 87 | Laboratory (of 42 total participants): 20 ± 2.2 years old, right-handed; Online: unknown | Unknown recruitment; Online: unknown compensation; Laboratory: course credit or financial compensation

F = female; M = male; Δ = angular perturbation converted to the Euclidean distance between the cursor endpoint and the feedback.

Although online psychology studies recruiting participants through crowdsourcing platforms have demonstrated higher quality data (evaluated through reproduction of known effects, attention-check accuracy, and dishonesty assessments) when compared with a participant pool recruited through university channels (Peer, Brandimarte, Samat, & Acquisti, 2017), additional investigation is necessary to determine if data quality is maintained in motor learning studies. Motor learning studies often rely heavily on tightly controlled environments and standardized workflows. The aforementioned online motor learning studies show promise in generalizing results across laboratory and online contexts. However, we present our work as a specific case study into the data quality (single trial, aggregated metrics, and over time) associated with online VMR experiments.

The average length of in-laboratory motor learning experiments is difficult to determine from published literature as it is not always reported. However, from available information, duration appears to range anywhere from 5 min to 3.5 hr (Table 1). As for online experiments (Table 2), the experimental duration ranges from 8–12 min up to 60 min and ProA reports the median experiment duration as approximately 10 min (25% of studies are < 5 min, 50% of studies are between 5 and 15 min, and 25% of studies are > 15 min; Tulloch, 2021). Interestingly, both Kim and colleagues (2022) and Cesanek and colleagues (2021) reduced the number of trials between their in-laboratory and online protocol from 480 to 270 and from 320 to 160, respectively, with little explanation. Two studies outside motor learning have conducted longitudinal online studies (Daly & Nataraajan, 2015; Kar, Fang, Delle Fave, Sintov, & Tambe, 2015), but the effect of an extended duration for a single-session experiment remains unclear.

In summary, this study serves as a case study for online VMR experiments conducted on a longer time scale (> 60 min), and our hope is to contribute to the growth of online platforms by discussing the challenges and lessons learned associated with web-based motor learning experiments. We evaluated two sensory uncertainty conditions, single cursor and cursor cloud, where the single cursor condition had no added sensory uncertainty and the cursor cloud condition increased sensory uncertainty by displaying feedback as a five-cursor cloud. Both uncertainty conditions were tested for a range of perturbation sizes (maximum perturbation ratio of 1.075), and, following the REH, we hypothesized that (1) trial-by-trial adaptation rate in response to small (< 30°) perturbations would slow under increased sensory uncertainty and, similarly, under reduced motor variability; (2) there would be a nonlinear, sigmoidal relationship between error correction and perturbation under both sensory uncertainty conditions; and (3) increasing sensory uncertainty would increase the perceived relevance of larger perturbation sizes, thus extending the linear relationship between error correction and perturbation. Following this reasoning, the hand deviation of the cursor cloud condition should "cross over" that of the single cursor condition as perturbations increase in magnitude (Figure 1).

The entirety of our study was conducted using a remote, web-based application. Eighty-six participants entered our study through the provided link. Thirty participants exited the study before beginning any trials (encouraged if they did not meet hardware specifications), and seven timed out (incomplete; see the Experimental Paradigm section for more details), leaving 49 participants who completed the study. Two of these were eliminated for poor compliance (as described in the Compliance Check section), so 47 participants were considered for analysis. Table 3 shows the demographic breakdown of our participant population (n = 49) based on handedness, age, ethnicity, race, biological sex, gender identity, video game experience, and video game playing frequency. All participants were informed of and consented to written procedures approved by the institutional review board at the University of California, Davis (Protocol No. 1677528–4). Our sample size exceeded that of previous in-laboratory publications (Wei & Körding, 2009, 2010) and was on par with existing online VMR work (Kim et al., 2022).

Table 3.

Participant Demographics

Characteristic | Classification | Prevalence (%)
Handedness Left 12.2 
Right 87.7 
Age 18–24 69.4 
25–30 20.4 
31–36 4.1 
37–42 2.0 
43–48 
49–54 4.1 
Ethnicity Hispanic or Latino 30.6 
NOT Hispanic or Latino 69.4 
Race American Indian or Alaska Native 6.1 
Black or African American 14.3 
Black or African American & White 4.1 
Unspecified 2.0 
White 73.5 
Biological sex Female 51.0 
Male 49.0 
Gender identity Female 44.9 
Female-to-Male 4.1 
Gender nonconforming 2.0 
Male 49.0 
Recent video game activity (within the last year) No 10.2 
Yes 89.8 
Frequency of video game activity 0 hr/week 10.2 
< 1 hr/week 10.2 
1–5 hr/week 32.7 
6–10 hr/week 20.4 
11–20 hr/week 14.3 
> 20 hr/week 12.2 

Recruitment for this study utilized ProA, a crowdsourcing platform designed specifically for research. ProA caters to the research community by providing clear guidelines for researchers, specifications for participant payment, and good recruitment standards (Palan & Schitter, 2018). In addition, ProA offers a more diverse population pool in terms of geography and ethnicity over other crowdsourcing platforms (Palan & Schitter, 2018). For our study, we selected the following ProA parameters. We opened recruitment to all countries, balanced our participant pool based on sex, restricted age to 18–60 years (mitigating any extreme age effects), required fluency in English as our consent documents and experiment were administered in English, excluded individuals with long-term health conditions including multiple sclerosis, and selected individuals with either normal or corrected-to-normal vision. These parameters were chosen to ensure we collected data from a broad pool of participants who were healthy and able to fully comprehend the task without unnecessarily excluding a subset of the human population.

Experimental Paradigm

Figure 2 outlines the system architecture. Participants were required to open the study link through a web browser. The client request was then forwarded to Traefik, which was configured as a reverse proxy. Custom code written in JavaScript was deployed through a dedicated server managed by the JATOS tool (Lange, Kühn, & Filevich, 2015) via Docker. JATOS allowed participants to access the study through a single user link, prevented repeat submissions, and stored data accordingly. Push events to GitLab triggered code updates via webhook. The codebase for the experiment task itself was expanded from the open-source package OnPoint (Tsay, Lee, Ivry, & Avraham, 2021), and the initial version was utilized in Shyr and Joshi (2021). Initial instructions required participants to have access to a laptop with a functioning trackpad and a Google Chrome browser. Participants were instructed to exit the experiment if they could not meet these specifications (e.g., only had access to a desktop with no trackpad), as our paradigm did not track input device information. ProA also did not provide any way to screen participants based on laptop/trackpad usage. However, Kim and colleagues (2022) suggest that input devices (mouse, trackball, or trackpad) have an insignificant impact on online VMR experiments. Participants did not receive any repercussions for exiting at this stage.
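A deployment of this shape can be sketched as a container configuration in which Traefik routes incoming requests to a JATOS container. The service names, image tags, hostname, and labels below are illustrative assumptions for such a setup, not the study's actual configuration:

```yaml
# Illustrative sketch only: Traefik as a reverse proxy in front of JATOS.
services:
  traefik:
    image: traefik:v2.9
    command:
      - --providers.docker=true          # discover services via Docker labels
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  jatos:
    image: jatos/jatos:latest
    labels:
      # hypothetical hostname; route matching requests to JATOS (default port 9000)
      - traefik.http.routers.jatos.rule=Host(`study.example.org`)
      - traefik.http.services.jatos.loadbalancer.server.port=9000
```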

Figure 2.

System architecture designed for online motor learning research.


In addition to the capabilities enabled by OnPoint (Tsay, Lee, et al., 2021), our platform implemented several new features: automatic reset of the cursor to the start following each trial (reducing visual and proprioceptive feedback during the return process), gamification (modeling the cursor-to-target task as an asteroid-hitting game), built-in timeouts to encourage compliance, and the capability to display cursor feedback as either a single cursor or a five-cursor cloud generated from a two-dimensional Gaussian distribution to impart sensory uncertainty. In addition, our application automatically launched in full-screen mode with the Pointer Lock API (Zolghadr & Ahmed, 2022) enabled. Pointer Lock removed the cursor from view and also eliminated cursor movement limitations at the edge of the viewport (allowing automatic resets of the cursor to the start). Participants were prompted to click a button to re-enter full screen if they accidentally exited it; they could not continue unless full screen was resumed (which subsequently re-enabled pointer lock).
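
A minimal sketch of this full-screen/pointer-lock gating, using the standard browser APIs; the function names here are our own illustration, not the platform's actual code:

```javascript
// The task may proceed only when both full screen and pointer lock are active.
function isImmersive(fullscreenEl, pointerLockEl) {
  return fullscreenEl != null && pointerLockEl != null;
}

// Called from the "re-enter full screen" button's click handler.
// requestPointerLock() hides the cursor and removes viewport-edge limits.
async function enterImmersiveMode(canvas) {
  if (document.fullscreenElement == null) {
    await canvas.requestFullscreen();
  }
  canvas.requestPointerLock();
}

// In the browser, gate each trial on:
// isImmersive(document.fullscreenElement, document.pointerLockElement)
```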

Our experimental paradigm required participants to hit targets with the cursor by performing a swiping motion on the trackpad of their laptop (Figure 3A inset). Figure 3A shows the central elements of the user interface. The cursor was represented as a smiley face, the starting point was represented as an Earth icon, the target was represented as an asteroid icon, and the main task was described as a mission to save Earth from an asteroid by having the participant swipe the smiley face through the asteroid for each trial. The “Trial Counter” tracked trial progress through each block, and “study progress” indicated progress through the study as a whole and was provided as a percentage of study completion. The start was positioned directly in the center of the screen, and each trial began with the cursor centered at the start. The target was set 300 pixels (px) away from the start at an angle of +45° for right-handed participants or +135° for left-handed participants for all trials. All angles follow a counterclockwise convention from the horizontal axis.
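
The target placement reduces to a short calculation. A sketch with names of our choosing, assuming standard screen coordinates (y increasing downward), hence the negated sine term for the counterclockwise convention:

```javascript
const TARGET_DISTANCE_PX = 300; // px, from the paradigm

// angleDeg follows the counterclockwise convention from the horizontal axis:
// +45 deg for right-handed participants, +135 deg for left-handed participants.
function targetPosition(startX, startY, angleDeg) {
  const a = (angleDeg * Math.PI) / 180;
  return {
    x: startX + TARGET_DISTANCE_PX * Math.cos(a),
    y: startY - TARGET_DISTANCE_PX * Math.sin(a), // minus: CCW on a screen
  };
}
```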

Figure 3.

The experimental paradigm: (A) Central elements of the user interface (drawn for clarity). Note that left-handed participants saw the same interface elements, but the target position was mirrored across the vertical axis. The experimental task required participants to perform a swiping motion across the trackpad of their laptops to move the cursor from the start to the target as shown in the inset. (B) A +10° rotation with feedback as a single cursor, and (C) a −10° rotation with feedback displayed as a cursor cloud. Note that the actual cursor position is represented by the transparent smiley face. Endpoint feedback for both conditions could be unperturbed or rotated by ±10°, ±35°, or ±65° before being displayed. Rotations followed a counterclockwise convention and were measured from the angle of the last recorded cursor position (transparent smiley face) and placed at a distance equivalent to the target distance.


Participants received instructions before each block of trials. The first block directed participants to "quickly move Smiley towards The Asteroid in a swiping motion. Remember, you only need to pass through (not land on) The Asteroid to complete the trial. Focus on hitting the targets as QUICKLY and ACCURATELY as possible." Every subsequent block included a reminder to continue hitting the targets as "QUICKLY and ACCURATELY as possible" to prevent explicit re-aiming. Instructions were also accompanied by attention checks to ensure compliance. Each attention check required participants to read the instructions and click the correct icon before advancing. If participants were idle for 20 sec, they received a 10-sec warning prompting them to make an action, after which they would time out of the experiment. Compensation ($12.67 total; $9.50/hr) was provided for participants who successfully completed the experiment or were unable to continue because of technical error (e.g., server crashing).

Feedback was perturbed (in this case, rotated) by ±10°, ±35°, or ±65°. Perturbed trials (75 trials/rotation) were interspersed with nonperturbed trials (75 trials) for 525 total trials. Sensory uncertainty was imparted by providing endpoint feedback as either a single cursor or a five-cursor cloud, located at a distance equivalent to the target distance and at an angle equivalent to the last recorded cursor position (plus any relevant rotation). Figure 3 provides examples of a +10° and −10° rotation with feedback provided as a single cursor (Figure 3B) and a cursor cloud (Figure 3C), respectively. The cursor cloud was generated by randomly sampling a two-dimensional, zero-mean Gaussian distribution with a standard deviation of (target distance × 10° × π/180°) ÷ 1.5 px. The arc distance, target distance × 10° × π/180°, was used as a near approximation of the straight-line distance between the true cursor position and the rotated cursor position for a 10° rotation. The standard deviation was scaled so that at least 80% of cursor cloud positions fell within this distance to aid in visual detection of the perturbation.
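
A sketch of this cursor cloud construction in JavaScript, drawing Gaussian samples via the Box–Muller transform; the function names are our own and the study's implementation may differ:

```javascript
const TARGET_DISTANCE_PX = 300; // px, from the paradigm
// (target distance x 10 deg x pi/180) / 1.5, approximately 34.9 px
const CLOUD_SD_PX = (TARGET_DISTANCE_PX * (10 * Math.PI) / 180) / 1.5;

// One zero-mean Gaussian sample via the Box-Muller transform
function gaussian(sd) {
  const u = 1 - Math.random(); // (0, 1], keeps Math.log finite
  const v = Math.random();
  return sd * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// n cursors offset from the (possibly rotated) endpoint by 2-D Gaussian noise
function cursorCloud(endpointX, endpointY, n = 5, sd = CLOUD_SD_PX) {
  return Array.from({ length: n }, () => ({
    x: endpointX + gaussian(sd),
    y: endpointY + gaussian(sd),
  }));
}
```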

Figure 4A provides an overview of the timing information for a representative trial. Trials began when the cursor and target appeared on the user interface. The cursor disappeared from view when the cursor distance exceeded 7% of the target distance, and feedback was shown when the cursor distance exceeded 95% of the target distance. Participants were not required to dwell on the target, but simply to pass through it, after which feedback was displayed for 1000 msec. RT was logged as the duration between the appearance of the cursor and target and the detection of the first input. Movement time was the duration between the first input and the last input when the cursor distance exceeded the aforementioned 95% target distance threshold. Movement times greater than 90 msec prompted a "too slow" message, and movement times less than 30 msec prompted a "too fast" message. Movement time restrictions were set to ensure that participants were not able to receive significant concurrent feedback (primarily visual or proprioceptive with a slower movement; Zelaznik, Hawkins, & Kisselburgh, 1983). Trials outside the 30- to 90-msec range were repeated.
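
The movement-time gate can be summarized as a small helper; this is a sketch, not the platform's actual code:

```javascript
// Movement times outside 30-90 msec trigger a warning and the trial repeats.
function movementTimeFeedback(movementTimeMs) {
  if (movementTimeMs > 90) return 'too slow'; // invites concurrent feedback
  if (movementTimeMs < 30) return 'too fast';
  return 'ok'; // trial accepted
}
```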

Figure 4.

Trial timing information: (A) The timing information for a typical trial. RT was the time between the appearance of the cursor and target and the first movement. Movement time was the time between the first movement and the point at which the cursor exceeded 95% of the target distance. Upon movement completion, feedback was displayed for 1000 msec. There was a 500-msec interval between each trial. (B) A more detailed look at the intermediary cursor positions captured during movement time. Intermediary cursor positions were not shown to the participant. Because of the delayed mouse event sampling, feedback was projected at the same angle as the last detected cursor position, but at a distance equivalent to the target distance.


Trials were separated by a 500-msec intertrial interval, after which the cursor reset to the start and the next target appeared. Logging of cursor position was event-based and triggered by the mousemove event listener. As indicated in the documentation for the mousemove specification (Kacmarcik & Leithead, 2022), the "frequency rate of events while the pointing device is moved is implementation-, device-, and platform-specific." The cursor sampling events for a typical trial can be seen in Figure 4B.
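
A minimal event-based logger illustrating this approach (names are ours); in the browser, the handler would be attached with window.addEventListener('mousemove', recorder.onMove):

```javascript
// Collects one {x, y, t} sample per mousemove event for the current trial.
// How often the handler fires is implementation-, device-, and platform-specific.
function makeTrialRecorder() {
  const samples = [];
  return {
    samples,
    onMove(e) {
      samples.push({ x: e.clientX, y: e.clientY, t: e.timeStamp });
    },
  };
}
```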

Experimental Protocol

Upon providing consent, participants viewed a short instruction video in which they were shown how to set up their workspace. They were asked to place their laptop on a flat surface and to seat themselves directly in front of it. They were then instructed to ensure that they had full wrist support and could fully access the trackpad with their dominant hand. They were shown examples of good and bad setups and completed a walkthrough of the task. Material covered included the elements of the user interface, examples of successful/unsuccessful trials, and all potential feedback representations (levels of sensory uncertainty, movement speed warnings, attention checks, etc.).

The experimental protocol took place in phases (Figure 5A) and followed the methodology outlined in Wei and Körding (2010) and the trial structure outlined in Tsay, Avraham, and colleagues (2021). First, participants completed a familiarization phase consisting of 10 trials with no feedback, followed by a baseline consisting of 20 trials with veridical feedback. Then, participants started the evaluation trials with either the single cursor or cursor cloud as feedback. The evaluation was broken up into three blocks of 175 trials each (525 trials total). Across all trials, each of the rotation sizes (±10°, ±35°, ±65°), including 0° (unperturbed trials), was repeated 75 times in a random order. We focused on three rotation sizes (±10°, ±35°, and ±65°) to reduce noncompliance and attention lapses associated with the longer protocols necessary to evaluate additional perturbation sizes.
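
The evaluation schedule (75 repetitions each of the six rotations plus 0°, shuffled, then split into three 175-trial blocks) can be sketched as follows; the shuffle method and names are our assumptions:

```javascript
const ROTATIONS_DEG = [-65, -35, -10, 0, 10, 35, 65]; // 0 = unperturbed

function makeEvaluationSchedule(repsPerRotation = 75, blockSize = 175) {
  // 7 rotations x 75 repetitions = 525 trials
  const trials = ROTATIONS_DEG.flatMap(r => Array(repsPerRotation).fill(r));
  // Fisher-Yates shuffle for a random trial order
  for (let i = trials.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [trials[i], trials[j]] = [trials[j], trials[i]];
  }
  // Split into consecutive blocks
  const blocks = [];
  for (let i = 0; i < trials.length; i += blockSize) {
    blocks.push(trials.slice(i, i + blockSize));
  }
  return blocks;
}
```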

Figure 5.

(A) The experimental design. All participants underwent a familiarization block followed by a baseline, evaluation, and postevaluation block. Steps 1–3 were repeated for both sensory uncertainty conditions separated by a washout task. The order of single cursor and cursor cloud sensory uncertainty conditions were counterbalanced. The washout task (B) required participants to search a grid of purple circles for targets (purple circles with orange centers). Participants had to use keyboard inputs and velocity control to drive the cursor spotlight across a black mask to light up small portions of the grid (in this figure, the cursor spotlight is revealing a target). The use of keyboard arrows as an input prevented any additional cursor/trackpad calibration outside the condition blocks. The underlying grid is shown on the right for clarity and was not visible to the participants.


Following the evaluation, participants performed 10 additional trials with no feedback (postevaluation). A washout task separated the two conditions. Keyboard inputs served as the only input in the washout task to prevent any additional cursor/trackpad movement calibration. In this task, the cursor functioned as a spotlight that would light up portions of a grid, and participants utilized velocity control to direct the spotlight around the grid using the up and down arrows. The objective was to count all the circles with orange centers (targets) before the trial time expired. Participants were asked to report the number of targets they located, after which they received a score based on accuracy. Figure 5B depicts the components of this task. All participants completed a short training on the washout task followed by three search trials. The washout task was not analyzed.

The protocol was then repeated from the baseline phase (Figure 5A, Step 1) to the postevaluation phase (Figure 5A, Step 3) with the remaining sensory uncertainty condition. Participants were given intermittent reminders to focus on the screen and not their hand. The protocol lasted, on average, 80 min. All participants experienced both sensory uncertainty conditions, with condition order counterbalanced across participants. In total, our experiment consisted of two sensory uncertainty conditions and three rotation sizes, resulting in a 2 × 3 within-subject experimental design. Total time for data collection was 12 hr 37 min.

Analysis

Compliance Check

Online studies are particularly susceptible to issues with comprehension, compliance, and motivation because of the limited in-person oversight by researchers. Our experiment implemented several techniques to guard against these challenges. We included survey questions that confirmed whether participants complied with setup instructions, including ensuring wrist/forearm support, keeping their hand/arm in a consistent position, and using only their dominant hand throughout the duration of the task. We tracked whether participants had more than two incorrect attention check responses per sensory uncertainty condition, and we also analyzed the time spent on the video instructions and tracked the participants who did not complete the entirety of the video. Of our 49 participants, five did not watch the video in its entirety, but all participants showed compliance according to the setup questionnaire and passed our attention requirement.

We also analyzed participant data post hoc and eliminated participants based on two criteria: (1) those who demonstrated a positive correlation (Pearson's correlation coefficient > .10) between hand deviations and rotations (e.g., a positive rotation seen on Trial k led to a positive hand deviation from Trial k to Trial k + 1), which contradicts the concept of error correction, and (2) those who did not have > 90% of cursor trajectories within the correct quadrant (Quadrant I for right-handed participants or Quadrant II for left-handed participants). Criterion 1 was inspired by Cesanek and colleagues (2021), who ensured participants displayed at minimum a mild positive correlation between anticipatory forces and the simulated weights lifted at the close of the training session. Two participants failed Criterion 1, and one participant (who also failed Criterion 1) failed Criterion 2, resulting in the elimination of two participants. Thus, of the 49 participants who completed the task, two were eliminated, leaving 47 for analysis.
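
The two elimination criteria reduce to compact checks. A sketch in JavaScript (the study's analyses were run in R/Python), with names of our choosing:

```javascript
// Pearson's correlation coefficient between two equal-length arrays
function pearson(x, y) {
  const n = x.length;
  const mx = x.reduce((a, b) => a + b, 0) / n;
  const my = y.reduce((a, b) => a + b, 0) / n;
  let sxy = 0, sxx = 0, syy = 0;
  for (let i = 0; i < n; i++) {
    sxy += (x[i] - mx) * (y[i] - my);
    sxx += (x[i] - mx) ** 2;
    syy += (y[i] - my) ** 2;
  }
  return sxy / Math.sqrt(sxx * syy);
}

// Criterion 1: positive rotation/hand-deviation correlation (> .10) fails.
// Criterion 2: <= 90% of trajectories in the correct quadrant fails.
function failsComplianceCriteria(rotations, handDeviations, quadrantFraction) {
  const failsCorrelation = pearson(rotations, handDeviations) > 0.10;
  const failsQuadrant = quadrantFraction <= 0.90;
  return failsCorrelation || failsQuadrant;
}
```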

Motor Adaptation Analyses

Our motor adaptation analyses focused on two primary dependent variables: hand deviation and adaptation rate, as used in multiple studies (Tsay, Avraham, et al., 2021; Johnson et al., 2014; Wei & Körding, 2009, 2010). Hand deviation is the difference in hand angle (°) from Trial k to Trial k + 1 (error compensation), and adaptation rate is defined as the slope of the linear regression of hand deviation with respect to the rotation from Trial k. Note that when a rotation was added to the endpoint feedback in Trial k, the general response was a corrective hand deviation in the opposite direction in Trial k + 1. Thus, a more negative slope indicated faster adaptation to visual perturbations, and the larger the perturbation, the larger the ensuing error correction. We inverted the adaptation rate and reported it as a positive quantity, and unless otherwise specified, variables are reported as mean ± standard error of the mean.
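
Adaptation rate as defined above is the (inverted) ordinary least squares slope of hand deviation on rotation. A sketch, in JavaScript for consistency with the platform code rather than the R code actually used; names are ours:

```javascript
// OLS slope of hand deviation (Trial k + 1) on rotation (Trial k),
// inverted so that a positive value means corrective adaptation.
function adaptationRate(rotations, handDeviations) {
  const n = rotations.length;
  const mx = rotations.reduce((a, b) => a + b, 0) / n;
  const my = handDeviations.reduce((a, b) => a + b, 0) / n;
  let sxy = 0, sxx = 0;
  for (let i = 0; i < n; i++) {
    sxy += (rotations[i] - mx) * (handDeviations[i] - my);
    sxx += (rotations[i] - mx) ** 2;
  }
  return -(sxy / sxx);
}
```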

We conducted a repeated-measures ANOVA on the adaptation rate using R (v. 4.1.1) and the afex package (https://afex.singmann.science/; v. 1.2-1). Effect sizes were calculated as outlined in Lakens (2013). Our ANOVA investigated the effects of Rotation Size and Sensory Uncertainty Condition across participants. We also included Baseline Motor Variance as a covariate to fully examine our first hypothesis. In addition, we selected Biological Sex as a second covariate to promote the investigation of potential population effect modifiers; it was selected because our sample population had near-equal numbers of male and female participants. Post hoc tests were conducted for both factors and covariates, including significant interactions. Additional factors included in Table 3 were not considered in the analysis because of sample and effect size concerns; we provide this accompanying information to give a fuller picture of the participant pool and as a reference for future research. We believe that demographics are an important consideration when conducting scientific research and that a more directed effort should be made to equitably represent all members of the general population. Note that the elimination of participants based on compliance criteria did not impact the statistical significance of our results.

In our analysis, rotation size was coded by magnitude (10°, 35°, or 65°) and sensory uncertainty condition as single cursor or cursor cloud. Baseline variance was calculated as the standard deviation of hand angles for all trials following 0° rotation trials in the evaluation phase, following Wei and Körding (2009). Baseline motor variance in the ANOVA was then categorized as "Low" or "High" based on whether the variance was less than the median or greater than or equal to the median of the population baseline variance. We elected to evaluate naturally occurring motor variance (independent of any experimental manipulations) because motor variance cannot easily be engineered experimentally, as doing so typically relies on altering the task environment. Moreover, inducing uncertainty in a well-developed internal model, such as directing a cursor on a computer screen, is a challenge. Biological sex was either female or male.
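
The baseline-variance median split can be sketched as follows; a population standard deviation is assumed here (the text does not specify the estimator), and names are ours:

```javascript
// Population standard deviation of a sample of hand angles
function stdDev(xs) {
  const m = xs.reduce((a, b) => a + b, 0) / xs.length;
  return Math.sqrt(xs.reduce((a, x) => a + (x - m) ** 2, 0) / xs.length);
}

// "Low" if below the population median, "High" otherwise
function labelBaselineVariance(perParticipantSds) {
  const sorted = [...perParticipantSds].sort((a, b) => a - b);
  const mid = sorted.length / 2;
  const median = sorted.length % 2
    ? sorted[Math.floor(mid)]
    : (sorted[mid - 1] + sorted[mid]) / 2;
  return perParticipantSds.map(sd => (sd < median ? 'Low' : 'High'));
}
```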

As with adaptation rate, we conducted a repeated-measures ANOVA on hand deviation investigating the effects of Rotation Size and Sensory Uncertainty Condition across participants. Baseline variance and biological sex were again selected as covariates. Post hoc pairwise comparisons were conducted on significant effects and interactions, and Bayes factors were utilized to quantify the strength of our key null findings (Wetzels et al., 2011). We then fit hand deviation (Trial k + 1) versus rotation in Trial k with a sigmoidal function. Parameters were selected to minimize the condition number of the covariance matrix to prevent overfitting. The central purpose of this curve fit was to inform the location of a theoretical "crossover" point.

Last, we ran regression analyses on hand deviation using the SciPy library (Virtanen et al., 2020; v. 1.7.3) and its stats subpackage with Python (Python Software Foundation, https://www.python.org/; v. 3.9.1). Regression analyses included evaluating the relationship between hand deviation and rotation size in Trial k. Assumptions of normality were verified through the Shapiro–Wilk test. If the normality assumption was met, paired t tests were conducted to compare results for the two sensory uncertainty conditions (α = .05); otherwise, the Wilcoxon signed-ranks test was used. Our linear regression analysis followed Wei and Körding (2010).

Data Quality Analysis

In addition to our motor adaptation analysis, we also used our data set to examine the quality of data collected through an online paradigm. For this analysis, we were interested in the effect of running an 80-min experiment online. Thus, we evaluated the temporal behavior of our dependent variable, hand deviation. We conducted an additional ANOVA to evaluate the effect of Trial and Trial Cluster on hand deviation. Rotation sizes were again grouped by magnitude, and hand deviations were adjusted by sign accordingly (i.e., negative rotation sizes in Trial k resulted in a sign switch for the respective hand deviation in Trial k + 1). Trial clusters consisted of 50 trials each, binned temporally. We also examined between- and within-subject variation and the quality of movement information collected per trial through the number of mouse sampling events (per movement) and the sampling rate. The number of mouse sampling events was the quantity of cursor positions captured using the mousemove event listener (Kacmarcik & Leithead, 2022) during each trial and allowed us to better understand trajectory information. The sampling rate quantified the mouse event updates per second and differed across participants, as experimental hardware cannot be standardized; it was estimated as the number of sampled mouse events divided by the movement time per trial (Hz). Last, we examined trajectory information by analyzing the linearity of each "swipe," fitting the cursor positions captured in each trial with a linear regression and capturing the coefficient of determination.
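
The two per-trial quality metrics reduce to short calculations; a sketch with names of our choosing:

```javascript
// Sampling rate: sampled mouse events divided by movement time (Hz)
function samplingRateHz(nEvents, movementTimeMs) {
  return nEvents / (movementTimeMs / 1000);
}

// Coefficient of determination (R^2) for an OLS fit of y on x; for simple
// linear regression this equals the squared Pearson correlation.
function rSquared(points) {
  const n = points.length;
  const mx = points.reduce((a, p) => a + p.x, 0) / n;
  const my = points.reduce((a, p) => a + p.y, 0) / n;
  let sxy = 0, sxx = 0, syy = 0;
  for (const p of points) {
    sxy += (p.x - mx) * (p.y - my);
    sxx += (p.x - mx) ** 2;
    syy += (p.y - my) ** 2;
  }
  return (sxy * sxy) / (sxx * syy);
}
```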

Results from our online experiment will cover the (1) replication of in-laboratory findings, (2) effect of induced sensory uncertainty on the REH, and (3) examination of data quality for our 80-min online VMR experiment.

Replication of In-laboratory Results

Results of our ANOVA for adaptation rate yielded a significant effect for both experimental factors, Rotation Size, F(1.14, 49.16) = 53.73, p < .001, ηP2 = .555, 95% CI [0.44, 1.00], ηG2 = .203, and Sensory Uncertainty Condition, F(1, 43) = 6.31, p = .016, ηP2 = .128, 95% CI [0.01, 1.00], ηG2 = .017, as well as two covariates—Biological Sex, F(1, 43) = 7.67, p = .008, ηP2 = .151, 95% CI [0.02, 1.00], ηG2 = .056, and Baseline Motor Variance, F(1, 43) = 9.09, p = .004, ηP2 = .174, 95% CI [0.04, 1.00], ηG2 = .067. Our analysis also showed significant interactions between Sensory Uncertainty Condition and Rotation Size, F(1.13, 48.74) = 8.88, p = .003, ηP2 = .171, 95% CI [0.06, 1.00], ηG2 = .038, and Baseline Motor Variance and Sensory Uncertainty Condition, F(1, 43) = 5.56, p = .023, ηP2 = .114, 95% CI [0.01, 1.00], ηG2 = .015. Table 4 shows our ANOVA results.

Table 4.

ANOVA Results for Adaptation Rate and Hand Deviation


Figure 6 highlights the replication of in-laboratory results (Johnson et al., 2014; Wei & Körding, 2010; Burge et al., 2008; Izawa et al., 2008; Robinson et al., 2006) with our online experimental platform. The main effect of Sensory Uncertainty was driven by the 10° rotation, t(43) = 3.050, p = .004, 95% CI [0.017, 0.078], Hedges's gav = 0.5537, as evidenced by pairwise comparisons between sensory uncertainty condition and rotation size. For the 10° rotation, we found average adaptation rates of 0.138 ± 0.013 and 0.091 ± 0.012 for our single cursor and cursor cloud conditions, respectively, verifying our hypothesis that trial-by-trial adaptation is attenuated by increased sensory uncertainty at small rotation sizes (Figure 6A). After controlling for individual differences, the likelihood that participants had a higher adaptation rate for the single cursor condition than the cursor cloud condition was 67.733% (common language effect size [CLES]).

Figure 6.

Replication of in-laboratory results: Results demonstrated that (A) increased sensory uncertainty attenuates adaptation rate at small (10°) rotation sizes, t(43) = 3.050, p = .004, 95% CI [0.017, 0.078], Hedges's gav = 0.5537, and (B) lower baseline variance (motor variability) decreases adaptation rate for the single cursor condition, t(43) = −3.640, p < .001, 95% CI [−0.068, −0.020], Hedges's gs = 1.037. Both findings follow predictions outlined by the Bayesian framework.


Similarly, pairwise comparisons between the baseline motor variance and the sensory uncertainty condition showed that the main effect of Baseline Variance was driven by the single cursor condition, t(43) = −3.640, p < .001, 95% CI [−0.068, −0.020], Hedges's gs = 1.037. For the single cursor condition, we found average adaptation rates of 0.063 ± 0.009 and 0.107 ± 0.008 for our low and high baseline motor variance groups, respectively, verifying our hypothesis that a lower baseline variance (motor variability) when performing the cursor-to-target task led to a lesser willingness to adapt (Figure 6B). There was a 77.356% (CLES) chance that a randomly sampled individual with low baseline motor variance had a lower adaptation rate than a randomly sampled individual with high baseline motor variance. Last, we found a significant effect of biological sex on adaptation rate: adaptation rate was 0.091 ± 0.007 for females and 0.064 ± 0.007 for males. The discussion will cover the interpretation and implications of this finding.

Effect of Induced Sensory Uncertainty on the REH Framework

Our results also verified the sigmoidal shape predicted by the REH (Wei & Körding, 2009) for the single cursor condition and verified our second hypothesis that this sigmoidal behavior holds when sensory uncertainty is added. Figure 7A shows the average hand deviation in Trial k + 1 with respect to rotation in Trial k across participants, and Figure 7B provides the best-fit curves. We verified the sublinear behavior through a Wilcoxon signed-ranks test, which demonstrated that the slope of the 10° rotation was significantly different from the slope of all three rotations (10°, 35°, and 65°) for both the single cursor condition, W(46) = 1077.0, p < .001, rrb = 0.9096, and the cursor cloud condition, W(46) = 934.0, p < .001, rrb = 0.6560 (Figure 7C). The average slope for the 10° rotation was 0.140 ± 0.014 for the single cursor condition and 0.092 ± 0.012 for the cursor cloud condition. The average slope for all three rotation sizes was 0.050 ± 0.004 for the single cursor and 0.043 ± 0.009 for the cursor cloud. There was a CLES of 80.67% and 69.71% for the single cursor and cursor cloud conditions, respectively, indicating a higher probability that the 10° slope is greater than the slope of all three rotations.

Figure 7.

Effects of induced sensory uncertainty on REH framework: Results in (A) verified that the characteristic sigmoidal shape for hand deviation is shared by both sensory uncertainty conditions, and the corresponding curve fit can be found in (B). (C) provides the regression analyses results. Markedly, the 10° slope is significantly different from the 10°, 35° slope for the single cursor condition, W(46) = 993.0, p < .001, rrb = 0.7606, CLES = 70.484%, but not for the cursor cloud condition, t(46) = 1.653, p = .105, 95% CI [−0.0033, 0.0353], Hedges's gav = 0.2691, CLES = 59.627%. This result suggests that increased sensory uncertainty decreases the likelihood that an error is seen as irrelevant, leading to a more linear response for larger rotation sizes with respect to hand deviation.


Results of additional linear regression analyses confirmed our third hypothesis that the linear relationship between hand deviation and rotation size was extended when sensory uncertainty was higher. For this analysis, we performed a paired t test or Wilcoxon signed-ranks test (dictated by assumptions of normality) between the slopes of the two smallest rotation sizes (10° and 35°) and all three rotation sizes (10°, 35°, and 65°), as well as between the two smallest rotation sizes (10° and 35°) and the 10° rotation size alone. The slope of the two smallest rotation sizes (10° and 35°) and the slope of all rotation sizes (10°, 35°, 65°) were significantly different for both sensory uncertainty conditions (single cursor: W(46) = 1109.0, p < .001, rrb = 0.9663; cursor cloud: W(46) = 1104.0, p < .001, rrb = 0.9574), demonstrating sublinear behavior between the 35° and 65° rotations. The CLES was 70.711% and 70.258% for the single cursor and cursor cloud conditions, respectively, indicating the probability that the slope of the two smallest rotation sizes (10° and 35°) is greater than the slope of all three rotation sizes. Although there was also a difference between the slope for the two smallest rotation sizes (10°, 35°) and the 10° rotation for the single cursor, W(46) = 993.0, p < .001, rrb = 0.7606, CLES = 70.484%, there was no significant difference for the cursor cloud, t(46) = 1.653, p = .105, 95% CI [−0.0033, 0.0353], Hedges's gav = 0.2691, CLES = 59.627%, indicating more linear behavior in the cursor cloud condition. These results support the REH, which states that increased sensory uncertainty decreases the likelihood that an error is seen as irrelevant, leading to a more linear response with respect to hand deviation. Figure 7C summarizes the results of the linear regression analyses.

Last, we found that there was no significant evidence in support of a “crossover” at the larger rotation sizes. We conducted an ANOVA on our hand deviation metrics (see Table 4 for full results). Similarly, there was only a statistically significant difference between both sensory uncertainty conditions for our 10° rotation size, t(43) = 3.048, p = .004, 95% CI [0.158, 0.792], Hedges's gav = 0.5329, CLES = 67.009%, but not our 35°, t(43) = 0.050, p = .960, 95% CI [−0.360, 0.432], Hedges's gav = 0.0252, CLES = 51.065%, or 65°, t(43) = −0.919, p = .363, 95% CI [−0.277, 0.5543], Hedges's gav = 0.0827, CLES = 53.790%, rotation sizes. We used Bayes factors (BF0+) to quantify the strength of our key null findings and found that for both the 35° (BF0+ = 6.2178) and 65° (BF0+ = 5.1675) rotation sizes, there was substantial evidence (Wetzels et al., 2011) for the null hypothesis (no significant difference). Thus, our results indicate the lack of a crossover.

Data Quality for an Online VMR Experiment

Our examination of data quality for our online experiment highlighted some unique challenges. Our experiment had two sensory uncertainty conditions and three perturbation sizes, resulting in an 80-min experiment. Such a duration is common for in-laboratory experiments, but limited literature exists on online experiments exceeding 60 min, with the average ProA experiment lasting approximately 10 min (Tulloch, 2021). Therefore, it was important to understand any effect of time (i.e., trial number) on our dependent variables. Figure 8A depicts the moving average across participants of the hand deviation over the course of 525 evaluation trials. Hand deviations are shown separately with respect to the rotation seen on Trial k, and the sensory uncertainty conditions are separated into two plots (Figure 8A, left shows the single cursor condition; Figure 8A, right shows the cursor cloud condition).

Figure 8.

Examination of the effect of trial and trial cluster on hand deviation: (A) shows the moving average of hand deviation across participants over the course of the 525 experimental trials for the single cursor (left) and cursor cloud (right) conditions. There was a significant effect for both Trial, F(148, 6808) = 1.25, p = .023, ηP2 = .026, 95% CI [0.03, 1.00], ηG2 = .004, and Trial Cluster, F(1.65, 75.75) = 6.28, p = .005, ηP2 = .120, 95% CI [0.03, 1.00], ηG2 = .009. (B) and (C) provide the moving average of hand deviation with respect to rotation iteration (75/rotation) for individual participants with low variability and high variability, respectively.


Analysis of hand deviation over trials revealed a potential challenge of administering a lengthy online experiment. An ANOVA indicated a significant effect of Trial, F(148, 6808) = 1.25, p = .023, ηP2 = .026, 95% CI [0.03, 1.00], ηG2 = .004, and Trial Cluster, F(1.65, 75.75) = 6.28, p = .005, ηP2 = .120, 95% CI [0.03, 1.00], ηG2 = .009, on hand deviation. Hand deviations for each rotation size were grouped into three trial clusters comprising the first, middle, and last third of all trials. Cluster 1 yielded an average hand deviation of 2.44 ± 0.20°, whereas Cluster 3 yielded 2.07 ± 0.14°. This trend is visually evident in Figure 8. Figure 8B and Figure 8C show hand deviation versus rotation iteration for individual participants with low and high variability, respectively, and display a similar trend. Note that because Figure 8B and Figure 8C depict individual participants, trials are plotted with respect to rotation iteration (75 trials/rotation) rather than with respect to trial over the course of the entire experiment as in Figure 8A. We also noted that hand deviation showed substantial variability both between and within participants. Table 5 provides the statistics (median ± interquartile range [IQR]) for hand deviation for all participants as well as for our low- and high-variability participants. This information should serve as a useful reference for other studies.
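
The trial-cluster grouping can be sketched as follows, using simulated per-trial deviations that attenuate over a session (the decay constants and noise level are illustrative assumptions, not fitted to our data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-trial hand deviation that attenuates over a session
# (illustrative values only).
n_trials = 75  # trials per rotation size in this experiment
deviation = 2.5 * np.exp(-np.arange(n_trials) / 150) + rng.normal(0, 0.5, n_trials)

# Trial clusters: first, middle, and last third of the trials.
clusters = np.array_split(deviation, 3)
cluster_means = [c.mean() for c in clusters]
print([round(m, 2) for m in cluster_means])
```

The first cluster's mean exceeds the last cluster's, the same attenuation pattern reported for Clusters 1 and 3 above.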

Table 5.

Hand Deviation in Degrees (Median ± IQR)

Rotation    All Participants                  Low-Variability Participants      High-Variability Participants
            Single Cursor    Cursor Cloud     Single Cursor    Cursor Cloud     Single Cursor    Cursor Cloud
−65         2.60 ± 6.29      2.76 ± 6.60      2.48 ± 3.73      3.16 ± 4.07      3.32 ± 12.46     9.44 ± 15.23
−35         2.59 ± 6.05      2.42 ± 6.35      1.81 ± 3.87      3.32 ± 4.00      3.58 ± 11.18     6.58 ± 10.96
−10         0.98 ± 6.09      0.53 ± 6.09      −0.92 ± 3.47     0.27 ± 4.60      1.78 ± 13.12     0.14 ± 10.66
10          −1.37 ± 6.01     −0.99 ± 6.11     −0.33 ± 3.68     −1.38 ± 4.20     −4.00 ± 11.24    −4.06 ± 10.25
35          −2.38 ± 6.30     −2.09 ± 6.38     −1.55 ± 3.84     −2.09 ± 4.83     −2.28 ± 10.66    −4.78 ± 9.86
65          −2.24 ± 6.52     −2.45 ± 6.41     −1.03 ± 2.92     −2.84 ± 4.78     −2.24 ± 11.25    −4.23 ± 11.57

To evaluate the data quality collected per trial, we examined the number of mouse events detected per movement for each participant. Figure 9A provides two example trials, one with rich cursor trajectory information (left) and one with poor cursor trajectory information (right). Trackpad movements were constrained to 30–90 msec both to allow a natural swipe and to deny participants time to gather visual feedback mid-movement, because our paradigm did not allow us to block the participants' hand from view. There were 4.0 ± 2.0 (median ± IQR) samples collected per trial for both sensory uncertainty conditions, with 76.30% of trials yielding between 3 and 5 mouse events (Figure 9B). The sampling rate was 70.42 ± 25.26 Hz (median ± IQR) with a range of 13.46–207.72 Hz, and 43.27% of sampling rates fell between 55 and 65 Hz (Figure 9C). Movement times (median ± IQR) were 50.30 ± 21.10 msec for the single cursor condition and 50.50 ± 20.90 msec for the cursor cloud condition.
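
The per-trial sample count and effective sampling rate can be derived from mousemove event timestamps as sketched below (the `movement_stats` helper and the example timestamps are hypothetical, chosen to resemble a typical ~50-msec, four-event swipe):

```python
import numpy as np

def movement_stats(timestamps_ms):
    """Sample count, movement time (msec), and effective sampling
    rate (Hz) from a trial's mousemove event timestamps."""
    t = np.asarray(timestamps_ms, dtype=float)
    n_samples = t.size
    movement_time = t[-1] - t[0]
    # n_samples - 1 inter-event intervals span the movement time.
    rate_hz = (n_samples - 1) / (movement_time / 1000.0)
    return n_samples, movement_time, rate_hz

# Hypothetical ~50-msec swipe sampled by four mousemove events.
n, mt, hz = movement_stats([0.0, 16.7, 33.4, 50.1])
print(n, mt, round(hz, 1))  # 4 events at roughly 60 Hz
```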

Figure 9.

(A) depicts a trial with high sampling rate (left) and low sampling rate (right). Histograms in (B) and (C) depict the frequency of the number of mouse events sampled per movement and the sampling rate (Hz), respectively, per trial across all participants.


Next, we examined cursor trajectories to visualize movement information based on the approximately four cursor positions available per trial. Figure 10 shows cursor trajectories for four example participants. For the most part, participants behaved as expected and had cursor trajectories resembling the “Typical Subject,” demonstrating the feasibility of administering cursor-to-target tasks online. All participants had average coefficients of determination > 0.99 aside from two participants, both of whom still had coefficients of determination > 0.986. Although only a limited number of cursor positions were available to compute the coefficient of determination, the swipe movements generally followed a straight trajectory, indicating that most participants were likely not re-aiming. Notably, a few trials show shorter cursor paths, likely caused by a small number of detected mouse events (i.e., a low sampling rate).
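
The straightness check can be sketched as a coefficient of determination for a line fit to the cursor path (a minimal illustration; the coordinate values below are hypothetical, not recorded trajectories):

```python
import numpy as np

def trajectory_r2(x, y):
    """Coefficient of determination for a straight-line fit to a
    cursor path; values near 1 indicate a straight swipe."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m, b = np.polyfit(x, y, 1)
    residual = y - (m * x + b)
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

straight = trajectory_r2([0, 1, 2, 3], [0.0, 1.02, 1.98, 3.01])
curved = trajectory_r2([0, 1, 2, 3], [0.0, 1.5, 1.5, 0.2])
print(round(straight, 4), round(curved, 4))
```

A straight swipe scores near 1, whereas a hooked or re-aimed path scores far lower, which is why per-participant averages > 0.99 argue against re-aiming.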

Figure 10.

The cursor trajectories of a (A) typical participant, (B) participant with a coefficient of determination that was less than 0.99, (C) participant with acceptable variation, and (D) participant with unacceptable variation. Cursor trajectories are shown for all trials of the single cursor condition.


Interestingly, this analysis revealed some unexpected variation in cursor swipe direction among our participants. It is probable that some of the swipes outside the expected quadrant (Quadrant I for right-handed participants and Quadrant II for left-handed participants) occurred in response to perturbations or were accidental. However, online paradigms carry the added challenge of enforcing compliance, and it is vital to identify participants who are unable to comprehend and/or comply with experiment instructions. The “Acceptable Variation” participant is an example in which < 10% of cursor swipes fell outside the target quadrant. The “Unacceptable Variation” example shows the case where > 10% of cursor swipes fell outside the target quadrant. Further analysis revealed that this participant completed one sensory uncertainty condition successfully (< 10% of cursor swipes outside the target quadrant) but displayed errant behavior in the second condition, demonstrating noncompliance. Behavior such as this emphasizes the need for clear comprehension and compliance checks.
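
A compliance screen of this kind can be sketched as follows (the function name and the example swipe vectors are ours; the quadrant convention follows the text above):

```python
import numpy as np

def quadrant_violation_rate(dx, dy, handedness="right"):
    """Fraction of swipes outside the expected quadrant
    (Quadrant I for right-handed, Quadrant II for left-handed)."""
    dx, dy = np.asarray(dx, float), np.asarray(dy, float)
    if handedness == "right":
        in_quadrant = (dx > 0) & (dy > 0)  # Quadrant I
    else:
        in_quadrant = (dx < 0) & (dy > 0)  # Quadrant II
    return 1.0 - in_quadrant.mean()

# Hypothetical right-handed participant: 1 of 10 swipes errant.
dx = [1, 2, 1, 3, 2, 1, 2, 1, -2, 2]
dy = [1, 1, 2, 1, 1, 2, 1, 1, 1, 2]
rate = quadrant_violation_rate(dx, dy)
print(rate)
```

Comparing `rate` against a preregistered threshold (10% in our case) flags participants like the “Unacceptable Variation” example for exclusion.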

The current study demonstrated that data collected from an online visuomotor task were sufficient to identify motor learning phenomena evident in in-laboratory studies. Our adaptation rate for the 10° rotation size was indeed attenuated by sensory uncertainty and, for the single cursor condition, by lower motor variability. A potential confounder of our study is that, although motor adaptation experiments often exclusively utilize small rotation sizes to isolate implicit adaptation processes, we introduced larger rotation sizes that were more likely to create consciously perceptible errors. As a result, we could have inadvertently introduced explicit learning processes into the motor adaptation task. Importantly, the largest average hand deviations we saw were less than |3°|, which aligns with existing work for both online (Tsay, Lee, et al., 2021) and in-laboratory (Tsay, Haith, Ivry, & Kim, 2022) studies on implicit learning. Explicit learning often results in larger corrective movements and is primarily responsible for driving adaptation for larger rotation sizes (> 30°; Bond & Taylor, 2015). Because our hand deviations were relatively small and saturated for larger rotation sizes, we submit that participants were not primarily utilizing active compensation.

An alternative interpretation of our results could be that the use of a five-cursor cloud changed participants' ability to perceive error. For example, if feedback position were estimated as the cursor closest to the target, there would be an alternative explanation for the decrease seen in adaptation rate. However, Burge and colleagues (2008) conducted a visual discrimination experiment showing a linear relationship between the amount of blur and the ability to localize it visually (just noticeable difference), implying that participants were not, on average, estimating cursor position from the edge of the feedback blur but rather from its center. Therefore, it is probable that sensory uncertainty is the driver of the observed adaptation behavior.

Our results examining the REH under induced sensory uncertainty bolstered findings from Tsay, Avraham, and colleagues (2021) indicating that motor adaptation behavior follows the sigmoidal shape predicted by the REH under both single cursor and cursor cloud conditions. We also demonstrated that increased sensory uncertainty extends the linear relationship between hand deviation and rotation. However, both Tsay, Avraham, and colleagues (2021) and the current study were unable to identify a “crossover” point between the two sensory uncertainty conditions and found that hand deviation converged for both conditions at larger rotation sizes. Tsay, Avraham, and colleagues (2021) surmised that their 24 participants did not provide enough statistical power to detect the crossover phenomenon. Although our study doubled this sample size (n = 47; 49 total), we came to a similar conclusion. A power analysis indicates that 15,174 participants would be required to detect a crossover (α = .05 and power = 0.95) if it exists at the 35° rotation. For the 65° rotation, the required sample size would be 1,197.
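
The scale of such sample-size requirements can be sketched with a normal-approximation power calculation for a two-sided paired test. This is a simplified stand-in for a full power analysis (it slightly underestimates n relative to the exact noncentral-t computation), and the dz values below are illustrative, not our estimated crossover effects:

```python
import math
from scipy.stats import norm

def paired_n_required(dz, alpha=0.05, power=0.95):
    """Normal-approximation sample size for a two-sided paired t test:
    n ~ ((z_{1-alpha/2} + z_{power}) / dz) ** 2."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return math.ceil(((z_alpha + z_power) / dz) ** 2)

# Tiny between-condition effects demand enormous samples
# (dz values below are illustrative).
print(paired_n_required(0.50))
print(paired_n_required(0.03))
```

A moderate effect needs on the order of tens of participants, whereas a minuscule effect pushes the requirement into the tens of thousands, consistent with the figures quoted above.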

In addition, we found that our web-based experiment had smaller effect sizes and higher variability relative to published in-laboratory studies (Tsay, Avraham, et al., 2021; Wei & Körding, 2010), which likely drives the large required sample sizes above. We computed Cohen's dz based on paired t tests between the adaptation rates (per sensory uncertainty condition) for each rotation size, yielding dz = 0.50, dz = 0.08, and dz = 0.10 for the 10°, 35°, and 65° rotation sizes, respectively. Effect sizes from Tsay, Avraham, and colleagues (2021) were dz = 0.77 for the 10° clamp size and dz < 0.30 for all clamp sizes greater than or equal to 30°. In addition, our current study reported average adaptation rates of 0.140 ± 0.097 and 0.092 ± 0.080 for the single cursor and cursor cloud conditions, respectively, whereas Wei and Körding (2010) reported adaptation rates of 0.233 ± 0.023, 0.178 ± 0.015, and 0.133 ± 0.017 for no blur, small blur, and large blur, respectively (converted to magnitudes for clarity). The no blur condition was equivalent to our single cursor condition, and the small and large blur conditions were five-cursor clouds with small and large distributions, respectively. Future research should emphasize the reporting of effect sizes for motor adaptation studies to verify our effect size observation. Notably, higher data variability was also seen in other online versus in-laboratory comparisons (Kim et al., 2022; Tsay, Kim, Saxena, et al., 2022a).
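
For reference, Cohen's dz for a paired design is simply the mean of the pairwise differences divided by their standard deviation. A minimal sketch (the per-participant rates below are hypothetical, not our data):

```python
import numpy as np

def cohens_dz(x, y):
    """Cohen's dz for paired samples: mean of the pairwise differences
    divided by the standard deviation of those differences."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return d.mean() / d.std(ddof=1)

# Hypothetical per-participant adaptation rates for two conditions.
single_cursor = [0.20, 0.15, 0.10, 0.18, 0.12]
cursor_cloud = [0.10, 0.09, 0.08, 0.12, 0.06]
print(round(cohens_dz(single_cursor, cursor_cloud), 3))  # 2.121
```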

Still another reason we were unable to identify a crossover may be the parameters we selected. Perturbations were based on those in Tsay, Avraham, and colleagues (2021). However, we may not have selected the inflection points (±p1, ±p2, and ±p3 in Figure 1) necessary to observe this effect, either because we employed a different adaptation experimental paradigm or because implicit adaptation behavior does not generalize when shifted online. Although this nuance was not a focus of the current study, future work should investigate the parameter differences for modeling motor adaptation behavior collected online versus in-laboratory. We were also limited to three perturbation rotation sizes because of concerns about noncompliance or attention lapses with an extended experiment duration. Therefore, it is possible that inflection points differ between individuals and are difficult to identify when evaluating such a limited range of rotation sizes. Our sigmoidal curve fit for hand deviation (Figure 7B) yielded a hypothetical crossover point at 40°; this rotation size could be considered for future studies.

Despite being unable to verify our hypothesis of a crossover, the REH cannot be excluded and remains a valid model based on our results. Another potential candidate is the proprioceptive realignment model presented by Tsay, Kim, Haith, and colleagues (2022). The proprioceptive realignment model posits that implicit adaptation is driven by proprioceptive error and, therefore, suggests that the upper bound of adaptation occurs when the perceived hand position is aligned with the target. When rotation size is large, the proprioceptive error is saturated and any visual uncertainty has minor impact on behavior. Thus, our results could demonstrate that this proprioceptive saturation point is invariant across sensory uncertainty conditions. Still another favorable model, as indicated by Tsay, Avraham, and colleagues (2021), is the motor correction model, which describes a linear relationship between adaptation and small perturbations that plateaus as perturbations reach the limits of plasticity of the sensorimotor system. Past this saturation point, adaptation begins to decay back to zero.

The contextual inference model also provides a potential explanation for our results. It proposes that errors are considered with respect to their context (similar to the idea of relevance) rather than solely through single-mechanism sensory cues, and it has already explained many other motor learning phenomena including savings, the effect of environmental consistency on learning rate, and the distinction between implicit and explicit learning (Heald, Lengyel, & Wolpert, 2021). Last, Albert and colleagues (2021) proposed a model in which increased perturbation variability decreases the extent of adaptation (in our case, hand deviation). Our study randomized trial-by-trial perturbations from a uniform distribution of three rotation sizes (not including 0°), adding substantially to the perturbation variance experienced by participants, thus potentially further reducing our effect size and preventing a crossover from being identified.

As previously mentioned, higher data variability could be a result of shifting the experiment online. More generally, this variability could be driven by differences in protocol and experimental design (perturbation sizes) or hardware (use of a robot, stylus, or trackpad, etc.). However, the results of our current study also had larger standard errors of the mean for adaptation rate than our previous work (Shyr & Joshi, 2021), which also evaluated motor adaptation online. Our previous work reported adaptation rates of 0.168 ± 0.029 and 0.096 ± 0.026 for a small and large five-cursor cloud, respectively (converted to magnitudes for clarity). We propose that a potential explanation is the recruitment method. Both our previous and present studies followed comparable experimental methodologies; however, participants in Shyr and Joshi (2021) were recruited from university pipelines, whereas the current study utilized ProA. Therefore, population differences could contribute to the differences seen in standard error of the mean. Another web-based motor learning experiment (Bönstrup et al., 2020) similarly noted higher variability of crowdsourced data and attributed it to a “more heterogeneous pool of participants (Stewart, Chandler, & Paolacci, 2017) and a less controlled setting” (Bönstrup et al., 2020).

Moreover, although motor learning research is, perhaps, not as impacted by population-level variation as the social sciences, the central nervous system and resulting motor skills are certainly impacted by demographic factors like age, gender, and physical activity level (Jiménez-Jiménez et al., 2011; Rikli & Busch, 1986). A recent, online, large-scale study by Tsay and colleagues (2023) indicated that factors like overall enjoyment (1–5 star rating), baseline movement time (median split), visual status (intact or impaired vision), and sex (men or women) predicted early and late adaptation, whereas baseline variability (greater or less), level of education (more or less), baseline return time (faster or slower), and target location (diagonal or cardinal) served as predictors for motor aftereffect. Thus, in addition to the convenience of fast recruitment, broadening the population is imperative for ensuring that motor learning research accounts for potential variations among individuals, and such variation could certainly drive the elevated variability we see in our data.

Our results did indicate a significant effect of biological sex for adaptation rate. However, we caution potential interpretations of this result, and we are not claiming any specific motor learning difference between females and males. Using the reported effect size for biological sex (ηP2 = .151), a replication study would require a sample size of 114 to achieve a statistically significant effect with an α = .05 and power = 0.95. If the experimental design utilized a single rotation size (i.e., 10°), a sample size of 76 would be necessary (α = .05, power = 0.95). Results in support of a sex-driven difference would indicate that generalizations cannot be made across sexes for similar VMR tasks and could lead to additional implications for other motor-learning-based applications such as rehabilitation. Such claims would require a tailored experimental design focusing on biological sex as the main factor, a task reflecting the specific motor learning process of interest, and an unbiased and diverse sample population. The authors caution against unfounded interpretations that could cause harmful and inaccurate claims of one sex having superior or inferior motor adaptation capabilities over the other. We report this result to highlight the importance of considering participant demographics in scientific research. However, it should be noted that although crowdsourcing platforms are more diverse than the typical undergraduate population, they are not representative of the general population (Paolacci & Chandler, 2014). Therefore, online crowdsourced data are certainly also susceptible to bias as evident in recent algorithms developed using online data (Angwin, Larson, Mattu, & Kirchner, 2016; Carpenter, 2015).

Yet another explanation for high variability could be compromised data quality resulting from compliance or attention dips because of the lack of oversight during online experimental sessions, lengthy experiment durations, and a potentially limited capability to capture rich experimental information with nonsystematic hardware. That being said, MTurk and ProA have demonstrated higher quality data (evaluated through reproduction of known effects, attention-check accuracy, and dishonesty assessments) when compared with a participant pool recruited through university channels (Peer et al., 2017), and subsequent studies have supported generalization of motor learning phenomena across online and in-laboratory contexts (Kim et al., 2022; Cesanek et al., 2021; Bönstrup et al., 2020; Kahn et al., 2018). However, there has been little discussion of the data quality associated with motor learning studies, particularly for longer-duration (> 60 min) experiments.

In our case study, we examined the data quality collected in our 80-min experiment. We found statistically significant effects of trial and trial cluster on our dependent variable, hand deviation. These effects could be a result of fatigue or loss of motivation; thus, carefully designed compensation schemes could help alleviate them (Cesanek et al., 2021; Kahn et al., 2018). We conducted two additional ANOVAs on hand deviation using either the first half or the second half of the trials. The results for the second half of trials notably did not indicate a significant interaction between sensory uncertainty condition and rotation size, suggesting that extending an experimental protocol beyond 40 min (halfway through our methodology) should be carefully considered.

However, aside from two rogue participants we removed from the analysis, we did not see any direct evidence of compliance drops or fatigue, such as excessive trial repeats, movement violations, or significantly longer RTs that would indicate participants were taking breaks from the task. Intriguingly, recent work has also shown an unexpected effect of relearning on implicit adaptation. Avraham, Morehead, Kim, and Ivry (2021) evaluated motor learning using a methodology that isolates implicit adaptation and found reduced adaptation function and aftereffect magnitudes when relearning followed a no-feedback and washout phase. Importantly, results were compared with a second group that did not have a no-feedback or washout phase at the midpoint of the experiment, for which there was no attenuation of learning. Therefore, we suggest that a factor in this attenuation could be relearning rather than fatigue or habituation.

To control for potential temporal effects, we advise that future work separate studies exceeding 40 min into multiple sessions, which could be favorable given that longitudinal studies conducted online have shown some success (Daly & Nataraajan, 2015; Kar et al., 2015). A shift of paradigm may also be favorable because, aside from written warnings, nothing in this experiment prevented participants from looking down at their hand during movement, which may cause them to attribute displayed errors to the interface, potentially leading to diminished effort and/or altered motor behavior. With few options to address this remotely, future studies could consider an error-clamp method similar to Tsay, Avraham, and colleagues (2021), which isolates implicit adaptation by informing participants of the induced error but asking them to ignore it.

Finding ways to standardize hardware is certainly a challenge in the online setting. However, based on lessons learned, we suggest tracking screen size (height and width) so that its effect on experimental outcomes can be analyzed. Sampling rate differences should also be considered. Sampling rates for our study were similar to those of Kim and colleagues (2022). However, because of short movement times, cursor trajectory information was limited for some trials (i.e., only 1 or 2 points with which to plot the trajectory and add the rotation). We found a moderate positive correlation (Pearson correlation coefficient, r > .60) between movement time and the number of samples for both sensory uncertainty conditions (single cursor: r = .625; cursor cloud: r = .634), and we therefore suggest that future studies consider increasing the mandatory movement time range to allow for the collection of richer trajectory and movement information. Three or more points per trial would be preferred for tracking the cursor path to better assess linearity. In addition, a dedicated effort should be made to optimize the sampling frequency to prevent a mismatch between responsiveness and performance from introducing variance between participants.
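
The movement-time/sample-count relationship above can be sketched as a simple Pearson correlation (the arrays below are hypothetical trial timings at a fixed polling rate, not our recorded data):

```python
import numpy as np
from scipy.stats import pearsonr

# Longer movements yield more mouse events at a given polling rate
# (illustrative data; timings in msec).
movement_time = np.array([35, 42, 50, 58, 64, 72, 80, 88])
n_samples = np.array([3, 3, 4, 4, 5, 5, 6, 7])

r, p = pearsonr(movement_time, n_samples)
print(round(r, 3))
```

A strong positive r here illustrates why lengthening the mandatory movement window should yield richer trajectories.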

In summary, the contributions of this article are threefold. First, we demonstrated that motor adaptation behavior generalizes to online experimental platforms by replicating in-laboratory results. Second, we revealed that the REH holds up under added sensory uncertainty, indicating that the likelihood of error relevance plays a key role in motor adaptation. However, other motor learning models should be taken into consideration, as they may describe the human motor adaptation process more fully. Last, data quality in online studies is promising but may be susceptible to appreciable fatigue, loss of compliance, or relearning effects, and may suffer from elevated participant variability and diminished effect sizes. Special considerations must also be made for nonsystematic hardware, and an emphasis must be placed on the reporting of generalized effect size metrics or the sharing of data to allow for more in-depth meta-analyses of motor adaptation studies. All in all, future implementations should carefully consider the following: the power necessary to detect the effect of interest, a method to determine and drive compliance or mitigate relearning effects, and techniques to optimize hardware-driven performance differences. Our results provide robust effect size information that will inform future online motor learning research. With these considerations, we demonstrate that online platforms have exciting potential under specific experimental conditions.

The authors would like to thank Jason Dekarske for his contributions to the development and deployment of our remote, web-based platform. We thank Kenneth R. Lyons for his mentorship and invaluable insights as well as Barbara S. Linke and Stephen K. Robinson for their helpful comments. Last, we thank Jonathan S. Tsay, Richard B. Ivry, and colleagues for providing their input, inspiration, and open-source software (Tsay, Lee, et al., 2021) on which we built our platform.

Corresponding author: Megan C. Shyr, 250 W El Camino Real Apt. 1113, Sunnyvale, CA 94087, or via e-mail: mcshyr@ucdavis.edu.

Please contact authors for data access.

Megan C. Shyr: Conceptualization; Data curation; Formal analysis; Investigation; Methodology; Project administration; Resources; Software; Validation; Visualization; Writing––Original draft; Writing––Review & editing. Sanjay S. Joshi: Conceptualization; Funding acquisition; Project administration; Resources; Supervision; Validation; Writing––Review & editing.

National Science Foundation (https://dx.doi.org/10.13039/100000001), grant number: 1934792.

A retrospective analysis of the citations in every article published in this journal from 2010 to 2020 has revealed a persistent pattern of gender imbalance: Although the proportions of authorship teams (categorized by estimated gender identification of first author/last author) publishing in the Journal of Cognitive Neuroscience (JoCN) during this period were M(an)/M = .408, W(oman)/M = .335, M/W = .108, and W/W = .149, the comparable proportions for the articles that these authorship teams cited were M/M = .579, W/M = .243, M/W = .102, and W/W = .076 (Fulvio et al., JoCN, 33:1, pp. 3–7). Consequently, JoCN encourages all authors to consider gender balance explicitly when selecting which articles to cite and gives them the opportunity to report their article's gender citation balance. The authors of this paper report its proportions of citations by gender category to be: M/M = .794; W/M = .147; M/W = .059; W/W = 0.

Albert
,
S. T.
,
Jang
,
J.
,
Sheahan
,
H. R.
,
Teunissen
,
L.
,
Vandevoorde
,
K.
,
Herzfeld
,
D. J.
, et al
(
2021
).
An implicit memory of errors limits human sensorimotor adaptation
.
Nature Human Behaviour
,
5
,
920
934
. ,
[PubMed]
Avraham
,
G.
,
Morehead
,
J. R.
,
Kim
,
H. E.
, &
Ivry
,
R. B.
(
2021
).
Reexposure to a sensorimotor perturbation produces opposite effects on explicit and implicit learning processes
.
PLoS Biology
,
19
,
e3001147
. ,
[PubMed]
Berniker
,
M.
, &
Kording
,
K.
(
2008
).
Estimating the sources of motor errors for adaptation and generalization
.
Nature Neuroscience
,
11
,
1454
1461
. ,
[PubMed]
Blustein
,
D.
,
Shehata
,
A.
,
Englehart
,
K.
, &
Sensinger
,
J.
(
2018
).
Conventional analysis of trial-by-trial adaptation is biased: Empirical and theoretical support using a Bayesian estimator
.
PLoS Computational Biology
,
14
,
e1006501
. ,
[PubMed]
Bond
,
K. M.
, &
Taylor
,
J. A.
(
2015
).
Flexible explicit but rigid implicit learning in a visuomotor adaptation task
.
Journal of Neurophysiology
,
113
,
3836
3849
. ,
[PubMed]
Bönstrup
,
M.
,
Iturrate
,
I.
,
Hebart
,
M. N.
,
Censor
,
N.
, &
Cohen
,
L. G.
(
2020
).
Mechanisms of offline motor learning at a microscale of seconds in large-scale crowdsourced data
.
NPJ Science of Learning
,
5
,
7
. ,
[PubMed]
Burge
,
J.
,
Ernst
,
M. O.
, &
Banks
,
M. S.
(
2008
).
The statistical determinants of adaptation rate in human reaching
.
Journal of Vision
,
8
,
20.1
20.19
. ,
[PubMed]
Carpenter
,
J.
(
2015
).
Google's algorithm shows prestigious job ads to men, but not to women
.
The Independent
. https://www.independent.co.uk/tech/google-s-algorithm-shows-prestigious-job-ads-to-men-but-not-to-women-10372166.html
Cesanek
,
E.
,
Zhang
,
Z.
,
Ingram
,
J. N.
,
Wolpert
,
D. M.
, &
Flanagan
,
J. R.
(
2021
).
Motor memories of object dynamics are categorically organized
.
eLife
,
10
,
e71627
. ,
[PubMed]
Crump
,
M. J. C.
,
McDonnell
,
J. V.
, &
Gureckis
,
T. M.
(
2013
).
Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research
.
PLoS One
,
8
,
e57410
. ,
[PubMed]
Daly, T. M., & Nataraajan, R. (2015). Swapping bricks for clicks: Crowdsourcing longitudinal data on Amazon Turk. Journal of Business Research, 68, 2603–2609.
Gonzalez Castro, L. N., Hadjiosif, A. M., Hemphill, M. A., & Smith, M. A. (2014). Environmental consistency determines the rate of motor adaptation. Current Biology, 24, 1050–1061.
He, K., Liang, Y., Abdollahi, F., Bittmann, M. F., Kording, K., & Wei, K. (2016). The statistical determinants of the speed of motor learning. PLoS Computational Biology, 12, e1005023.
Heald, J. B., Lengyel, M., & Wolpert, D. M. (2021). Contextual inference underlies the learning of sensorimotor repertoires. Nature, 600, 489–493.
Izawa, J., Rane, T., Donchin, O., & Shadmehr, R. (2008). Motor adaptation as a process of reoptimization. Journal of Neuroscience, 28, 2883–2891.
Izawa, J., & Shadmehr, R. (2011). Learning from sensory and reward prediction errors during motor adaptation. PLoS Computational Biology, 7, e1002012.
Jiménez-Jiménez, F. J., Calleja, M., Alonso-Navarro, H., Rubio, L., Navacerrada, F., Pilo-de-la-Fuente, B., et al. (2011). Influence of age and gender in motor performance in healthy subjects. Journal of the Neurological Sciences, 302, 72–80.
Johnson, R. E., Kording, K. P., Hargrove, L. J., & Sensinger, J. W. (2014). Does EMG control lead to distinct motor adaptation? Frontiers in Neuroscience, 8, 302.
Kacmarcik, G., & Leithead, T. (2022). UI events. https://w3c.github.io/uievents/#event-type-mousemove
Kahn, A. E., Karuza, E. A., Vettel, J. M., & Bassett, D. S. (2018). Network constraints on learnability of probabilistic motor sequences. Nature Human Behaviour, 2, 936–947.
Kar, D., Fang, F., Delle Fave, F., Sintov, N., & Tambe, M. (2015). "A Game of Thrones": When human behavior models compete in repeated Stackelberg security games. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems (pp. 1381–1390).
Kim, O. A., Forrence, A. D., & McDougle, S. D. (2022). Motor learning without movement. Proceedings of the National Academy of Sciences, U.S.A., 119, e2204379119.
Körding, K. P., & Wolpert, D. M. (2004). Bayesian integration in sensorimotor learning. Nature, 427, 244–247.
Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t tests and ANOVAs. Frontiers in Psychology, 4, 863.
Lange, K., Kühn, S., & Filevich, E. (2015). "Just Another Tool for Online Studies" (JATOS): An easy solution for setup and management of web servers supporting online studies. PLoS One, 10, e0130834.
Lyons, K. R., & Joshi, S. S. (2018). Effects of mapping uncertainty on visuomotor adaptation to trial-by-trial perturbations with proportional myoelectric control. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (pp. 5178–5181). Honolulu, HI: IEEE.
Palan, S., & Schitter, C. (2018). Prolific.ac—A subject pool for online experiments. Journal of Behavioral and Experimental Finance, 17, 22–27.
Paolacci, G., & Chandler, J. (2014). Inside the Turk: Understanding Mechanical Turk as a participant pool. Current Directions in Psychological Science, 23, 184–188.
Peer, E., Brandimarte, L., Samat, S., & Acquisti, A. (2017). Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology, 70, 153–163.
Rikli, R., & Busch, S. (1986). Motor performance of women as a function of age and physical activity level. Journal of Gerontology, 41, 645–649.
Robinson, F. R., Soetedjo, R., & Noto, C. (2006). Distinct short-term and long-term adaptation to reduce saccade size in monkey. Journal of Neurophysiology, 96, 1030–1041.
Shyr, M. C., & Joshi, S. S. (2021). Validation of the Bayesian sensory uncertainty model of motor adaptation with a remote experimental paradigm. In 2021 IEEE 2nd International Conference on Human–Machine Systems (ICHMS) (pp. 1–6).
Stewart, N., Chandler, J., & Paolacci, G. (2017). Crowdsourcing samples in cognitive science. Trends in Cognitive Sciences, 21, 736–748.
Tsay, J. S., Asmerian, H., Germine, L. T., Wilmer, J., Ivry, R. B., & Nakayama, K. (2023). Predictors of sensorimotor adaption: Insights from over 100,000 reaches. bioRxiv.
Tsay, J. S., Avraham, G., Kim, H. E., Parvin, D. E., Wang, Z., & Ivry, R. B. (2021). The effect of visual uncertainty on implicit motor adaptation. Journal of Neurophysiology, 125, 12–22.
Tsay, J. S., Haith, A. M., Ivry, R. B., & Kim, H. E. (2022). Interactions between sensory prediction error and task error during implicit motor learning. PLoS Computational Biology, 18, e1010005.
Tsay, J. S., Kim, H., Haith, A. M., & Ivry, R. B. (2022). Understanding implicit sensorimotor adaptation as a process of proprioceptive re-alignment. eLife, 11, e76639.
Tsay, J. S., Kim, H. E., Saxena, A., Parvin, D. E., Verstynen, T., & Ivry, R. B. (2022a). Dissociable use-dependent processes for volitional goal-directed reaching. Proceedings of the Royal Society B: Biological Sciences, 289, 20220415.
Tsay, J. S., Kim, H. E., Saxena, A., Parvin, D. E., Verstynen, T., & Ivry, R. B. (2022b). Supplementary material from "Dissociable use-dependent processes for volitional goal-directed reaching." The Royal Society. Collection.
Tsay, J. S., Lee, A., Ivry, R. B., & Avraham, G. (2021). Moving outside the lab: The viability of conducting sensorimotor learning studies online. Neurons, Behavior, Data Analysis, and Theory, 5, 1–22.
Tulloch, J. (2021). Length of studies. Prolific Researcher Community. https://community.prolific.co/t/length-of-studies/938
Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., et al. (2020). SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods, 17, 261–272.
Wang, T., Avraham, G., Tsay, J. S., Thummala, T., & Ivry, R. B. (2022). Advanced feedback enhances sensorimotor adaptation. bioRxiv.
Wei, K., & Körding, K. (2009). Relevance of error: What drives motor adaptation? Journal of Neurophysiology, 101, 655–664.
Wei, K., & Körding, K. (2010). Uncertainty of feedback and state estimation determines the speed of motor adaptation. Frontiers in Computational Neuroscience, 4, 11.
Wetzels, R., Matzke, D., Lee, M. D., Rouder, J. N., Iverson, G. J., & Wagenmakers, E.-J. (2011). Statistical evidence in experimental psychology: An empirical comparison using 855 t tests. Perspectives on Psychological Science, 6, 291–298.
Wolpert, D. M., & Ghahramani, Z. (2000). Computational principles of movement neuroscience. Nature Neuroscience, 3, 1212–1217.
Zelaznik, H. Z., Hawkins, B., & Kisselburgh, L. (1983). Rapid visual feedback processing in single-aiming movements. Journal of Motor Behavior, 15, 217–236.
Zolghadr, N., & Ahmed, M. (2022). Pointer Lock 2.0. W3C Editor's Draft. https://w3c.github.io/pointerlock/#dom-element-requestpointerlock
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.