Frank H. Guenther
1-4 of 4
Journal of Cognitive Neuroscience (2015) 27 (4): 819–831.
Published: 01 April 2015
Abstract
Speech is perhaps the most sophisticated example of a species-wide movement capability in the animal kingdom, requiring split-second sequencing of approximately 100 muscles in the respiratory, laryngeal, and oral movement systems. Despite the unique role speech plays in human interaction and the debilitating impact of its disruption, little is known about the neural mechanisms underlying speech motor learning. Here, we studied the behavioral and neural correlates of learning new speech motor sequences. Participants repeatedly produced novel, meaningless syllables comprising illegal consonant clusters (e.g., GVAZF) over 2 days of practice. Following practice, participants produced the sequences with fewer errors and shorter durations, indicative of motor learning. Using fMRI, we compared brain activity during production of the learned illegal sequences and novel illegal sequences. Greater activity was noted during production of novel sequences in brain regions linked to non-speech motor sequence learning, including the basal ganglia and pre-supplementary motor area (pre-SMA). Activity during novel sequence production was also greater in brain regions associated with learning and maintaining speech motor programs, including lateral premotor cortex, frontal operculum, and posterior superior temporal cortex. Measures of learning success correlated positively with activity in the left frontal operculum and with white matter integrity under the left posterior superior temporal sulcus. These findings indicate that speech motor sequence learning relies not only on brain areas involved generally in motor sequence learning but also on those associated with feedback-based speech motor learning. Furthermore, learning success is modulated by the integrity of structural connectivity between these motor and sensory brain regions.
Journal of Cognitive Neuroscience (2010) 22 (7): 1504–1529.
Published: 01 July 2010
Abstract
Speakers plan the phonological content of their utterances before their release as speech motor acts. Using a finite alphabet of learned phonemes and a relatively small number of syllable structures, speakers are able to rapidly plan and produce arbitrary syllable sequences that fall within the rules of their language. The class of computational models of sequence planning and performance termed competitive queuing models has followed K. S. Lashley [The problem of serial order in behavior. In L. A. Jeffress (Ed.), Cerebral mechanisms in behavior (pp. 112–136). New York: Wiley, 1951] in assuming that inherently parallel neural representations underlie serial action, and this idea is increasingly supported by experimental evidence. In this article, we develop a neural model that extends the existing DIVA model of speech production in two complementary ways. The new model includes paired structure and content subsystems [cf. MacNeilage, P. F. The frame/content theory of evolution of speech production. Behavioral and Brain Sciences, 21, 499–511, 1998] that provide parallel representations of a forthcoming speech plan, as well as mechanisms for interfacing these phonological planning representations with learned sensorimotor programs to enable stepping through multisyllabic speech plans. On the basis of previous reports, the model's components are hypothesized to be localized to specific cortical and subcortical structures, including the left inferior frontal sulcus, the medial premotor cortex, the basal ganglia, and the thalamus. The new model, called gradient order DIVA, thus fills a void in current speech research by providing formal mechanistic hypotheses about both phonological and phonetic processes that are grounded in neuroanatomy and physiology. This framework also generates predictions that can be tested in future neuroimaging and clinical case studies.
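The competitive queuing idea this abstract invokes can be illustrated with a minimal sketch: a parallel activation gradient over the planned items, with the most active item repeatedly selected, produced, and suppressed. This is a generic textbook rendering in Python, not the GODIVA implementation; the primacy-gradient decay factor and all function names are illustrative assumptions.

```python
import numpy as np

def competitive_queue(items, decay=0.7):
    """Produce items serially from a parallel primacy gradient.

    Hypothetical sketch of a competitive queuing cycle; the decay
    factor is an assumed parameter, not a value from the model.
    """
    # Planning layer: earlier items receive higher initial activation.
    activations = np.array([decay ** i for i in range(len(items))])
    produced = []
    while np.any(activations > 0):
        winner = int(np.argmax(activations))  # choice layer: winner-take-all
        produced.append(items[winner])
        activations[winner] = 0.0             # suppress the selected item
    return produced

print(competitive_queue(["ba", "di", "gu"]))  # -> ['ba', 'di', 'gu']
```

The same select-and-suppress cycle replays whatever gradient the planning layer holds, which is how a fully parallel representation can drive strictly serial output.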
Journal of Cognitive Neuroscience (1994) 6 (4): 341–358.
Published: 01 July 1994
Abstract
A neural model is described of how the brain may autonomously learn a body-centered representation of a three-dimensional (3-D) target position by combining information about retinal target position, eye position, and head position in real time. Such a body-centered spatial representation enables accurate movement commands to the limbs to be generated despite changes in the spatial relationships between the eyes, head, body, and limbs through time. The model learns a vector representation—otherwise known as a parcellated distributed representation—of target vergence with respect to the two eyes, and of the horizontal and vertical spherical angles of the target with respect to a cyclopean egocenter. Such a vergence-spherical representation has been reported in the caudal midbrain and medulla of the frog, as well as in psychophysical movement studies in humans. A head-centered vergence-spherical representation of foveated target position can be generated by two stages of opponent processing that combine corollary discharges of outflow movement signals to the two eyes. Sums and differences of opponent signals define angular and vergence coordinates, respectively. The head-centered representation interacts with a binocular visual representation of nonfoveated target position to learn a visuomotor representation of both foveated and nonfoveated target position that is capable of commanding yoked eye movements. This head-centered vector representation also interacts with representations of neck movement commands to learn a body-centered estimate of target position that is capable of commanding coordinated arm movements. Learning occurs during head movements made while gaze remains fixed on a foveated target. An initial estimate is stored, and a VOR-mediated gating signal prevents the stored estimate from being reset during a gaze-maintaining head movement. As the head moves, new estimates are compared with the stored estimate to compute difference vectors that act as error signals driving the learning process, as well as controlling the on-line merging of multimodal information.
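For the horizontal dimension, the opponent-processing step described above (sums and differences of the two eyes' outflow signals yielding angular and vergence coordinates) reduces to simple arithmetic. The sketch below is a hedged illustration under that reading; the variable names, the simplified geometry, and the difference-vector error function are assumptions, not the paper's code.

```python
import numpy as np

def head_centered_target(theta_left, theta_right):
    """Combine the two eyes' horizontal rotation angles (radians).

    Illustrative only: the difference of the opponent signals gives
    binocular vergence (larger for nearer targets), and their sum
    gives the cyclopean direction angle.
    """
    vergence = theta_left - theta_right           # difference -> vergence
    azimuth = 0.5 * (theta_left + theta_right)    # sum -> cyclopean angle
    return vergence, azimuth

def difference_vector(stored_estimate, current_estimate):
    """Mismatch between the stored pre-movement estimate and the current
    estimate during a gaze-fixed head movement; acts as the error signal
    that drives learning (a hypothetical rendering of that comparison)."""
    return np.subtract(current_estimate, stored_estimate)

# Example: eyes converged on a near target about 10 degrees to the right.
print(head_centered_target(np.deg2rad(15.0), np.deg2rad(5.0)))
```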
Journal of Cognitive Neuroscience (1993) 5 (4): 408–435.
Published: 01 October 1993
Abstract
This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that is used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When visual feedback is available, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.
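The core spatial-to-motor transformation the abstract describes (a spatial direction vector mapped into a motor direction vector for a redundant arm) can be sketched with a standard Jacobian pseudoinverse standing in for DIRECT's learned mapping. Everything in the sketch (the 3-joint planar arm, link lengths, step size, iteration limits) is an illustrative assumption, not the published model.

```python
import numpy as np

# Three links moving in a 2-D plane: more joints than task dimensions,
# so the arm is redundant (assumed lengths, for illustration only).
LINKS = np.array([1.0, 0.8, 0.6])

def end_effector(q):
    """Forward kinematics: end effector position for joint angles q."""
    angles = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

def jacobian(q):
    """How small joint rotations move the end effector in the plane."""
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for j in range(3):
        J[0, j] = -np.sum(LINKS[j:] * np.sin(angles[j:]))
        J[1, j] = np.sum(LINKS[j:] * np.cos(angles[j:]))
    return J

def reach(q, target, step=0.1, tol=1e-3, max_iters=500):
    """Iteratively map the spatial direction vector (target minus current
    end effector position) into a motor direction vector of joint
    rotations via the Jacobian pseudoinverse."""
    for _ in range(max_iters):
        direction = target - end_effector(q)        # spatial direction vector
        if np.linalg.norm(direction) < tol:
            break
        dq = np.linalg.pinv(jacobian(q)) @ (step * direction)
        q = q + dq                                  # motor direction vector applied
    return q

q_final = reach(np.array([0.3, 0.3, 0.3]), np.array([1.2, 1.0]))
print(end_effector(q_final))  # close to the target, one of many valid postures
```

Because the arm has three joints but the task space is two-dimensional, many joint configurations reach the same target; the pseudoinverse simply selects one of them, which is the motor equivalence property the model addresses.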