Tracy M. Centanni
Journal Articles
Publisher: Journals Gateway
Neurobiology of Language (2023) 4 (4): 639–655.
Published: 14 December 2023
Figures: 6
Neural Specialization for English and Arabic Print in Early Readers
Abstract
Learning to read requires the specialization of a region in the left fusiform gyrus known as the visual word form area (VWFA). This region, which initially responds to faces and objects, develops specificity for print over a long trajectory of instruction and practice. VWFA neurons may be primed for print because of their pre-literate tuning properties, becoming specialized through top-down feedback mechanisms during learning. However, much of what is known about the VWFA comes from studies of Western orthographies, whose alphabets share common visual characteristics. Far less is known about the development of the VWFA for Arabic, a complex orthography in which reading fluency is significantly more difficult to achieve. In the current study, electroencephalography responses were collected from first grade children in the United Arab Emirates learning to read in both English and Arabic. Children viewed words and false font strings in English and Arabic while performing a vigilance task. The P1 and N1 responses to all stimulus categories were quantified in two occipital and two parietal electrodes, as was the alpha band signal across all four electrodes of interest. Analysis revealed a significantly stronger N1 response to English compared to Arabic and decreased alpha power to Arabic compared to English. These findings suggest a fundamental difference in neural plasticity for these two distinct orthographies, even when instruction is concurrent. Future work is needed to determine whether VWFA specialization for Arabic takes longer than for more well-studied orthographies and whether differences in reading instruction approaches help accelerate this process.
Includes: Supplementary data
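The abstract above quantifies alpha band (roughly 8-12 Hz) power at electrodes of interest. A minimal sketch of that kind of band-power estimation, on synthetic data rather than the study's actual pipeline (the sampling rate, epoch length, and periodogram approach here are all assumptions for illustration):

```python
import numpy as np

# Synthetic single-electrode epoch: a 10 Hz "alpha" oscillation in noise.
# fs and the 2 s epoch length are assumed values, not the study's settings.
fs = 250                        # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)   # 2 s epoch
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

def band_power(x, fs, lo, hi):
    """Mean spectral power between lo and hi Hz via a simple rFFT periodogram."""
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

alpha = band_power(signal, fs, 8, 12)   # contains the injected oscillation
beta = band_power(signal, fs, 20, 24)   # noise only, for comparison
print(alpha > beta)
```

In practice such analyses are typically run per electrode and per condition (e.g., Arabic vs. English stimuli) and the band powers compared statistically; a dedicated toolbox such as MNE-Python would normally handle filtering, epoching, and spectral estimation.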
Journal Articles
Publisher: Journals Gateway
Neurobiology of Language (2021) 2 (2): 254–279.
Published: 07 May 2021
Figures: 5
Neural Decoding Reveals Concurrent Phonemic and Subphonemic Representations of Speech Across Tasks
Abstract
Robust and efficient speech perception relies on the interpretation of acoustically variable phoneme realizations, yet prior neuroimaging studies are inconclusive regarding the degree to which subphonemic detail is maintained over time as categorical representations arise. It is also unknown whether this depends on the demands of the listening task. We addressed these questions by using neural decoding to quantify the (dis)similarity of brain response patterns evoked during two different tasks. We recorded magnetoencephalography (MEG) as adult participants heard isolated, randomized tokens from a /ba/-/da/ speech continuum. In the passive task, their attention was diverted. In the active task, they categorized each token as ba or da. We found that linear classifiers successfully decoded ba vs. da perception from the MEG data. Data from the left hemisphere were sufficient to decode the percept early in the trial, while the right hemisphere was necessary but not sufficient for decoding at later time points. We also decoded stimulus representations and found that they were maintained longer in the active task than in the passive task; however, these representations did not pattern more like discrete phonemes when an active categorical response was required. Instead, in both tasks, early phonemic patterns gave way to a representation of stimulus ambiguity that coincided in time with reliable percept decoding. Our results suggest that the categorization process does not require the loss of subphonemic detail, and that the neural representation of isolated speech sounds includes concurrent phonemic and subphonemic information.
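The abstract above describes decoding percepts with linear classifiers applied to multichannel MEG patterns. A minimal sketch of that general technique, using simulated sensor patterns and a plain least-squares linear classifier (the trial counts, channel count, noise level, and classifier choice here are all illustrative assumptions, not the study's pipeline):

```python
import numpy as np

# Simulate two-class multichannel response patterns (toy stand-in for MEG).
rng = np.random.default_rng(1)
n_trials, n_channels = 200, 50
labels = rng.integers(0, 2, n_trials)           # 0 = /ba/, 1 = /da/ (toy labels)
class_patterns = rng.standard_normal((2, n_channels))
X = class_patterns[labels] + 0.8 * rng.standard_normal((n_trials, n_channels))

# Split into train and test halves.
X_tr, X_te = X[:100], X[100:]
y_tr, y_te = labels[:100], labels[100:]

# Fit linear weights by least squares against -1/+1 targets,
# then classify held-out trials by the sign of the projection.
w, *_ = np.linalg.lstsq(X_tr, 2 * y_tr - 1, rcond=None)
pred = (X_te @ w > 0).astype(int)
accuracy = (pred == y_te).mean()
print(accuracy)  # well above the 0.5 chance level on this toy data
```

Real MEG decoding would typically fit a classifier at each time point of the epoch (to trace when the percept becomes decodable, as the abstract describes) and use cross-validation rather than a single split.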