Mairéad MacSweeney
Journal Articles
Journal of Cognitive Neuroscience (2013) 25 (7): 1037–1048.
Published: 01 July 2013
Abstract
We used electrophysiology to determine the time course and distribution of neural activation during an English word rhyme task in hearing and congenitally deaf adults. Behavioral performance by hearing participants was at ceiling, and their ERP data replicated two robust effects repeatedly observed in the literature. First, a sustained negativity, termed the contingent negative variation, was elicited following the first stimulus word. This negativity was asymmetric, being more negative over left than right sites. The second effect we replicated in hearing participants was an enhanced negativity (N450) to nonrhyming second stimulus words. This was largest over medial, parietal regions of the right hemisphere. Accuracy on the rhyme task by the deaf group as a whole was above chance level, yet significantly poorer than that of hearing participants. We examined only ERP data from deaf participants who performed the task above chance level (n = 9). We observed indications of subtle differences in ERP responses between deaf and hearing groups. However, overall the patterns in the deaf group were very similar to those in the hearing group. Deaf participants, like hearing participants, showed greater negativity to nonrhyming than rhyming words. Furthermore, the onset latency of this effect was the same as that observed in hearing participants. Overall, the neural processes supporting explicit phonological judgments are very similar in deaf and hearing people, despite differences in the modality of spoken language experience. This supports the suggestion that phonological processing is to a large degree amodal or supramodal.
Journal Articles
Journal of Cognitive Neuroscience (2008) 20 (7): 1220–1234.
Published: 01 July 2008
Abstract
Spoken languages use one set of articulators, the vocal tract, whereas signed languages use multiple articulators, including both manual and facial actions. How sensitive are the cortical circuits for language processing to the particular articulators that are observed? This question can only be addressed with participants who use both speech and a signed language. In this study, we used functional magnetic resonance imaging to compare speechreading and sign processing in deaf native signers of British Sign Language (BSL) who were also proficient speechreaders. The following questions were addressed: To what extent do these different language types rely on a common brain network? To what extent do the patterns of activation differ? How are these networks affected by the articulators that languages use? Common peri-sylvian regions were activated both for speechreading English words and for BSL signs. Distinctive activation was also observed reflecting the language form. Speechreading elicited greater activation in the left mid-superior temporal cortex than BSL, whereas BSL processing generated greater activation at the temporo-parieto-occipital junction in both hemispheres. We probed this distinction further within BSL, where manual signs can be accompanied by different types of mouth action. BSL signs with speech-like mouth actions showed greater superior temporal activation, whereas signs made with non-speech-like mouth actions showed more activation in posterior and inferior temporal regions. Distinct regions within the temporal cortex are not only differentially sensitive to perception of the distinctive articulators for speech and for sign but also show sensitivity to the different articulators within the (signed) language.
Journal Articles
Journal of Cognitive Neuroscience (2002) 14 (7): 1064–1075.
Published: 01 October 2002
Abstract
In all signed languages used by deaf people, signs are executed in “sign space” in front of the body. Some signed sentences use this space to map detailed “real-world” spatial relationships directly. Such sentences can be considered to exploit sign space “topographically.” Using functional magnetic resonance imaging, we explored the extent to which increasing the topographic processing demands of signed sentences was reflected in the differential recruitment of brain regions in deaf and hearing native signers of British Sign Language (BSL). When BSL signers performed a sentence anomaly judgement task, the occipito-temporal junction was activated bilaterally to a greater extent for topographic than nontopographic processing. The differential role of movement in the processing of the two sentence types may account for this finding. In addition, enhanced activation was observed in the left inferior and superior parietal lobules during processing of topographic BSL sentences. We argue that the left parietal lobe is specifically involved in processing the precise configuration and location of hands in space to represent objects, agents, and actions. Importantly, no differences in these regions were observed when hearing people heard and saw English translations of these sentences. Despite the high degree of similarity in the neural systems underlying signed and spoken languages, exploring the linguistic features that are unique to each of these broadens our understanding of the systems involved in language comprehension.