Similar Articles
20 similar articles found (search time: 31 ms)
1.
In the present experiment, 25 adult subjects discriminated speech tokens ([ba]/[da]) or made pitch judgments on tone stimuli (rising/falling) under both binaural and dichotic listening conditions. We observed that when listeners performed tasks under the dichotic conditions, during which greater demands are made on auditory selective attention, activation within the posterior (parietal) attention system and at primary processing sites in the superior temporal and inferior frontal regions was increased. The cingulate gyrus within the anterior attention system was not influenced by this manipulation. Hemispheric differences between speech and nonspeech tasks were also observed, both at Broca's area within the inferior frontal gyrus and in the middle temporal gyrus.

2.
Frühholz S, Grandjean D. NeuroImage 2012, 62(3):1658-1666.
Vocal expressions commonly elicit activity in superior temporal and inferior frontal cortices, indicating a distributed network to decode vocally expressed emotions. We examined the involvement of this fronto-temporal network for the decoding of angry voices during attention towards (explicit attention) or away from emotional cues in voices (implicit attention) based on a reanalysis of previous data (Frühholz, S., Ceravolo, L., Grandjean, D., 2012. Cerebral Cortex 22, 1107-1117). The general network revealed high interconnectivity of bilateral inferior frontal gyrus (IFG) to different bilateral voice-sensitive regions in mid and posterior superior temporal gyri. Right superior temporal gyrus (STG) regions showed connectivity to the left primary auditory cortex and secondary auditory cortex (AC) as well as to high-level auditory regions. This general network revealed differences in connectivity depending on the attentional focus. Explicit attention to angry voices revealed a specific right-left STG network connecting higher-level AC. During attention to a nonemotional vocal feature we also found a left-right STG network implicitly elicited by angry voices that also included low-level left AC. Furthermore, only during this implicit processing was there widespread interconnectivity between bilateral IFG and bilateral STG. This indicates that while implicit attention to angry voices recruits extended bilateral STG and IFG networks for the sensory and evaluative decoding of voices, explicit attention to angry voices solely involves a network of bilateral STG regions, probably for the integrative recognition of emotional cues from voices.

3.
Previous electrophysiological and neuroimaging studies suggest that the mismatch negativity (MMN) is generated by a temporofrontal network subserving preattentive auditory change detection. In two experiments we employed event-related brain potentials (ERP) and event-related functional magnetic resonance imaging (fMRI) to examine neural and hemodynamic activity related to deviance processing, using three types of deviant tones (small, medium, and large) in both a pitch and a space condition. In the pitch condition, hemodynamic activity in the right superior temporal gyrus (STG) increased as a function of deviance. Comparisons between small and medium and between small and large deviants revealed right prefrontal activation in the inferior frontal gyrus (IFG; BA 44/45) and middle frontal gyrus (MFG; BA 46), whereas large relative to medium deviants led to left and right IFG (BA 44/45) activation. In the ERP experiment, the amplitude of the early MMN (90-120 ms) increased as a function of deviance, thereby paralleling the right STG activation in the fMRI experiment. A U-shaped relationship between MMN amplitude and the degree of deviance was observed in a late time window (140-170 ms), resembling the right IFG activation pattern. In a subsequent source analysis constrained by fMRI activation foci, early and late MMN activity could be modeled by dipoles placed in the STG and IFG, respectively. In the spatial condition no reliable hemodynamic activation could be observed. The MMN amplitude was substantially smaller than in the pitch condition for all three spatial deviants in the ERP experiment. In contrast to the pitch condition, it increased as a function of deviance in both the early and the late time window. We argue that the right IFG mediates auditory deviance detection in case of low discriminability between a sensory memory trace and auditory input. This prefrontal mechanism might be part of top-down modulation of the deviance detection system in the STG.

4.
Neuroimaging studies of auditory and visual phonological processing have revealed activation of the left inferior and middle frontal gyri. However, because of task differences in these studies (e.g., consonant discrimination versus rhyming), the extent to which this frontal activity is due to modality-specific linguistic processes or to more general task demands involved in the comparison and storage of stimuli remains unclear. An fMRI experiment investigated the functional neuroanatomical basis of phonological processing in discrimination and rhyming tasks across auditory and visual modalities. Participants made either "same/different" judgments on the final consonant or rhyme judgments on auditorily or visually presented pairs of words and pseudowords. Control tasks included "same/different" judgments on pairs of single tones or false fonts and on the final member in pairs of sequences of tones or false fonts. Although some regions produced expected modality-specific activation (i.e., left superior temporal gyrus in auditory tasks, and right lingual gyrus in visual tasks), several regions were active across modalities and tasks, including posterior inferior frontal gyrus (BA 44). Greater articulatory recoding demands for processing of pseudowords resulted in increased activation for pseudowords relative to other conditions in this frontal region. Task-specific frontal activation was observed for auditory pseudoword final consonant discrimination, likely due to increased working memory demands of selection (ventrolateral prefrontal cortex) and monitoring (mid-dorsolateral prefrontal cortex). Thus, the current study provides a systematic comparison of phonological tasks across modalities, with patterns of activation corresponding to the cognitive demands of performing phonological judgments on spoken and written stimuli.

5.
Hashimoto T, Usui N, Taira M, Nose I, Haji T, Kojima S. NeuroImage 2006, 31(4):1762-1770.
This event-related fMRI study was conducted to examine the blood-oxygen-level-dependent responses to the processing of auditory onomatopoeic sounds. We used a sound categorization task in which the participants heard four types of stimuli: onomatopoeic sounds, nouns (verbal), animal (nonverbal) sounds, and pure tone/noise (control). While participants discriminated between the categories of target sounds (birds/nonbirds), the nouns elicited activation in the left anterior superior temporal gyrus (STG), whereas the animal sounds elicited activation in the bilateral superior temporal sulcus (STS) and the left inferior frontal gyrus (IFG). In contrast, the onomatopoeias activated extensive brain regions, including the left anterior STG, the region from the bilateral STS to the middle temporal gyrus, and the bilateral IFG. The onomatopoeic sounds showed greater activation in the right middle STS than did the nouns and environmental sounds. These results indicate that onomatopoeic sounds are processed by extensive brain regions involved in the processing of both verbal and nonverbal sounds. Thus, we can posit that onomatopoeic sounds serve as a bridge between nouns and animal sounds. This is the first evidence to demonstrate the way in which onomatopoeic sounds are processed in the human brain.

6.
The analysis of auditory deviant events outside the focus of attention is a fundamental capacity of human information processing and has been studied in experiments on Mismatch Negativity (MMN) and the P3a component in evoked potential research. However, generators contributing to these components are still under discussion. Here we assessed cortical blood flow to auditory stimulation in three conditions. Six healthy subjects were presented with standard tones, frequency deviant tones (MMN condition), and complex novel sounds (Novelty condition), while attention was directed to a nondemanding visual task. Analysis of the MMN condition contrasted with the standard condition revealed blood flow changes in the left and right superior temporal gyrus, right superior temporal sulcus and left inferior frontal gyrus. Complex novel sounds contrasted with the standard condition activated the left superior temporal gyrus and the left inferior and middle frontal gyrus. A small subcortical activation emerged in the left parahippocampal gyrus and an extended activation was found covering the right superior temporal gyrus. Novel sounds activated the right inferior frontal gyrus when controlling for deviance probability. In contrast to previous studies, our results indicate a left hemisphere contribution to a frontotemporal network of auditory deviance processing. Our results provide further evidence for a contribution of the frontal cortex to the processing of auditory deviance outside the focus of directed attention.

7.
The neural substrates underlying speech perception are still not well understood. Previously, we found dissociation of speech and nonspeech processing at the earliest cortical level (AI), using speech and nonspeech complexity dimensions. Acoustic differences between speech and nonspeech stimuli in imaging studies, however, confound the search for linguistic-phonetic regions. Presently, we used sinewave speech (SWsp) and nonspeech (SWnon), which replace speech formants with sinewave tones, in order to match acoustic spectral and temporal complexity while contrasting phonetics. Chord progressions (CP) were used to remove the effects of auditory coherence and object processing. Twelve normal right-handed volunteers were scanned with fMRI while listening to SWsp, SWnon, CP, and a baseline condition arranged in blocks. Only two brain regions, in bilateral superior temporal sulcus, extending more posteriorly on the left, were found to prefer the SWsp condition after accounting for acoustic modulation and coherence effects. Two regions responded preferentially to the more frequency-modulated stimuli, including one that overlapped the right temporal phonetic area and another in the left angular gyrus far from the phonetic area. These findings are proposed to form the basis for the two subtypes of auditory word deafness. Several brain regions, including auditory and non-auditory areas, preferred the coherent auditory stimuli and are likely involved in auditory object recognition. The design of the current study allowed for separation of acoustic spectrotemporal, object recognition, and phonetic effects, resulting in distinct and overlapping components.

8.
The current study examined developmental changes in activation and effective connectivity among brain regions during a phonological processing task, using fMRI. Participants, ages 9-15, were scanned while performing rhyming judgments on pairs of visually presented words. The orthographic and phonological similarity between words in the pair was independently manipulated, so that rhyming judgment could not be based on orthographic similarity. Our results show a developmental increase in activation in the dorsal part of left inferior frontal gyrus (IFG), accompanied by a decrease in the dorsal part of left superior temporal gyrus (STG). The coupling of dorsal IFG with other selected brain regions involved in the phonological decision increased with age, while the coupling of STG decreased with age. These results suggest that during development there is a shift from reliance on sensory auditory representations to reliance on phonological segmentation and covert articulation for performing rhyming judgment on visually presented words. In addition, we found a developmental increase in activation in left posterior parietal cortex that was not accompanied by a change in its connectivity with the other regions. These results suggest that maturational changes within a cortical region are not necessarily accompanied by an increase in its interactions with other regions and its contribution to the task. Our results are consistent with the idea that there is reduced reliance on primary sensory processes as task-relevant processes mature and become more efficient during development.

9.
The role of attention in speech comprehension is not well understood. We used fMRI to study the neural correlates of auditory word, pseudoword, and nonspeech (spectrally rotated speech) perception during a bimodal (auditory, visual) selective attention task. In three conditions, Attend Auditory (ignore visual), Ignore Auditory (attend visual), and Visual (no auditory stimulation), 28 subjects performed a one-back matching task in the assigned attended modality. The visual task, attending to rapidly presented Japanese characters, was designed to be highly demanding in order to prevent attention to the simultaneously presented auditory stimuli. Regardless of stimulus type, attention to the auditory channel enhanced activation by the auditory stimuli (Attend Auditory>Ignore Auditory) in bilateral posterior superior temporal regions and left inferior frontal cortex. Across attentional conditions, there were main effects of speech processing (word+pseudoword>rotated speech) in left orbitofrontal cortex and several posterior right hemisphere regions, though these areas also showed strong interactions with attention (larger speech effects in the Attend Auditory than in the Ignore Auditory condition) and no significant speech effects in the Ignore Auditory condition. Several other regions, including the postcentral gyri, left supramarginal gyrus, and temporal lobes bilaterally, showed similar interactions due to the presence of speech effects only in the Attend Auditory condition. Main effects of lexicality (word>pseudoword) were isolated to a small region of the left lateral prefrontal cortex. Examination of this region showed significant word>pseudoword activation only in the Attend Auditory condition. Several other brain regions, including left ventromedial frontal lobe, left dorsal prefrontal cortex, and left middle temporal gyrus, showed Attention x Lexicality interactions due to the presence of lexical activation only in the Attend Auditory condition. 
These results support a model in which neutral speech presented in an unattended sensory channel undergoes relatively little processing beyond the early perceptual level. Specifically, processing of phonetic and lexical-semantic information appears to be very limited in such circumstances, consistent with prior behavioral studies.

10.
It is commonly assumed that, in the cochlea and the brainstem, the auditory system processes speech sounds without differentiating them from any other sounds. At some stage, however, it must treat speech sounds and nonspeech sounds differently, since we perceive them as different. The purpose of this study was to delimit the first location in the auditory pathway that makes this distinction using functional MRI, by identifying regions that are differentially sensitive to the internal structure of speech sounds as opposed to closely matched control sounds. We analyzed data from nine right-handed volunteers who were scanned while listening to natural and synthetic vowels, or to nonspeech stimuli matched to the vowel sounds in terms of their long-term energy and both their spectral and temporal profiles. The vowels produced more activation than nonspeech sounds in a bilateral region of the superior temporal sulcus, lateral and inferior to regions of auditory cortex that were activated by both vowels and nonspeech stimuli. The results suggest that the perception of vowel sounds is compatible with a hierarchical model of primate auditory processing in which early cortical stages of processing respond indiscriminately to speech and nonspeech sounds, and only higher regions, beyond anatomically defined auditory cortex, show selectivity for speech sounds.

11.
Evoked magnetic fields were recorded from 18 adult volunteers using magnetoencephalography (MEG) during perception of speech stimuli (the endpoints of a voice onset time (VOT) series ranging from /ga/ to /ka/), analogous nonspeech stimuli (the endpoints of a two-tone series varying in relative tone onset time (TOT)), and a set of harmonically complex tones varying in pitch. During the early time window (approximately 60 to approximately 130 ms post-stimulus onset), activation of the primary auditory cortex was bilaterally equal in strength for all three tasks. During the middle (approximately 130 to 800 ms) and late (800 to 1400 ms) time windows of the VOT task, activation of the posterior portion of the superior temporal gyrus (STGp) was greater in the left hemisphere than in the right hemisphere, in both group and individual data. These asymmetries were not evident in response to the nonspeech stimuli. Hemispheric asymmetries in a measure of neurophysiological activity in STGp, which includes the supratemporal plane and cortex inside the superior temporal sulcus, may reflect a specialization of association auditory cortex in the left hemisphere for processing speech sounds. Differences in late activation patterns potentially reflect the operation of a postperceptual process (e.g., rehearsal in working memory) that is restricted to speech stimuli.

12.
We used intraoperative optical imaging of intrinsic signals (iOIS) and electrocortical stimulation mapping (ESM) to compare functionally active brain regions in 10 awake patients undergoing neurosurgical resection. Patients performed two to four tasks, including visual and auditory naming, word discrimination, and/or orofacial movements. All iOIS maps included areas identified by ESM mapping. However, iOIS also revealed topographical specificity dependent on language task. In Broca's area, naming paradigms activated both anterior and posterior inferior frontal gyrus (IFG), while the word discrimination paradigm activated only posterior IFG. In Wernicke's area, object naming produced activations localizing over the inferior and anterior/posterior regions, while the word discrimination task activated superior and anterior cortices. These results may suggest more posterior phonological activation and more anterior semantic activations in Broca's area, and more anterior/superior phonological activation and more posterior/inferior semantic activations in Wernicke's area. Although similar response onset was observed in Broca's and Wernicke's areas, temporal differences were revealed during block paradigm (20-s) activations. In Broca's area, block paradigms yielded a boxcar temporal activation profile (in all tasks) that resembled response profiles observed in motor cortex (with orofacial movements). In contrast, activations in Wernicke's area responded with a more dynamic profile (including early and late peaks) which varied with paradigm performance. Wernicke's area profiles were very similar to response profiles observed in sensory and visual cortex. The differing temporal patterns may therefore reflect unique processing performed by receptive (Wernicke's) and productive (Broca's) language centers. 
This study is consistent with task-specific semantic and phonological regions within Broca's and Wernicke's areas and is also the first report of response profile differences dependent on cortical region and language task.

13.
Rimol LM, Specht K, Hugdahl K. NeuroImage 2006, 30(2):554-562.
Previous neuroimaging studies have consistently reported bilateral activation to speech stimuli in the superior temporal gyrus (STG) and have identified an anteroventral stream of speech processing along the superior temporal sulcus (STS). However, little attention has been devoted to the possible confound of individual differences in hemispheric dominance for speech. The present study was designed to test for speech-selective activation while controlling for inter-individual variance in auditory laterality, by using only subjects with at least 10% right ear advantage (REA) on the dichotic listening test. Eighteen right-handed, healthy male volunteers (median age 26) participated in the study. The stimuli were words, syllables, and sine wave tones (220-2600 Hz), presented in a block design. Comparing words > tones and syllables > tones yielded activation in the left posterior MTG and the lateral STG (upper bank of STS). In the right temporal lobe, the activation was located in the MTG/STS (lower bank). Comparing left and right temporal lobe cluster sizes from the words > tones and syllables > tones contrasts on single-subject level demonstrated a statistically significant left lateralization for speech sound processing in the STS/MTG area. The asymmetry analyses suggest that dichotic listening may be a suitable method for selecting a homogenous group of subjects with respect to left hemisphere language dominance.

14.
Healthy subjects show increased activation in left temporal lobe regions in response to speech sounds compared to complex nonspeech sounds. Abnormal lateralization of speech-processing regions in the temporal lobes has been posited to be a cardinal feature of schizophrenia. Event-related fMRI was used to test the hypothesis that schizophrenic patients would show an abnormal pattern of hemispheric lateralization when detecting speech compared with complex nonspeech sounds in an auditory oddball target-detection task. We predicted that differential activation for speech in the vicinity of the superior temporal sulcus would be greater in schizophrenic patients than in healthy subjects in the right hemisphere, but less in patients than in healthy subjects in the left hemisphere. Fourteen patients with schizophrenia (selected from an outpatient population, 2 females, 12 males, mean age 35.1 years) and 29 healthy subjects (8 females, 21 males, mean age 29.3 years) were scanned while they performed an auditory oddball task in which the oddball stimuli were either speech sounds or complex nonspeech sounds. Compared to controls, individuals with schizophrenia showed greater differential activation between speech and nonspeech in right temporal cortex, left superior frontal cortex, and the left temporal-parietal junction. The magnitude of the difference in the left temporal parietal junction was significantly correlated with severity of disorganized thinking. This study supports the hypothesis that aberrant functional lateralization of speech processing is an underlying feature of schizophrenia and suggests the magnitude of the disturbance in speech-processing circuits may be associated with severity of disorganized thinking.

15.
The discrimination of voice-onset time, an acoustic-phonetic cue to voicing in stop consonants, was investigated to explore the neural systems underlying the perception of a rapid temporal speech parameter. Pairs of synthetic stimuli taken from a [da] to [ta] continuum varying in voice-onset time (VOT) were presented for discrimination judgments. Participants exhibited categorical perception, discriminating 15-ms and 30-ms between-category comparisons and failing to discriminate 15-ms within-category comparisons. Contrastive analysis with a tone discrimination task demonstrated left superior temporal gyrus activation in all three VOT conditions with recruitment of additional regions, particularly the right inferior frontal gyrus and middle frontal gyrus for the 15-ms between-category stimuli. Hemispheric differences using anatomically defined regions of interest showed two distinct patterns, with anterior regions showing more activation in the right hemisphere relative to the left hemisphere and temporal regions demonstrating greater activation in the left hemisphere relative to the right hemisphere. Activation in the temporal regions appears to reflect initial acoustic-perceptual analysis of VOT. Greater activation in the right hemisphere anterior regions may reflect increased processing demands, suggesting involvement of the right hemisphere when the acoustic distance between the stimuli is reduced and the discrimination judgment becomes more difficult.

16.
Many techniques to study early functional brain development lack the whole-brain spatial resolution that is available with fMRI. We utilized a relatively novel method in which fMRI data were collected from children during natural sleep. Stimulus-evoked responses to auditory and visual stimuli as well as stimulus-independent functional networks were examined in typically developing 2-4-year-old children. Reliable fMRI data were collected from 13 children during presentation of auditory stimuli (tones, vocal sounds, and nonvocal sounds) in a block design. Twelve children were presented with visual flashing lights at 2.5 Hz. When analyses combined all three types of auditory stimulus conditions as compared to rest, activation included bilateral superior temporal gyri/sulci (STG/S) and right cerebellum. Direct comparisons between conditions revealed significantly greater responses to nonvocal sounds and tones than to vocal sounds in a number of brain regions, including superior temporal gyrus/sulcus, medial frontal cortex and right lateral cerebellum. The response to visual stimuli was localized to occipital cortex. Furthermore, stimulus-independent functional connectivity MRI analyses (fcMRI) revealed functional connectivity between STG and other temporal regions (including contralateral STG) and medial and lateral prefrontal regions. Functional connectivity with an occipital seed was localized to occipital and parietal cortex. In sum, 2-4-year-olds showed a differential fMRI response both between stimulus modalities and between stimuli in the auditory modality. Furthermore, superior temporal regions showed functional connectivity with numerous higher-order regions during sleep. We conclude that the use of sleep fMRI may be a valuable tool for examining functional brain organization in young children.

17.
Functional near-infrared spectroscopy (fNIRS) was used to investigate resting-state connectivity of language areas, including bilateral inferior frontal gyrus (IFG) and superior temporal gyrus (STG). Thirty-two subjects participated in the experiment, including twenty adults and twelve children. Spontaneous hemodynamic fluctuations were recorded, and then intra- and inter-hemispheric temporal correlations of these signals were computed. The correlations of all hemoglobin components were significantly higher for adults than for children. Moreover, the differences were more pronounced for the STG than for the IFG. In the adult group, differences in the correlations between males and females were not significant. Our results suggest that, by measuring resting-state intra- and inter-hemispheric correlations, fNIRS is able to provide qualitative and quantitative evaluation of the functioning of the cortical network.

18.
Ozdemir E, Norton A, Schlaug G. NeuroImage 2006, 33(2):628-635.
Using a modified sparse temporal sampling fMRI technique, we examined both shared and distinct neural correlates of singing and speaking. In the experimental conditions, 10 right-handed subjects were asked to repeat intoned ("sung") and non-intoned ("spoken") bisyllabic words/phrases that were contrasted with conditions controlling for pitch ("humming") and the basic motor processes associated with vocalization ("vowel production"). Areas of activation common to all tasks included the inferior pre- and post-central gyrus, superior temporal gyrus (STG), and superior temporal sulcus (STS) bilaterally, indicating a large shared network for motor preparation and execution as well as sensory feedback/control for vocal production. The speaking more than vowel-production contrast revealed activation in the inferior frontal gyrus most likely related to motor planning and preparation, in the primary sensorimotor cortex related to motor execution, and the middle and posterior STG/STS related to sensory feedback. The singing more than speaking contrast revealed additional activation in the mid-portions of the STG (more strongly on the right than left) and the most inferior and middle portions of the primary sensorimotor cortex. Our results suggest a bihemispheric network for vocal production regardless of whether the words/phrases were intoned or spoken. Furthermore, singing more than humming ("intoned speaking") showed additional right-lateralized activation of the superior temporal gyrus, inferior central operculum, and inferior frontal gyrus which may offer an explanation for the clinical observation that patients with non-fluent aphasia due to left hemisphere lesions are able to sing the text of a song while they are unable to speak the same words.

19.
Natural consonant-vowel syllables are reliably classified by most listeners as voiced or voiceless. However, our previous research [Liederman, J., Frye, R., Fisher, J.M., Greenwood, K., Alexander, R., 2005. A temporally dynamic context effect that disrupts voice onset time discrimination of rapidly successive stimuli. Psychon Bull Rev. 12, 380-386] suggests that among synthetic stimuli varying systematically in voice onset time (VOT), syllables that are classified reliably as voiceless are nonetheless perceived differently within and between listeners. This perceptual ambiguity was measured by variation in the accuracy of matching two identical stimuli presented in rapid succession. In the current experiment, we used magnetoencephalography (MEG) to examine the differential contribution of objective (i.e., VOT) and subjective (i.e., perceptual ambiguity) acoustic features on speech processing. Distributed source models estimated cortical activation within two regions of interest in the superior temporal gyrus (STG) and one in the inferior frontal gyrus. These regions were differentially modulated by VOT and perceptual ambiguity. Ambiguity strongly influenced lateralization of activation; however, the influence on lateralization was different in the anterior and middle/posterior portions of the STG. The influence of ambiguity on the relative amplitude of activity in the right and left anterior STG activity depended on VOT, whereas that of middle/posterior portions of the STG did not. These data support the idea that early cortical responses are bilaterally distributed whereas late processes are lateralized to the dominant hemisphere and support a "how/what" dual-stream auditory model. This study helps to clarify the role of the anterior STG, especially in the right hemisphere, in syllable perception. 
Moreover, our results demonstrate that both objective phonological and subjective perceptual characteristics of syllables independently modulate spatiotemporal patterns of cortical activation.

20.
Without special education, early deprivation of auditory speech input hinders the development of phonological representations and may alter the neural mechanisms of reading. Using fMRI during lexical and rhyming decision tasks, we compared neural activity in functional regions of interest (ROIs) engaged in reading between hearing and pre-lingually deaf subjects. The results show significantly higher activation in deaf readers in the ROIs relevant to the grapho-phonological route, but also in the posterior medial frontal cortex (pMFC) and the right inferior frontal gyrus (IFG). These adjustments may be interpreted within the dual-route model of reading as an alternative strategy that gives priority to rule-based letter-to-sound conversion. Activation in the right IFG would account for compensation mechanisms based on phonological recoding and inner speech, while activation in the pMFC may relate to the cognitive effort called for by the alternative strategy. Our data suggest that the neural mechanisms of reading are shaped by the auditory experience of speech.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号