Similar articles
20 results found (search time: 62 ms)
1.
Background: The key symptoms of auditory neglect include left extinction on tasks of dichotic and/or diotic listening and a rightward shift in locating sounds. The anatomical correlates of the latter are relatively well understood, but no systematic studies have examined auditory extinction. Here, we performed a systematic study of the anatomo-clinical correlates of extinction by using dichotic and/or diotic listening tasks. Methods: In total, 20 patients with right hemispheric damage (RHD) and 19 with left hemispheric damage (LHD) performed dichotic and diotic listening tasks. Each task consists of the simultaneous presentation of word pairs; in the dichotic task, 1 word is presented to each ear, and in the diotic task, each word is lateralized by means of interaural time differences and presented to one side. Results and conclusion: RHD was associated with exclusively contralesional extinction in dichotic or diotic listening, whereas in selected cases, LHD led to contra- or ipsilesional extinction. Bilateral symmetrical extinction occurred in RHD or LHD, with dichotic or diotic listening. The anatomical correlates of these extinction profiles offer an insight into the organisation of the auditory and attentional systems. First, left extinction in dichotic versus diotic listening involves different parts of the right hemisphere, which explains the double dissociation between these 2 neglect symptoms. Second, contralesional extinction in the dichotic task relies on homologous regions in either hemisphere. Third, ipsilesional extinction in dichotic listening after LHD was associated with lesions of the intrahemispheric white matter, interrupting callosal fibres outside their midsagittal or periventricular trajectory. Fourth, bilateral symmetrical extinction was associated with large parieto-fronto-temporal LHD or smaller parieto-temporal RHD, which suggests that divided attention, supported by the right hemisphere, and auditory streaming, supported by the left, likely play a critical role.
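The diotic task lateralizes words purely by interaural time differences (ITDs): both ears receive the same signal, with the far ear's copy delayed by a fraction of a millisecond. A minimal sketch of such a stimulus in NumPy; the 500 µs ITD and the function name `lateralize` are illustrative assumptions, not details from the study:

```python
import numpy as np

def lateralize(signal, itd_s, fs, side="left"):
    """Return a stereo pair (rows: left, right) in which `signal` is
    lateralized by an interaural time difference: the far ear's copy is
    delayed by `itd_s` seconds, so the sound is perceived toward `side`."""
    delay = int(round(itd_s * fs))                 # delay in samples
    lead = np.concatenate([signal, np.zeros(delay)])
    lag = np.concatenate([np.zeros(delay), signal])
    # the leading ear hears the sound first -> perceived on that side
    if side == "left":
        return np.stack([lead, lag])
    return np.stack([lag, lead])

fs = 44100
t = np.arange(int(0.2 * fs)) / fs
tone = np.sin(2 * np.pi * 440 * t)
stereo = lateralize(tone, itd_s=500e-6, fs=fs, side="left")  # 500 µs ITD
```

Presented over headphones, the two channels are physically identical waveforms shifted in time, which is exactly why extinction for such stimuli probes binaural spatial processing rather than monaural input.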

2.
Neural mechanisms underlying auditory feedback control of speech
The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 136 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech.

3.
The role of attention in speech comprehension is not well understood. We used fMRI to study the neural correlates of auditory word, pseudoword, and nonspeech (spectrally rotated speech) perception during a bimodal (auditory, visual) selective attention task. In three conditions, Attend Auditory (ignore visual), Ignore Auditory (attend visual), and Visual (no auditory stimulation), 28 subjects performed a one-back matching task in the assigned attended modality. The visual task, attending to rapidly presented Japanese characters, was designed to be highly demanding in order to prevent attention to the simultaneously presented auditory stimuli. Regardless of stimulus type, attention to the auditory channel enhanced activation by the auditory stimuli (Attend Auditory>Ignore Auditory) in bilateral posterior superior temporal regions and left inferior frontal cortex. Across attentional conditions, there were main effects of speech processing (word+pseudoword>rotated speech) in left orbitofrontal cortex and several posterior right hemisphere regions, though these areas also showed strong interactions with attention (larger speech effects in the Attend Auditory than in the Ignore Auditory condition) and no significant speech effects in the Ignore Auditory condition. Several other regions, including the postcentral gyri, left supramarginal gyrus, and temporal lobes bilaterally, showed similar interactions due to the presence of speech effects only in the Attend Auditory condition. Main effects of lexicality (word>pseudoword) were isolated to a small region of the left lateral prefrontal cortex. Examination of this region showed significant word>pseudoword activation only in the Attend Auditory condition. Several other brain regions, including left ventromedial frontal lobe, left dorsal prefrontal cortex, and left middle temporal gyrus, showed Attention x Lexicality interactions due to the presence of lexical activation only in the Attend Auditory condition. 
These results support a model in which neutral speech presented in an unattended sensory channel undergoes relatively little processing beyond the early perceptual level. Specifically, processing of phonetic and lexical-semantic information appears to be very limited in such circumstances, consistent with prior behavioral studies.

4.
In the present experiment, 25 adult subjects discriminated speech tokens ([ba]/[da]) or made pitch judgments on tone stimuli (rising/falling) under both binaural and dichotic listening conditions. We observed that when listeners performed tasks under the dichotic conditions, during which greater demands are made on auditory selective attention, activation within the posterior (parietal) attention system and at primary processing sites in the superior temporal and inferior frontal regions was increased. The cingulate gyrus within the anterior attention system was not influenced by this manipulation. Hemispheric differences between speech and nonspeech tasks were also observed, both at Broca's area within the inferior frontal gyrus and in the middle temporal gyrus.

5.
The neural correlates of speech monitoring overlap with neural correlates of speech comprehension and production. However, it is unclear how these correlates are organized within functional connectivity networks, and how these networks interact to subserve speech monitoring. We applied spatial and temporal independent component analysis (sICA and tICA) to a functional magnetic resonance imaging (fMRI) experiment involving overt speech production, comprehension and monitoring. SICA and tICA respectively decompose fMRI data into spatial and temporal components that can be interpreted as distributed estimates of functional connectivity and concurrent temporal dynamics in one or more regions of fMRI activity. Using sICA we found multiple connectivity components that were associated with speech perception (auditory and left fronto-temporal components) and production (bilateral central sulcus and default-mode components), but not with speech monitoring. In order to further investigate if speech monitoring could be mapped in the auditory cortex as a unique temporal process, we applied tICA to voxels of the sICA auditory component. Amongst the temporal components we found a single, unique component that matched the speech monitoring temporal pattern. We used this temporal component as a new predictor for whole-brain activity and found that it correlated positively with bilateral auditory cortex, and negatively with the supplementary motor area (SMA). Psychophysiological interaction analysis of task and activity in bilateral auditory cortex and SMA showed that functional connectivity changed with task conditions. These results suggest that speech monitoring entails a dynamic coupling between different functional networks. Furthermore, we demonstrate that overt speech comprises multiple networks that are associated with specific speech-related processes. 
We conclude that the sequential combination of sICA and tICA is a powerful approach for the analysis of complex, overt speech tasks.
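sICA and tICA differ only in which axis of the time × voxel data matrix is treated as the sample axis: sICA draws samples across voxels to find spatially independent maps, while tICA draws samples across time to find temporally independent time courses. A toy sketch on synthetic data using scikit-learn's `FastICA` (an illustrative stand-in; the study's actual decomposition pipeline is not specified here):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_time, n_vox, k = 200, 500, 3

# Synthetic "fMRI": k networks, each a spatial map paired with a time course
maps = rng.laplace(size=(k, n_vox))        # spatially independent maps
tcs = rng.laplace(size=(n_time, k))        # temporally independent time courses
data = tcs @ maps                          # time x voxel data matrix

# Spatial ICA: voxels are the samples -> sources are spatial maps
sica = FastICA(n_components=k, random_state=0, max_iter=1000)
spatial_maps = sica.fit_transform(data.T)  # (n_vox, k) independent maps
spatial_tcs = sica.mixing_                 # (n_time, k) associated time courses

# Temporal ICA: time points are the samples -> sources are time courses
tica = FastICA(n_components=k, random_state=0, max_iter=1000)
temporal_tcs = tica.fit_transform(data)    # (n_time, k) independent time courses
temporal_maps = tica.mixing_               # (n_vox, k) associated maps
```

The sequential strategy in the study corresponds to running the second decomposition only on the voxels selected by a component of the first, which is what lets a temporally distinct process (speech monitoring) emerge within a single spatial network.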

6.
The present study used fMRI to investigate the relationship between stimulus presentation mode and attentional instruction in a free-report dichotic listening (DL) task with consonant-vowel (CV) syllables. Binaural and dichotic CV syllables were randomly presented to the subjects during four different instructional conditions: a passive listening instruction and three active instructions where subjects listened to both ears, the right ear, and the left ear, respectively. The results showed that dichotic presentations activated areas in the superior temporal gyrus, middle and inferior frontal gyrus and the cingulate cortex to a larger extent than binaural presentations. Moreover, the results showed that the increase in activation in these areas was differentially dependent on presentation mode and attentional instruction. Thus, it seems that speech perception, as studied with the DL procedure, involves a cortical network extending beyond primary speech perception areas in the brain, also including prefrontal cortex.
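Behavioral performance in free-report DL with CV syllables is conventionally summarized with a laterality index over correct reports from each ear, where positive values indicate the typical right-ear advantage. The formula is standard in the DL literature; the helper name and example counts below are ours:

```python
def laterality_index(right_correct: int, left_correct: int) -> float:
    """(R - L) / (R + L): positive values mean a right-ear advantage."""
    return (right_correct - left_correct) / (right_correct + left_correct)

# e.g. 30 correct right-ear reports vs 20 correct left-ear reports
li = laterality_index(30, 20)
```

Attentional instructions (attend right vs. attend left) typically shift this index, which is what makes the free-report baseline condition a useful reference point.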

7.
Neurophysiological research suggests that understanding the actions of others harnesses neural circuits that would be used to produce those actions directly. We used fMRI to examine brain areas active during language comprehension in which the speaker was seen and heard while talking (audiovisual), heard but not seen (audio-alone), or seen talking with the audio track removed (video-alone). We found that audiovisual speech perception activated a network of brain regions that included cortical motor areas involved in planning and executing speech production and areas subserving proprioception related to speech production. These regions included the posterior part of the superior temporal gyrus and sulcus, the pars opercularis, premotor cortex, adjacent primary motor cortex, somatosensory cortex, and the cerebellum. Activity in premotor cortex and posterior superior temporal gyrus and sulcus was modulated by the number of visually distinguishable phonemes in the stories. None of these regions was activated to the same extent in the audio- or video-alone conditions. These results suggest that integrating observed facial movements into the speech perception process involves a network of multimodal brain regions associated with speech production and that these areas contribute less to speech perception when only auditory signals are present. This distributed network could participate in recognition processing by interpreting visual information about mouth movements as phonetic information based on motor commands that could have generated those movements.

8.
Although several neuroimaging studies have reported pitch-evoked activations at the lateral end of Heschl's gyrus, it is still under debate whether these findings truly represent activity in relation to the perception of pitch or merely stimulus-related features of pitch-evoking sounds. We investigated this issue in a functional magnetic resonance imaging (fMRI) experiment using pure tones in noise and dichotic pitch sequences, which either contained a melody or a fixed pitch. Dichotic pitch evokes a sensation of pitch only in binaural listening conditions, while the monaural signal cannot be distinguished from random noise. Our data show similar neural activations for both tones in noise and dichotic pitch, which are perceptually similar, but physically different. Pitch-related activation was found at the lateral end of Heschl's gyrus in both hemispheres, providing new evidence for a general involvement of this region in pitch processing. In line with prior studies, we found melody-related activation in the planum temporale and planum polare, but not in primary auditory areas. These results support the view of a general representation of pitch in auditory cortex, irrespective of the physical attributes of the pitch-evoking sound.
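A dichotic (Huggins-type) pitch can be synthesized by giving both ears the same white noise except for an interaural phase inversion in a narrow band around the target frequency; each monaural channel remains spectrally flat noise, yet a faint pitch is heard binaurally. A sketch under those textbook assumptions (the function name and parameter values are illustrative, not the study's stimuli):

```python
import numpy as np

def huggins_pitch(fs=44100, dur=0.5, f0=600.0, bw=0.16, seed=0):
    """Return a stereo pair (left, right): identical white noise in both
    ears except that one ear's phase is inverted in a narrow band around
    f0 (bandwidth = bw * f0). Monaurally each channel is plain noise;
    binaurally a faint pitch near f0 is perceived."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    noise = rng.standard_normal(n)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    band = np.abs(freqs - f0) < (bw * f0) / 2
    spec_shift = spec.copy()
    spec_shift[band] *= -1                 # 180-degree phase transition
    other = np.fft.irfft(spec_shift, n)
    return np.stack([noise, other])

stereo = huggins_pitch()
```

Because only the phase, never the magnitude, differs between the ears, any activation difference relative to plain noise must reflect binaural pitch extraction rather than monaural spectral cues.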

9.
The purpose of this study was to reveal functional areas of the brain modulating processing of selective auditory or visual attention toward utterances. Regional cerebral blood flow was measured in six normal volunteers using positron emission tomography during two selective attention tasks and a control condition. The auditory task activated the auditory, inferior parietal, prefrontal, and anterior cingulate cortices. The visual task activated the visual association, inferior parietal, and prefrontal cortices. Both conditions activated the same area in the superior temporal sulcus. During the visual task, deactivation was observed in the auditory cortex. These results indicate that there exists a modality-dependent selective attention mechanism which activates or deactivates cortical areas in different ways.

10.
Comprehension of information conveyed by the tone of voice is highly important for successful social interactions (Grandjean et al., 2006). Based on lesion data, a superiority of the right hemisphere for cerebral processing of speech prosody has been assumed. According to an early neuroanatomical model, prosodic information is encoded within distinct right-sided perisylvian regions which are organized in complete analogy to the left-sided language areas (Ross, 1981). While the majority of lesion studies are in line with the assumption that the right temporal cortex is highly important for the comprehension of speech melody (Adolphs et al., 2001; Borod et al., 2002; Heilman et al., 1984), some studies indicate that a widespread network of partially bilateral cerebral regions contributes to prosody processing, including the frontal cortex (Adolphs et al., 2002; Hornak et al., 2003; Rolls, 1999) and the basal ganglia (Cancelliere & Kertesz, 1990; Pell & Leonard, 2003). More recently, functional imaging experiments have helped to differentiate specific functions of distinct brain areas contributing to recognition of speech prosody (Ackermann et al., 2004; Schirmer & Kotz, 2006; Wildgruber et al., 2006). Observations in healthy subjects indicate a strong association of cerebral responses and acoustic voice properties in some regions (stimulus-driven effects), whereas other areas show modulation of activation linked to the focusing of attention on specific task components (task-dependent effects). Here we present a refined model of prosody processing and cross-modal integration of emotional signals from face and voice which differentiates successive steps of cerebral processing involving auditory analysis and multimodal integration of communicative signals within the temporal cortex and evaluative judgements within the frontal lobes.

11.
Sensory-motor interactions between auditory and articulatory representations in the dorsal auditory processing stream are suggested to contribute to speech perception, especially when bottom-up information alone is insufficient for purely auditory perceptual mechanisms to succeed. Here, we hypothesized that the dorsal stream responds more vigorously to auditory syllables when one is engaged in a phonetic identification/repetition task subsequent to perception compared to passive listening, and that this effect is further augmented when the syllables are embedded in noise. To this end, we recorded magnetoencephalography while twenty subjects listened to speech syllables, with and without noise masking, in four conditions: passive perception; overt repetition; covert repetition; and overt imitation. Compared to passive listening, left-hemispheric N100m equivalent current dipole responses were amplified and shifted posteriorly when perception was followed by a covert repetition task. Cortically constrained minimum-norm estimates showed amplified left supramarginal and angular gyri responses in the covert repetition condition at ~100 ms from stimulus onset. Longer-latency responses at ~200 ms were amplified in the covert repetition condition in the left angular gyrus and in all three active conditions in the left premotor cortex, with further enhancements when the syllables were embedded in noise. Phonetic categorization accuracy and the magnitude of voice pitch change between the overt repetition and imitation conditions correlated with left premotor cortex responses at ~100 and ~200 ms, respectively. Together, these results suggest that dorsal stream involvement in speech perception depends on perceptual task demands and that phonetic categorization performance is influenced by the left premotor cortex.

12.
Toyomura A, Fujii T, Kuriki S. NeuroImage 2011, 57(4): 1507-1516
External auditory pacing, such as metronome sound and speaking in unison with others, has a fluency-enhancing effect in stuttering speakers. The present study investigated the neural mechanism of this fluency-enhancing effect by using functional magnetic resonance imaging (fMRI). Twelve stuttering speakers and 12 nonstuttering controls were scanned while performing metronome-timed speech, choral speech, and normal speech. Compared to nonstuttering controls, stuttering speakers showed a significantly greater increase in activation in the superior temporal gyrus under both the metronome-timed and choral speech conditions relative to the normal speech condition. The caudate, globus pallidus, and putamen of the basal ganglia showed clearly different patterns of signal change from rest among the different conditions and between stuttering and nonstuttering speakers. The signal change of stuttering speakers was significantly lower than that of nonstuttering controls under the normal speech condition but was raised to the level of the controls, with no intergroup difference, in metronome-timed speech. In contrast, under the chorus condition the signal change of stuttering speakers remained lower than that of the controls. Correlation analysis further showed that the signal change of the basal ganglia and motor areas was negatively correlated with stuttering severity, but it was not significantly correlated with the stuttering rate during MRI scanning. These findings shed light on the specific neural processing of stuttering speakers when they time their speech to auditory stimuli, and provide additional evidence of the efficacy of external auditory pacing.

13.
We investigated the perception and categorization of speech (vowels, syllables) and non-speech (tones, tonal contours) stimuli using MEG. In a delayed-match-to-sample paradigm, participants listened to two sounds and decided if they sounded exactly the same or different (auditory discrimination, AUD), or if they belonged to the same or different categories (category discrimination, CAT). Stimuli across the two conditions were identical; the category definitions for each kind of sound were learned in a training session before recording. MEG data were analyzed using an induced wavelet transform method to investigate task-related differences in time-frequency patterns. In auditory cortex, for both AUD and CAT conditions, an alpha (8-13 Hz) band activation enhancement during the delay period was found for all stimulus types. A clear difference between AUD and CAT conditions was observed for the non-speech stimuli in auditory areas and for both speech and non-speech stimuli in frontal areas. The results suggest that alpha band activation in auditory areas is related to both working memory and categorization for new non-speech stimuli. The fact that the dissociation between speech and non-speech occurred in auditory areas, but not frontal areas, points to different categorization mechanisms and networks for newly learned (non-speech) and natural (speech) categories.

14.
In a crowded scene we can effectively focus our attention on a specific speaker while largely ignoring sensory inputs from other speakers. How attended speech inputs are extracted from similar competing information has been studied primarily in the auditory domain. Here we examined the deployment of visuo-spatial attention in multiple-speaker scenarios. Steady-state visual evoked potentials (SSVEP) were monitored as a real-time index of visual attention towards three competing speakers. Participants were instructed to detect a target syllable from the center speaker and ignore syllables from two flanking speakers. The study incorporated interference trials (syllables from three speakers), no-interference trials (syllable from center speaker only), and periods without speech stimulation in which static faces were presented. An enhancement of flanking-speaker-induced SSVEP was found 70-220 ms after sound onset over left temporal scalp during interference trials. This enhancement was negatively correlated with participants' behavioral performance: those who showed the largest enhancements had the worst speech recognition performance. Additionally, poorly performing participants exhibited enhanced flanking-speaker-induced SSVEP over visual scalp during periods without speech stimulation. The present study provides neurophysiologic evidence that the deployment of visuo-spatial attention to flanking speakers interferes with the recognition of multisensory speech signals under noisy environmental conditions.
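SSVEP "frequency tagging" works because each flickering stimulus drives a narrowband response at its own flicker frequency, so attention to a given speaker can be read out as the EEG amplitude at that speaker's tagging frequency. A minimal sketch of that readout on a synthetic signal; the tagging frequencies and amplitudes below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def ssvep_amplitudes(eeg, fs, tag_freqs):
    """Amplitude of the steady-state response at each tagging frequency,
    read off the amplitude spectrum of a single EEG channel."""
    n = len(eeg)
    spec = np.abs(np.fft.rfft(eeg)) * 2 / n       # amplitude spectrum
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return {f: spec[np.argmin(np.abs(freqs - f))] for f in tag_freqs}

fs, dur = 250, 4.0
t = np.arange(int(fs * dur)) / fs
# hypothetical tagging: attended center speaker at 10 Hz (larger response),
# ignored flankers at 12 and 15 Hz
eeg = (2.0 * np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 12 * t)
       + 0.5 * np.sin(2 * np.pi * 15 * t))
amps = ssvep_amplitudes(eeg, fs, [10, 12, 15])
```

The finding above corresponds, in these terms, to the flanker-frequency amplitudes being abnormally large in poor performers.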

15.
Wilson SM, Iacoboni M. NeuroImage 2006, 33(1): 316-325
Neural responses to unfamiliar non-native phonemes varying in the extent to which they can be articulated were studied with functional magnetic resonance imaging (fMRI). Both superior temporal (auditory) and precentral (motor) areas were activated by passive speech perception, and both distinguished non-native from native phonemes, with greater signal change in response to non-native phonemes. Furthermore, speech-responsive motor regions and superior temporal sites were functionally connected. However, only in auditory areas did activity covary with the producibility of non-native phonemes. These data suggest that auditory areas are crucial for the transformation from acoustic signal to phonetic code, but the motor system also plays an active role, which may involve the internal generation of candidate phonemic categorizations. These 'motor' categorizations would then be compared to the acoustic input in auditory areas. The data suggest that speech perception is neither purely sensory nor motor, but rather a sensorimotor process.

16.
An fMRI investigation of syllable sequence production
Bohland JW, Guenther FH. NeuroImage 2006, 32(2): 821-841
Fluent speech comprises sequences that are composed from a finite alphabet of learned words, syllables, and phonemes. The sequencing of discrete motor behaviors has received much attention in the motor control literature, but relatively little of it has focused directly on speech production. In this paper, we investigate the cortical and subcortical regions involved in organizing and enacting sequences of simple speech sounds. Sparse event-triggered functional magnetic resonance imaging (fMRI) was used to measure responses to preparation and overt production of non-lexical three-syllable utterances, parameterized by two factors: syllable complexity and sequence complexity. The comparison of overt production trials to preparation-only trials revealed a network related to the initiation of a speech plan, control of the articulators, and hearing one's own voice. This network included the primary motor and somatosensory cortices, auditory cortical areas, the supplementary motor area (SMA), the precentral gyrus of the insula, and portions of the thalamus, basal ganglia, and cerebellum. Additional stimulus complexity led to increased engagement of the basic speech network and recruitment of additional areas known to be involved in sequencing non-speech motor acts. In particular, the left hemisphere inferior frontal sulcus and posterior parietal cortex, and bilateral regions at the junction of the anterior insula and frontal operculum, the SMA and pre-SMA, the basal ganglia, anterior thalamus, and the cerebellum showed increased activity for more complex stimuli. We hypothesize mechanistic roles for the extended speech production network in the organization and execution of sequences of speech sounds.

17.
18.
Kondo H, Osaka N, Osaka M. NeuroImage 2004, 23(2): 670-679
Attention shifting in the working memory system plays an important role in goal-oriented behavior, such as reading, reasoning, and driving, because it involves several cognitive processes. This study identified brain activity leading to individual differences in attention shifting for dual-task performance by using the group comparison approach. A large-scale pilot study was initially conducted to select suitable good and poor performers. The fMRI experiment consisted of a dual-task condition and two single-task conditions. Under the dual-task condition, participants verified the status of letters while concurrently retaining arrow orientations. The behavioral results indicated that accuracy in arrow recognition was better in the good performers than in the poor performers under the dual-task condition but not under the single-task condition. Dual-task performance showed a positive correlation with mean signal change in the right anterior cingulate cortex (ACC) and right dorsolateral prefrontal cortex (DLPFC). Structural equation modeling indicated that effective connectivity between the right ACC and right DLPFC was present in the good performers but not in the poor performers, although activations of the task-dependent posterior regions were modulated by the right ACC and right DLPFC. We conclude that individual differences in attention shifting heavily depend on the functional efficiency of the cingulo-prefrontal network.

19.
Mortensen MV, Mirz F, Gjedde A. NeuroImage 2006, 31(2): 842-852
The left inferior prefrontal cortex (LIPC) is involved in speech comprehension by people who hear normally. In contrast, functional brain mapping has not revealed incremental activity in this region when users of cochlear implants (CI) comprehend speech without silent repetition. Functional brain maps identify significant changes of activity by comparing an active brain state with a presumed baseline condition. It is possible that cochlear implant users recruited alternative neuronal resources to the task in previous studies, but, in principle, it is also possible that an aberrant baseline condition masked the functional increase. To distinguish between the two possibilities, we tested the hypothesis that activity in the LIPC characterizes high speech comprehension in postlingually deaf CI users. We measured cerebral blood flow changes with positron emission tomography (PET) in CI users who listened passively to a range of speech and non-speech stimuli. The pattern of activation varied with the stimulus in users with high speech comprehension, unlike users with low speech comprehension. The high-comprehension group increased the activity in prefrontal and temporal regions of the cerebral cortex and in the right cerebellum. In these subjects, single words and speech raised activity in the LIPC, as well as in left and right temporal regions, both anterior and posterior, known to be activated in speech recognition and complex phoneme analysis in normal hearing. In subjects with low speech comprehension, sites of increased activity were observed only in the temporal lobes. We conclude that increased activity in areas of the LIPC and right temporal lobe is involved in speech comprehension after cochlear implantation.

20.
While the precise role of the anterior cingulate cortex (ACC) is still being discussed, it has been suggested that ACC activity might reflect the amount of mental effort associated with cognitive processing. So far, not much is known about the temporal dynamics of ACC activity in effort-related decision making or auditory attention, because fMRI is limited in its temporal resolution and electroencephalography (EEG) is limited in its spatial resolution. Single-trial coupling of EEG and fMRI can be used to predict the BOLD signal specifically related to amplitude variations of electrophysiological components. The striking feature of single-trial coupling is its ability to separate different aspects of the BOLD signal according to their specific relationship to a distinct neural process. In the present study we investigated 10 healthy subjects with a forced-choice reaction task under both low and high effort conditions and a control condition (passive listening) using simultaneous EEG and fMRI. We detected a significant effect of mental effort only for the N1 potential, but not for the P300 potential. In the fMRI analysis, ACC activation was present only in the high effort condition. We used single-trial coupling of EEG and fMRI in order to separate information specific to N1-amplitude variations from the unrelated BOLD response. Under high effort conditions we were able to detect circumscribed BOLD activations specific to the N1 potential in the ACC (t=4.7) and the auditory cortex (t=6.1). Comparing the N1-specific BOLD activity of the high effort condition versus the control condition we found only activation of the ACC (random effects analysis, corrected for multiple comparisons, t=4.4). These findings suggest a role of early ACC activation in effort-related decision making and provide a direct link between the N1 component and its corresponding BOLD signal.
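The core of single-trial EEG-fMRI coupling is a parametric regressor: sticks placed at trial onsets are scaled by each trial's (demeaned) N1 amplitude and convolved with a hemodynamic response, so the resulting BOLD predictor tracks only the amplitude *variations* of that component. A schematic sketch under simplified assumptions (single-gamma HRF, arbitrary onsets and amplitudes; not the study's actual estimation pipeline):

```python
import numpy as np
from math import factorial

def hrf(t):
    # simple single-gamma hemodynamic response peaking near 5 s;
    # a textbook approximation, not the exact basis used in the study
    return t**5 * np.exp(-t) / factorial(5)

def n1_bold_predictor(onsets_s, n1_amps, tr, n_scans, dt=0.1):
    """BOLD regressor whose trial-wise height follows the demeaned
    single-trial N1 amplitude, sampled at scan times."""
    n = int(round(n_scans * tr / dt))
    sticks = np.zeros(n)
    amps = np.asarray(n1_amps, float)
    amps -= amps.mean()                    # keep only amplitude variations
    for t0, a in zip(onsets_s, amps):
        sticks[int(round(t0 / dt))] += a
    kernel = hrf(np.arange(0, 32, dt))     # 32 s response window
    reg = np.convolve(sticks, kernel)[:n]
    return reg[::int(round(tr / dt))]      # one value per scan

# hypothetical trial onsets (s) and trial-wise N1 amplitudes (a.u.)
pred = n1_bold_predictor([5, 20, 35], [8.0, 12.0, 10.0], tr=2.0, n_scans=30)
```

Correlating such a predictor with the fMRI time series, as opposed to a plain task regressor, is what isolates the N1-specific portion of the ACC and auditory-cortex BOLD response.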


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)