Similar Articles
1.
Auditory hallucinations are thought to arise through the misidentification of self-generated verbal material as alien. The neural mechanisms that normally mediate the differentiation of self-generated from nonself speech are unclear. We investigated this in healthy volunteers using functional MRI. Eleven healthy volunteers were scanned whilst listening to a series of prerecorded words. The source (self/nonself) and acoustic quality (undistorted/distorted) of the speech was varied across trials. Participants indicated via a button press whether the words were spoken in their own or another person's voice. Listening to self-generated words was associated with more activation in the left inferior frontal and right anterior cingulate cortex than listening to words in another person's voice, which was associated with greater engagement of the lateral temporal cortex bilaterally. Listening to distorted speech was associated with activation in the inferior frontal and anterior cingulate cortex. There was an interaction between the effects of source of speech and distortion on activation in the left temporal cortex. In the presence of distortion, participants were more likely to misidentify their voice as that of another. This misattribution of self-generated speech was associated with reduced engagement of the cingulate and prefrontal cortices. The evaluation of auditory speech involves a network including the inferior frontal, anterior cingulate, and lateral temporal cortex. The degree to which different areas within this network are engaged varies with the source and acoustic quality of the speech. Accurate identification of one's own speech appears to depend on cingulate and prefrontal activity.

2.
Localization of cerebral activity during simple singing
Cerebral blood flow (CBF) was measured with PET during rudimentary singing of a single pitch and vowel, contrasted to passive listening to complex tones. CBF increases in cortical areas related to motor control were seen in the supplementary motor area, anterior cingulate cortex, precentral gyri, anterior insula (and the adjacent inner face of the precentral operculum) and cerebellum, replicating most of those previously seen during speech. Increases in auditory cortex were seen within right Heschl's gyrus, and in the posterior superior temporal plane (and the immediately overlying parietal cortex). Since cortex near right Heschl's has been linked to complex pitch perception, its asymmetric activation here may be related to analyzing the fundamental frequency of one's own voice for feedback-guided modulation.

3.
Several studies report that patients with schizophrenia who experience auditory verbal hallucinations (AVH) tend to misidentify their own speech as that of somebody else. We tested the hypothesis that this tendency is associated with poor functional integration within the network of regions that mediate the evaluation of speech. Using functional magnetic resonance imaging, we measured brain responses from 11 schizophrenics with AVH, 10 schizophrenics without AVH, and 10 healthy controls. Stimuli comprised prerecorded words, which varied for their source (self, alien) and acoustic quality (undistorted, distorted). Participants had to indicate whether each word was spoken in their own or another person's voice via a button press. Using dynamic causal modeling, we estimated the impact of one region over another ("effective connectivity") and how this was modulated by source and distortion. In controls and in patients without AVH, the connectivity between left superior temporal and anterior cingulate cortex was significantly greater for alien- than for self-generated speech; in contrast, the reverse trend was found in schizophrenic patients with AVH. In conclusion, when patients with AVH appraise their own speech we find impaired functional integration between left superior temporal and anterior cingulate cortex. Although this finding is based on external rather than internal speech, the same mechanism may contribute to the faulty appraisal of inner speech that putatively underlies AVH.

4.
Learning to articulate novel combinations of phonemes that form new words through a small number of auditory exposures is crucial for development of language and our capacity for fluent speech, yet the underlying neural mechanisms are largely unknown. We used functional magnetic resonance imaging to reveal repetition-suppression effects accompanying such learning and reflecting discrete changes in brain activity due to stimulus-specific fine-tuning of neural representations. In an event-related design, subjects were repeatedly exposed to auditory pseudowords, which they covertly repeated. Covert responses during scanning and postscanning overt responses showed evidence of learning. An extensive set of regions activated bilaterally when listening to and covertly repeating novel pseudoword stimuli. Activity decreased, with repeated exposures, in a subset of these areas mostly in the left hemisphere, including premotor cortex, supplementary motor area, inferior frontal gyrus, superior temporal cortex, and cerebellum. The changes most likely reflect more efficient representation of the articulation patterns of these novel words in two connected systems, one involved in the perception of pseudoword stimuli (in the left superior temporal cortex) and one for processing the output of speech (in the left frontal cortex). Both of these systems contribute to vocal learning.

5.
Phonetic detail and lateralization of inner speech during covert sentence reading as well as overt reading in 32 right‐handed healthy participants undergoing 3T fMRI were investigated. The number of voiceless and voiced consonants in the processed sentences was systematically varied. Participants listened to sentences, read them covertly, silently mouthed them while reading, and read them overtly. Condition comparisons allowed for the study of effects of externally versus self‐generated auditory input and of somatosensory feedback related to or independent of voicing. In every condition, increased voicing modulated bilateral voice‐selective regions in the superior temporal sulcus without any lateralization. The enhanced temporal modulation and/or higher spectral frequencies of sentences rich in voiceless consonants induced left‐lateralized activation of phonological regions in the posterior temporal lobe, regardless of condition. These results provide evidence that inner speech during reading codes detail as fine as consonant voicing. Our findings suggest that the fronto‐temporal internal loops underlying inner speech target different temporal regions. These regions differ in their sensitivity to inner or overt acoustic speech features. More slowly varying acoustic parameters are represented more anteriorly and bilaterally in the temporal lobe while quickly changing acoustic features are processed in more posterior left temporal cortices. Furthermore, processing of external auditory feedback during overt sentence reading was sensitive to consonant voicing only in the left superior temporal cortex. Voicing did not modulate left‐lateralized processing of somatosensory feedback during articulation or bilateral motor processing. This suggests voicing is primarily monitored in the auditory rather than in the somatosensory feedback channel. Hum Brain Mapp 38:493–508, 2017. © 2016 Wiley Periodicals, Inc.

6.
Modulation of vocal pitch is a key speech feature that conveys important linguistic and affective information. Auditory feedback is used to monitor and maintain pitch. We examined induced neural high gamma power (HGP) (65–150 Hz) using magnetoencephalography during pitch feedback control. Participants phonated into a microphone while hearing their auditory feedback through headphones. During each phonation, a single real‐time 400 ms pitch shift was applied to the auditory feedback. Participants compensated by rapidly changing their pitch to oppose the pitch shifts. This behavioral change required coordination of the neural speech motor control network, including integration of auditory and somatosensory feedback to initiate change in motor plans. We found increases in HGP across both hemispheres within 200 ms of pitch shifts, covering left sensory and right premotor, parietal, temporal, and frontal regions, involved in sensory detection and processing of the pitch shift. Later responses to pitch shifts (200–300 ms) were right dominant, in parietal, frontal, and temporal regions. Timing of activity in these regions indicates their role in coordinating motor change and detecting and processing of the sensory consequences of this change. Subtracting out cortical responses during passive listening to recordings of the phonations isolated HGP increases specific to speech production, highlighting right parietal and premotor cortex, and left posterior temporal cortex involvement in the motor response. Correlation of HGP with behavioral compensation demonstrated right frontal region involvement in modulating participant's compensatory response. This study highlights the bihemispheric sensorimotor cortical network involvement in auditory feedback‐based control of vocal pitch. Hum Brain Mapp 37:1474‐1485, 2016. © 2016 Wiley Periodicals, Inc.

7.
Verbal fluency is a widely used neuropsychological paradigm. In fMRI implementations, conventional unpaced (self-paced) versions are suboptimal due to uncontrolled timing of responses, and overt responses carry the risk of motion artifact. We investigated the behavioral and neurofunctional effects of response pacing and overt speech in semantic category-driven word generation. Twelve right-handed adults (8 females), ages 21-37, were scanned in four conditions each: paced-overt, paced-covert, unpaced-overt, and unpaced-covert. There was no significant difference in the number of exemplars generated between overt versions of the paced and unpaced conditions. Imaging results for category-driven word generation overall showed left-hemispheric activation in inferior frontal cortex, premotor cortex, cingulate gyrus, thalamus, and basal ganglia. Direct comparison of generation modes revealed significantly greater activation for the paced compared to unpaced conditions in right superior temporal, bilateral middle frontal, and bilateral anterior cingulate cortex, including regions associated with sustained attention, motor planning, and response inhibition. Covert (compared to overt) conditions showed significantly greater effects in right parietal and anterior cingulate, as well as left middle temporal and superior frontal regions. We conclude that paced overt paradigms are useful adaptations of conventional semantic fluency in fMRI, given their superiority with regard to control over and monitoring of behavioral responses. However, response pacing is associated with additional non-linguistic effects related to response inhibition, motor preparation, and sustained attention.

8.
When a speaker's voice returns to one's own ears with a 200-ms delay, the delay causes the speaker to speak less fluently. This phenomenon is called a delayed auditory feedback (DAF) effect. To investigate neural mechanisms of speech processing through the DAF effect, we conducted a functional magnetic resonance imaging (fMRI) experiment, in which we designed a paradigm to explore conscious overt-speech processing and automatic overt-speech processing separately, while reducing articulatory motion artifacts. The subjects were instructed to (1) read aloud visually presented sentences under real-time auditory feedback (NORMAL), (2) read aloud rapidly under real-time auditory feedback (FAST), (3) read aloud slowly under real-time auditory feedback (SLOW), and (4) read aloud under DAF (DELAY). In the contrasts of DELAY-NORMAL, DELAY-FAST, and DELAY-SLOW, the bilateral superior temporal gyrus (STG), the supramarginal gyrus (SMG), and the middle temporal gyrus (MTG) showed significant activation. Moreover, we found that the STG activation was correlated with the degree of DAF effect for all subjects. Because the temporo-parietal regions did not show significant activation in the comparisons among NORMAL, FAST, and SLOW conditions, we can exclude the possibility that their activation is due to speech rates or enhanced attention to altered speech sounds. These results suggest that the temporo-parietal regions function as a conscious self-monitoring system to support an automatic speech production system.

9.
The left superior temporal cortex, which supports linguistic functions, has consistently been reported to activate during auditory-verbal hallucinations in schizophrenia patients. It has been suggested that auditory hallucinations and the processing of normal external speech compete for common neurophysiological resources. We tested the hypothesis of a negative relationship between the clinical severity of hallucinations and local brain activity in posterior linguistic regions while patients were listening to external speech. Fifteen right-handed patients with schizophrenia and daily auditory hallucinations for at least 3 months were studied with event-related fMRI while listening to sentences in French or to silence. Severity of hallucinations, assessed using the auditory hallucination subscales of the Psychotic Symptom Rating Scales (PSYRATS) and of the Scale for the Assessment of Positive Symptoms (SAPS-AH), negatively correlated with activation in the left temporal superior region in the French minus silence condition. This finding supports the hypothesis that auditory hallucinations compete with normal external speech for processing sites within the temporal cortex in schizophrenia.

10.
Musical training is associated with increased structural and functional connectivity between auditory sensory areas and higher-order brain networks involved in speech and motor processing. Whether such changed connectivity patterns facilitate the cortical propagation of speech information in musicians remains poorly understood. We here used magnetoencephalography (MEG) source imaging and a novel seed-based intersubject phase-locking approach to investigate the effects of musical training on the interregional synchronization of stimulus-driven neural responses during listening to naturalistic continuous speech presented in silence. MEG data were obtained from 20 young human subjects (both sexes) with different degrees of musical training. Our data show robust bilateral patterns of stimulus-driven interregional phase synchronization between auditory cortex and frontotemporal brain regions previously associated with speech processing. Stimulus-driven phase locking was maximal in the delta band, but was also observed in the theta and alpha bands. The individual duration of musical training was positively associated with the magnitude of stimulus-driven alpha-band phase locking between auditory cortex and parts of the dorsal and ventral auditory processing streams. These findings provide evidence for a positive relationship between musical training and the propagation of speech-related information between auditory sensory areas and higher-order processing networks, even when speech is presented in silence. We suggest that the increased synchronization of higher-order cortical regions to auditory cortex may contribute to the previously described musician advantage in processing speech in background noise. SIGNIFICANCE STATEMENT: Musical training has been associated with widespread structural and functional brain plasticity. It has been suggested that these changes benefit the production and perception of music but can also translate to other domains of auditory processing, such as speech. We developed a new magnetoencephalography intersubject analysis approach to study the cortical synchronization of stimulus-driven neural responses during the perception of continuous natural speech and its relationship to individual musical training. Our results provide evidence that musical training is associated with higher synchronization of stimulus-driven activity between brain regions involved in early auditory sensory and higher-order processing. We suggest that the increased synchronized propagation of speech information may contribute to the previously described musician advantage in processing speech in background noise.

11.
Inner speech has been implicated in important aspects of normal and atypical cognition, including the development of auditory hallucinations. Studies to date have focused on covert speech elicited by simple word or sentence repetition, while ignoring richer and arguably more psychologically significant varieties of inner speech. This study compared neural activation for inner speech involving conversations (‘dialogic inner speech’) with single-speaker scenarios (‘monologic inner speech’). Inner speech-related activation differences were then compared with activations relating to Theory-of-Mind (ToM) reasoning and visual perspective-taking in a conjunction design. Generation of dialogic (compared with monologic) scenarios was associated with a widespread bilateral network including left and right superior temporal gyri, precuneus, posterior cingulate and left inferior and medial frontal gyri. Activation associated with dialogic scenarios and ToM reasoning overlapped in areas of right posterior temporal cortex previously linked to mental state representation. Implications for understanding verbal cognition in typical and atypical populations are discussed.

12.
The act of listening to speech activates a large network of brain areas. In the present work, a novel data‐driven technique (the combination of independent component analysis and Granger causality) was used to extract brain network dynamics from an fMRI study of passive listening to Words, Pseudo‐Words, and Reverse‐played words. Using this method we show the functional connectivity modulations among classical language regions (Broca's and Wernicke's areas) and inferior parietal, somatosensory, and motor areas and right cerebellum. Word listening elicited a compact pattern of connectivity within a parieto‐somato‐motor network and between the superior temporal and inferior frontal gyri. Pseudo‐Word stimuli induced activities similar to the Word condition, which were characterized by a highly recurrent connectivity pattern, mostly driven by the temporal lobe activity. Also the Reversed‐Word condition revealed an important influence of temporal cortices, but no integrated activity of the parieto‐somato‐motor network. In parallel, the right cerebellum lost its functional connection with motor areas, present in both Word and Pseudo‐Word listening. The inability of the participant to produce the Reversed‐Word stimuli also evidenced two separate networks: the first was driven by frontal areas and the right cerebellum toward somatosensory cortices; the second was triggered by temporal and parietal sites towards motor areas. Summing up, our results suggest that semantic content modulates the general compactness of network dynamics as well as the balance between frontal and temporal language areas in driving those dynamics. The degree of reproducibility of auditory speech material modulates the connectivity pattern within and toward somatosensory and motor areas. Hum Brain Mapp, 2010. © 2009 Wiley‐Liss, Inc.

13.
Observing a speaker's articulatory gestures can contribute considerably to auditory speech perception. At the level of neural events, seen articulatory gestures can modify auditory cortex responses to speech sounds and modulate auditory cortex activity also in the absence of heard speech. However, possible effects of attention on this modulation have remained unclear. To investigate the effect of attention on visual speech-induced auditory cortex activity, we scanned 10 healthy volunteers with functional magnetic resonance imaging (fMRI) at 3 T during simultaneous presentation of visual speech gestures and moving geometrical forms, with the instruction to either focus on or ignore the seen articulations. Secondary auditory cortex areas in the bilateral posterior superior temporal gyrus and planum temporale were active both when the articulatory gestures were ignored and when they were attended to. However, attention to visual speech gestures enhanced activity in the left planum temporale compared to the situation when the subjects saw identical stimuli but engaged in a nonspeech motion discrimination task. These findings suggest that attention to visually perceived speech gestures modulates auditory cortex function and that this modulation takes place at a hierarchically relatively early processing level.

14.
Research in auditory neuroscience has largely neglected the possible effects of different listening tasks on activations of auditory cortex (AC). In the present study, we used high‐resolution fMRI to compare human AC activations with sounds presented during three auditory and one visual task. In all tasks, subjects were presented with pairs of Finnish vowels, noise bursts with pitch and Gabor patches. In the vowel pairs, one vowel was always either a prototypical /i/ or /ae/ (separately defined for each subject) or a nonprototype. In different task blocks, subjects were either required to discriminate (same/different) vowel pairs, to rate vowel “goodness” (first/second sound was a better exemplar of the vowel class), to discriminate pitch changes in the noise bursts, or to discriminate Gabor orientation changes. We obtained distinctly different AC activation patterns to identical sounds presented during the four task conditions. In particular, direct comparisons between the vowel tasks revealed stronger activations during vowel discrimination in the anterior and posterior superior temporal gyrus (STG), while the vowel rating task was associated with increased activations in the inferior parietal lobule (IPL). We also found that AC areas in or near Heschl's gyrus (HG) were sensitive to the speech‐specific difference between a vowel prototype and nonprototype during active listening tasks. These results show that AC activations to speech sounds are strongly dependent on the listening tasks. Hum Brain Mapp, 2013. © 2012 Wiley Periodicals, Inc.

15.
The cerebellum has been implicated in the feedforward control of speech production. However, the role of the cerebellum in the feedback control of speech production remains unclear. To address this question, the present event‐related potential study examined the behavioral and neural correlates of auditory feedback control of vocal production in patients with spinocerebellar ataxia (SCA) and healthy controls. All participants were instructed to produce sustained vowels while hearing their voice unexpectedly pitch‐shifted −200 or −500 cents. The behavioral results revealed significantly larger vocal compensations for pitch perturbations in patients with SCA relative to healthy controls. At the cortical level, patients with SCA exhibited significantly smaller cortical P2 responses that were source localized in the right superior temporal gyrus, primary auditory cortex, and supramarginal gyrus than healthy controls. These findings indicate that reduced brain activity in the right temporal and parietal regions are significant neural contributors to abnormal auditory‐motor processing of vocal pitch regulation as a consequence of cerebellar degeneration, which may be related to disrupted reciprocal interactions between the cerebellum and cortical regions that support the top‐down modulation of auditory‐vocal integration. These differences in behavior and cortical activity between healthy controls and patients with SCA demonstrate that the cerebellum is not only essential for feedforward control but also plays a crucial role in the feedback‐based control of speech production.

16.
Functional MRI study of auditory and visual oddball tasks.
To seek neural sources of endogenous event-related potentials, brain activations related to rare target stimuli detection in auditory and visual oddball tasks were imaged using a high temporal resolution functional MRI technique. There were multiple modality specific and modality non-specific activations. Auditory specific activations were seen in the bilateral transverse temporal gyri and posterior superior temporal planes while visual specific activations were seen in the bilateral occipital lobes and their junctions with the temporal lobes. Modality non-specific activations were seen in multiple areas including the bilateral parietal and temporal association areas, bilateral prefrontal cortex, bilateral premotor areas, bilateral supplementary motor areas and anterior cingulate gyrus. Results were consistent with previous intracranial evoked potential recording studies, and supported the multiple generator theory of the endogenous event-related potentials.

17.
During natural speech perception, humans must parse temporally continuous auditory and visual speech signals into sequences of words. However, most studies of speech perception present only single words or syllables. We used electrocorticography (subdural electrodes implanted on the brains of epileptic patients) to investigate the neural mechanisms for processing continuous audiovisual speech signals consisting of individual sentences. Using partial correlation analysis, we found that posterior superior temporal gyrus (pSTG) and medial occipital cortex tracked both the auditory and the visual speech envelopes. These same regions, as well as inferior temporal cortex, responded more strongly to a dynamic video of a talking face compared to auditory speech paired with a static face. Occipital cortex and pSTG carry temporal information about both auditory and visual speech dynamics. Visual speech tracking in pSTG may be a mechanism for enhancing perception of degraded auditory speech.

18.
Sensorimotor integration is important for motor learning. The inferior parietal lobe, through its connections with the frontal lobe and cerebellum, has been associated with multisensory integration and sensorimotor adaptation for motor behaviors other than speech. In the present study, the contribution of the inferior parietal cortex to speech motor learning was evaluated using repetitive transcranial magnetic stimulation (rTMS) prior to a speech motor adaptation task. Subjects' auditory feedback was altered in a manner consistent with the auditory consequences of an unintended change in tongue position during speech production, and adaptation performance was used to evaluate sensorimotor plasticity and short-term learning. Prior to the feedback alteration, rTMS or sham stimulation was applied over the left supramarginal gyrus (SMG). Subjects who underwent the sham stimulation exhibited a robust adaptive response to the feedback alteration whereas subjects who underwent rTMS exhibited a diminished adaptive response. The results suggest that the inferior parietal region, in and around SMG, plays a role in sensorimotor adaptation for speech. The interconnections of the inferior parietal cortex with inferior frontal cortex, cerebellum and primary sensory areas suggest that this region may be an important component in learning and adapting sensorimotor patterns for speech.

19.
We investigated the functional neuroanatomy of vowel processing. We compared attentive auditory perception of natural German vowels to perception of nonspeech band-passed noise stimuli using functional magnetic resonance imaging (fMRI). More specifically, the mapping in auditory cortex of first and second formants was considered, which spectrally characterize vowels and are linked closely to phonological features. Multiple exemplars of natural German vowels were presented in sequences alternating either mainly along the first formant (e.g., [u]-[o], [i]-[e]) or along the second formant (e.g., [u]-[i], [o]-[e]). In fixed-effects and random-effects analyses, vowel sequences elicited more activation than did nonspeech noise in the anterior superior temporal cortex (aST) bilaterally. Partial segregation of different vowel categories was observed within the activated regions, suggestive of a speech sound mapping across the cortical surface. Our results add to the growing evidence that speech sounds, as one of the behaviorally most relevant classes of auditory objects, are analyzed and categorized in aST. These findings also support the notion of an auditory "what" stream, with highly object-specialized areas anterior to primary auditory cortex.
