Similar Documents
20 similar documents found (search time: 31 ms)
1.
Osnes B, Hugdahl K, Specht K. NeuroImage 2011, 54(3): 2437-2445
Several reports of premotor cortex involvement in speech perception have been put forward, yet the functional role of premotor cortex remains under debate. To investigate this role, we presented parametrically varied speech stimuli in both a behavioral and a functional magnetic resonance imaging (fMRI) study. White noise was transformed over seven distinct steps into a speech sound and presented to the participants in randomized order. As a control condition, the same transformation from white noise into a musical instrument sound was used. The fMRI data were modelled with Dynamic Causal Modeling (DCM), testing the effective connectivity between Heschl's gyrus, planum temporale, superior temporal sulcus, and premotor cortex. The fMRI results revealed a graded increase in activation in the left superior temporal sulcus. Premotor cortex activity was present only at an intermediate step, when the speech sounds became identifiable but were still distorted, and was absent when the speech sounds were clearly perceivable. A Bayesian model selection procedure favored a model containing significant interconnections between Heschl's gyrus, planum temporale, and superior temporal sulcus when processing speech sounds. In addition, bidirectional connections between premotor cortex and superior temporal sulcus, and a connection from planum temporale to premotor cortex, were significant. Processing non-speech sounds initiated no significant connections to premotor cortex. Since the highest level of motor activity was observed only when processing identifiable sounds with incomplete phonological information, we conclude that premotor cortex is not generally necessary for speech perception but may facilitate interpreting a sound as speech when the acoustic input is sparse.
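The Bayesian model selection step used in DCM studies like the one above can be illustrated with a toy sketch: candidate models are scored by an approximation to the log model evidence (here BIC, a common approximation) and the model with the highest evidence wins. All data, variable names, and model forms below are illustrative assumptions, not the study's actual DCM fits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated activity: one "region" (y) driven by another (x); the true model
# is the coupled one.
n = 200
x = rng.normal(size=n)                        # source-region activity
y = 0.7 * x + rng.normal(scale=0.5, size=n)   # coupled target-region activity

def bic(y, design):
    """BIC = -2*logL + k*log(n) for an ordinary least-squares fit.
    Lower BIC approximates higher model evidence."""
    beta = np.linalg.lstsq(design, y, rcond=None)[0]
    n_obs, k = design.shape
    rss = float(np.sum((y - design @ beta) ** 2))
    sigma2 = rss / n_obs
    log_lik = -0.5 * n_obs * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * log_lik + k * np.log(n_obs)

# Model 1: target region is independent noise (intercept only).
# Model 2: target region is coupled to the source region.
bic_independent = bic(y, np.ones((n, 1)))
bic_coupled = bic(y, np.column_stack([np.ones(n), x]))

# The coupled model should be strongly favored for this data.
print("BIC independent:", round(bic_independent, 1))
print("BIC coupled:   ", round(bic_coupled, 1))
```

Real DCM uses a variational free-energy bound rather than BIC, but the selection logic (compare approximate evidences, prefer the highest) is the same.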

2.
Traditionally, the left frontal and parietal lobes have been associated with language production, while regions in the temporal lobe are seen as crucial for language comprehension. However, recent evidence suggests that the classical language areas constitute an integrated network in which each area plays a crucial role in both speech production and perception. We used functional MRI to examine whether observing speech motor movements (without auditory speech), relative to non-speech motor movements, preferentially activates the cortical speech areas. Furthermore, we tested whether activation in these regions was modulated by task difficulty. This dissociates areas actively involved in speech perception from regions that show obligatory activation in response to speech movements (e.g., areas that automatically activate in preparation for a motoric response). Specifically, we hypothesized that regions involved in decoding oral speech would show increasing activation with increasing difficulty. We found that speech movements preferentially activate the frontal and temporal language areas, whereas non-speech movements preferentially activate the parietal region. Degraded speech stimuli increased both frontal and parietal lobe activity but did not differentially excite the temporal region. These findings suggest that the frontal language area plays a role in visual speech perception and highlight the differential roles of the classical speech and language areas in processing others' motor speech movements.

3.
4.
Somatosensory feedback plays a critical role in coordinating articulator movements for speech production. In response to unexpected resistance to lip or jaw movements during speech, fluent speakers can use the difference between the somatosensory expectations for a speech sound and the actual somatosensory feedback to adjust the trajectories of functionally relevant but unimpeded articulators. To investigate the neural substrates underlying somatosensory feedback control of speech, we used an event-related sparse-sampling functional magnetic resonance imaging paradigm and a novel pneumatic device that unpredictably blocked subjects' jaw movements. In comparison to unperturbed speech, perturbed speech, in which jaw perturbation prompted the generation of compensatory speech motor commands, showed increased effects in bilateral ventral motor cortex; right-lateralized anterior supramarginal gyrus, inferior frontal gyrus pars triangularis, and ventral premotor cortex; and bilateral inferior posterior cerebellum (lobule VIII). Structural equation modeling revealed significantly increased influences from left anterior supramarginal gyrus to right anterior supramarginal gyrus and to right ventral premotor cortex, as well as significantly increased reciprocal influences between right ventral premotor cortex and right ventral motor cortex, and between right anterior supramarginal gyrus and right inferior frontal gyrus pars triangularis, for perturbed speech relative to unperturbed speech. These results suggest that bilateral anterior supramarginal gyrus, right inferior frontal gyrus pars triangularis, and right ventral premotor and motor cortices are functionally coupled and influence speech motor output when somatosensory feedback is unexpectedly perturbed during speech production.

5.
Kang E, Lee DS, Kang H, Hwang CH, Oh SH, Kim CS, Chung JK, Lee MC. NeuroImage 2006, 32(1): 423-431
Speech perception in face-to-face conversation involves processing of speech sounds (auditory) and speech-associated mouth/lip movements (visual) from a speaker. Using PET, where no scanner noise was present, brain regions involved in speech cue processing were investigated in normal-hearing subjects with no previous lip-reading training (N = 17) who carried out a semantic plausibility decision on spoken sentences delivered in a movie file. Multimodality was ensured at the sensory level in all four conditions. A sensory-specific speech cue of one modality, i.e., auditory speech (A condition) or mouth movement (V condition), was delivered with a control stimulus of the other modality, whereas speech cues of both sensory modalities were delivered during the bimodal (AV) condition. In comparison to the control condition, extensive bilateral activations in the superior temporal regions were observed during the A condition, but these activations were reduced in extent and left-lateralized during the AV condition. A polymodal region, the left posterior superior temporal sulcus (pSTS), involved in cross-modal interaction/integration of audiovisual speech, was activated during the A condition and more so during the AV condition, but not during the V condition. Activations were observed in left Broca's area (BA 44), medial frontal (BA 8), and anterior ventrolateral prefrontal (BA 47) regions during the V condition, in which lip-reading performance was less successful. The results indicated that speech-associated lip movements (the visual speech cue) suppressed activity in the right auditory temporal regions. Overadditivity (AV > A + V), observed in the right postcentral region during the bimodal condition relative to the sum of the unimodal speech conditions, was also associated with reduced activity during the V condition. These findings suggest that the visual speech cue can exert an inhibitory modulatory effect on right-hemisphere activity during the cross-modal interaction of audiovisual speech perception.

6.
Building a motor simulation de novo: observation of dance by dancers (cited 5 times: 0 self-citations, 5 by others)
Cross ES, Hamilton AF, Grafton ST. NeuroImage 2006, 31(3): 1257-1267
Research on action simulation identifies brain areas that are active while imagining or performing simple overlearned actions. Are areas engaged during imagined movement sensitive to the amount of actual physical practice? In the present study, participants were expert dancers who learned and rehearsed novel, complex whole-body dance sequences 5 h a week across 5 weeks. Brain activity was recorded weekly by fMRI as dancers observed and imagined performing different movement sequences. Half of these sequences were rehearsed; the other half were unpracticed control movements. After each trial, participants rated how well they could perform the movement. We hypothesized that activity in premotor areas would increase as participants observed and simulated movements that they had learnt outside the scanner. Dancers' ratings of their ability to perform rehearsed sequences, but not the control sequences, increased with training. When dancers observed and simulated another dancer's movements, brain regions classically associated with both action simulation and action observation were active, including the inferior parietal lobule, cingulate and supplementary motor areas, ventral premotor cortex, superior temporal sulcus, and primary motor cortex. Critically, inferior parietal lobule and ventral premotor activity was modulated as a function of dancers' ratings of their own ability to perform the observed movements and their motor experience. These data demonstrate that a complex motor resonance can be built de novo over 5 weeks of rehearsal. Furthermore, activity in premotor and parietal areas during action simulation is enhanced by the ability to execute a learned action, irrespective of stimulus familiarity or semantic label.

7.
This 3-T fMRI study investigates brain regions similarly and differentially involved in listening to and covert production of singing relative to speech. Given the greater use of auditory-motor self-monitoring and imagery with respect to consonance in singing, brain regions involved in these processes were predicted to be differentially active for singing more than for speech. The stimuli consisted of six Japanese songs. A block design was employed in which the tasks for the subject were to listen passively to singing of the song lyrics, listen passively to speaking of the song lyrics, covertly sing the visually presented song lyrics, covertly speak the visually presented song lyrics, and rest. The conjunction of passive listening and covert production tasks used in this study allows general neural processes underlying both perception and production to be discerned that are not exclusively a result of stimulus-induced auditory processing or of low-level articulatory motor control. Brain regions involved in both perception and production, for singing as well as speech, included the left planum temporale/superior temporal parietal region, as well as left and right premotor cortex, the lateral aspect of lobule VI of the posterior cerebellum, anterior superior temporal gyrus, and planum polare. Greater activity for the singing over the speech condition, for both the listening and covert production tasks, was found in the right planum temporale. Greater activity for singing over speech was also present in brain regions involved with consonance: the orbitofrontal cortex (listening task) and the subcallosal cingulate (covert production task). The results are consistent with the planum temporale mediating representational transformation across auditory and motor domains in response to consonance for singing over that of speech.
Hemispheric laterality was assessed by paired t tests between active voxels in the contrast of interest and the left-right flipped contrast of interest calculated from images normalized to a left-right reflected template. Consistent with some hypotheses regarding hemispheric specialization, a pattern of differential laterality for speech over singing (both covert production and listening tasks) occurs in the left temporal lobe, whereas singing over speech (listening task only) occurs in the right temporal lobe.
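The flip-and-compare laterality procedure described above can be sketched as follows: each subject's contrast map is tested voxel-wise against its own left-right mirror image with a paired t-test. The array shapes, effect size, and threshold below are illustrative assumptions, not the study's actual data or software.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated contrast maps, shape (n_subjects, x, y, z), assumed already
# normalized to a left-right symmetric template so that flipping the x-axis
# swaps hemispheres.
n_subjects = 16
maps = rng.normal(size=(n_subjects, 8, 8, 8))
maps[:, :4] += 1.5  # inject a "left-hemisphere" effect (x below the midline)

# Mirror each subject's map across the midline.
flipped = maps[:, ::-1]

# Voxel-wise paired t-test: original contrast vs. its mirrored counterpart.
t, p = stats.ttest_rel(maps, flipped, axis=0)

# Voxels with a large positive t are more active than their mirror locations,
# i.e. show a laterality effect toward that side.
lateralized = (t > 0) & (p < 0.05)
print(int(lateralized.sum()), "voxels show lateralized activity")
```

In practice the uncorrected threshold used here would be replaced by whatever multiple-comparison correction the analysis package applies.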

8.
Activation of premotor cortex during the observation and imitation of human actions is now increasingly accepted, but it remains unclear how the CNS resolves potential conflicts between the observation of another person's action and the ongoing control of one's own action. Recent data suggest that this overlap leads to a systematic bias, whereby lifting a box influences participants' perceptual judgments of the weight of a box lifted by another person. We investigated the neural basis of this bias effect using fMRI. Seventeen participants performed a perceptual weight judgment task or two control conditions while lifting a light box, a heavy box, or no box during scanning. Brain regions related to perceptual bias were localized by correlating individual differences in bias with the BOLD signal. Five regions showed correlations with psychophysical bias: left inferior frontal gyrus, left central sulcus, left extrastriate body area, left lingual gyrus, and right intraparietal sulcus. The cluster in primary motor cortex was also activated by box lifting, and the cluster in the extrastriate body area by the observation of hand actions and the weight judgment task. We suggest that these brain areas are part of a network in which motor processing modulates perceptual judgment of observed human actions; visual and motor processes thus cannot be thought of as two distinct systems but instead interact at many levels.

9.
In everyday life, people select motor responses according to arbitrary rules. For example, our movements while driving a car can be instructed by the color cues of traffic lights. These stimuli do not spatially relate to the actions that they specify. Associations between such stimuli and actions are called arbitrary visuomotor conditional associations. Earlier fMRI studies have tried to dissociate the sensory and motor components of these associations by introducing delays between the presentation of arbitrary cues and go-signals instructing participants to perform actions. This approach, however, also introduces neural processes that are not necessarily related to the normal real-time production of arbitrary visuomotor responses, such as working memory and the suppression of motor responses. We used fMRI adaptation as an alternative approach to dissociate sensory and motor components. We found that visual areas in the occipital–temporal cortex adapted only to the presentation of arbitrary visual cues, whereas a number of sensorimotor areas adapted only to the production of responses. Visual areas in the occipital–temporal cortex have no known connections with parts of the brain that can control hand musculature. Therefore, it is conceivable that the brain areas that adapted to both stimulus presentation and response production (namely, the dorsal premotor area, the supplementary motor area, the cingulate, the anterior intraparietal sulcus area, and the thalamus) are involved in the multiple steps between processing visual stimuli and activating the motor commands that these cues specify.

10.
Speech perception can use not only auditory signals but also visual information from seeing the speaker's mouth. The relative timing and relative location of auditory and visual inputs are both known to influence crossmodal integration psychologically, but previous imaging studies of audiovisual speech focused primarily on temporal aspects. Here we used Positron Emission Tomography (PET) during audiovisual speech processing to study how temporal and spatial factors might jointly affect brain activations. In agreement with previous work, synchronous versus asynchronous audiovisual speech yielded increased activity in multisensory association areas (e.g., superior temporal sulcus [STS]) as well as in some unimodal visual areas. Our orthogonal manipulation of relative stimulus position (auditory and visual stimuli presented at the same location vs. opposite sides) and stimulus synchrony showed that (i) ventral occipital areas and the superior temporal sulcus were unaffected by relative location; (ii) lateral and dorsal occipital areas were selectively activated for synchronous bimodal stimulation at the same external location; (iii) the right inferior parietal lobule was activated for synchronous auditory and visual stimuli at different locations, that is, in the condition classically associated with the 'ventriloquism effect' (a shift of perceived auditory position toward the visual location). Thus, different brain regions are involved in different aspects of audiovisual integration. While ventral areas appear more affected by audiovisual synchrony (which can influence speech identification), more dorsal areas appear to be associated with spatial multisensory interactions.

11.
Little is known about the ability to enumerate small numbers of successive stimuli and movements. It is possible that certain neural substrates are consistently recruited both for counting sensory stimuli from different modalities and for counting movements executed by different effectors. Here, we identify a network of areas involved in enumerating small numbers of auditory, visual, and somatosensory stimuli, and in enumerating sequential movements of the hands and feet, comprising the bilateral premotor cortex, presupplementary motor area, posterior temporal cortex, and thalamus. The most significant consistent activation across sensory and motor counting conditions was found in the lateral premotor cortex. Lateral premotor activation was not dependent on movement preparation, stimulus presentation timing, or number-word verbalization. Movement counting, but not sensory counting, activated the anterior parietal cortex. This anterior parietal area may correspond to an area recruited for movement counting identified by recent single-neuron studies in monkeys. These results suggest that overlapping but not identical networks of areas are involved in counting sequences of sensory stimuli and sequences of movements in the human brain.

12.
An fMRI investigation of syllable sequence production (cited 2 times: 0 self-citations, 2 by others)
Bohland JW, Guenther FH. NeuroImage 2006, 32(2): 821-841
Fluent speech comprises sequences that are composed from a finite alphabet of learned words, syllables, and phonemes. The sequencing of discrete motor behaviors has received much attention in the motor control literature, but relatively little of that attention has focused directly on speech production. In this paper, we investigate the cortical and subcortical regions involved in organizing and enacting sequences of simple speech sounds. Sparse event-triggered functional magnetic resonance imaging (fMRI) was used to measure responses to preparation and overt production of non-lexical three-syllable utterances, parameterized by two factors: syllable complexity and sequence complexity. The comparison of overt production trials to preparation-only trials revealed a network related to the initiation of a speech plan, control of the articulators, and hearing one's own voice. This network included the primary motor and somatosensory cortices, auditory cortical areas, the supplementary motor area (SMA), the precentral gyrus of the insula, and portions of the thalamus, basal ganglia, and cerebellum. Additional stimulus complexity led to increased engagement of the basic speech network and recruitment of additional areas known to be involved in sequencing non-speech motor acts. In particular, the left hemisphere inferior frontal sulcus and posterior parietal cortex, and bilateral regions at the junction of the anterior insula and frontal operculum, the SMA and pre-SMA, the basal ganglia, anterior thalamus, and the cerebellum showed increased activity for more complex stimuli. We hypothesize mechanistic roles for this extended speech production network in the organization and execution of sequences of speech sounds.

13.
Mirror neurons in the monkey premotor cortex respond during both execution and observation of actions and are thought to be critical for understanding others' actions. Human studies have shown premotor cortex activation while viewing actions, hearing their sounds, and listening to or reading action-related sentences, and have compared execution and observation of similar actions. However, we still lack direct evidence in humans of the most striking and theoretically relevant feature of mirror neurons, i.e., that they map seen or heard actions onto motor representations of the same actions at an abstract level. Here we combine fast event-related functional magnetic resonance imaging with an unconscious semantic priming paradigm and show that the human auditory mirror system also holds an abstract representation of the meaning of heard actions. We analyzed the effect on brain activity of trial-by-trial semantic congruency between a target sound denoting a hand or mouth action (or an environmental event) and a briefly flashed written word acting as an unconscious cross-modal prime. Left inferior frontal and posterior temporal regions selectively responded to action sounds in a non-somatotopic fashion and were modulated by semantic congruency only in action-sound trials. We also observed regions selective for either hand or mouth actions, which, however, did not show a corresponding effector-specific effect of semantic congruency. These results provide evidence that the human mirror system represents the meaning of actions (but not of other events) (a) at an abstract, semantic level, (b) independently of the effector, and (c) independently of conscious awareness.

14.
Male and female voices activate distinct regions in the male brain (cited 1 time: 0 self-citations, 1 by others)
In schizophrenia, auditory verbal hallucinations (AVHs) are likely to be perceived as gender-specific. Given that functional neuroimaging correlates of AVHs involve multiple brain regions, principally including auditory cortex, it is likely that the brain regions responsible for attributing gender to speech are invoked during AVHs. We used functional magnetic resonance imaging (fMRI) and a paradigm utilising 'gender-apparent' (unaltered) and 'gender-ambiguous' (pitch-scaled) male and female voice stimuli to test the hypothesis that male and female voices activate distinct brain areas during gender attribution. The perception of female voices, when compared with male voices, elicited greater activation of the right anterior superior temporal gyrus, near the superior temporal sulcus. Male voice perception, in turn, activated the mesio-parietal precuneus area. These different gender associations could not be explained by either simple pitch perception or behavioural response, because the activations we observed were elicited conjointly by both 'gender-apparent' and 'gender-ambiguous' voices. The results of this study demonstrate that, in the male brain, the perception of male and female voices activates distinct brain regions.

15.
In modern perceptual neuroscience, the focus of interest has shifted from individual modalities to an acknowledgement of the importance of multisensory processing. One particularly well-known example of cross-modal interaction is the McGurk illusion. It has been shown that this illusion can be modified such that it creates an auditory perceptual bias that lasts beyond the duration of audiovisual stimulation, a process referred to as cross-modal recalibration (Bertelson et al., 2003). Recently, we suggested that this perceptual bias is stored in auditory cortex by demonstrating the feasibility of retrieving the subjective perceptual interpretation of recalibrated ambiguous phonemes from functional magnetic resonance imaging (fMRI) measurements in these regions (Kilian-Hütten et al., 2011). However, this does not explain which brain areas integrate the information from the two senses and represent the origin of the auditory perceptual bias. Here we analyzed fMRI data from audiovisual recalibration blocks, utilizing behavioral data from perceptual classifications of ambiguous auditory phonemes that followed these blocks later in time. With this logic, we identified a network of brain areas (bilateral inferior parietal lobe [IPL], inferior frontal sulcus [IFS], and posterior middle temporal gyrus [MTG]) whose activation during audiovisual exposure anticipated auditory perceptual tendencies later in time. We propose a model in which a higher-order network, including IPL and IFS, accommodates audiovisual integrative learning processes that are responsible for installing a perceptual bias in auditory regions. This bias then determines constructive perceptual processing.

16.
Kansaku K, Hanakawa T, Wu T, Hallett M. NeuroImage 2004, 22(2): 904-911
Simple reaction time, a basic model of sensory-to-motor behavior, has been extensively investigated, and its use for inferring elementary mental organization has long been postulated. However, little is known about the neuronal mechanisms underlying it. To elucidate the neuronal substrates, functional magnetic resonance imaging (fMRI) signals were collected during a simple reaction task paradigm using simple cues of different modalities and simple triggered movements executed by different effectors. We hypothesized that a specific neural network characterizing simple reaction time would be activated irrespective of the input modalities and output effectors. Such a network was found in the right posterior superior temporal cortex, right premotor cortex, left ventral premotor cortex, cerebellar vermis, and medial frontal gyrus. The right posterior superior temporal cortex and right premotor cortex were also activated by sensory cues of different modalities in the absence of movements. This shared neural network may play a role in sensory-triggered movements.

17.
The primary symptom of fibromyalgia (FM) is chronic, widespread pain; however, patients report additional symptoms including decreased concentration and memory. Performance-based deficits are seen mainly in tests of working memory and executive function. Neural correlates of executive function were investigated in 18 FM patients and 14 age-matched healthy controls during a simple Go/No-Go task (response inhibition) while they underwent functional magnetic resonance imaging (fMRI). Performance did not differ between FM patients and healthy controls in either reaction time or accuracy. However, fMRI revealed that FM patients had lower activation in the right premotor cortex, supplementary motor area, midcingulate cortex, and putamen and, after controlling for anxiety, in the right insular cortex and right inferior frontal gyrus. Hyperactivation in FM patients was seen in the right inferior temporal gyrus/fusiform gyrus. Despite equivalent reaction times and accuracy, FM patients show less brain activation in cortical structures of the inhibition network (specifically in areas involved in response selection/motor preparation) and the attention network, along with increased activation in brain areas not normally part of the inhibition network. We hypothesize that response inhibition and pain perception may rely on partially overlapping networks, and that in chronic pain patients, resources taken up by pain processing may not be available for executive functioning tasks such as response inhibition. Compensatory cortical plasticity may be required to achieve performance on a par with control groups.

Perspective

Neural activation (fMRI) during response inhibition was measured in fibromyalgia patients and controls. FM patients show lower activation in the inhibition and attention networks and increased activation in other areas. Inhibition and pain perception may use overlapping networks: resources taken up by pain processing may be unavailable for other processes.

18.
We used functional magnetic resonance imaging (fMRI) to localize the brain areas involved in the imagery analogue of the verbal transformation effect, that is, the perceptual changes that occur when a speech form is cycled in rapid and continuous mental repetition. Two conditions were contrasted: a baseline condition involving the simple mental repetition of speech sequences, and a verbal transformation condition involving the mental repetition of the same items with an active search for verbal transformation. Our results reveal a predominantly left-lateralized network of cerebral regions activated by the verbal transformation task, similar to the neural network involved in verbal working memory: the left inferior frontal gyrus, the left supramarginal gyrus, the left superior temporal gyrus, the anterior part of the right cingulate cortex, and the cerebellar cortex, bilaterally. Our results strongly suggest that the imagery analogue of the verbal transformation effect, which requires percept analysis, form interpretation, and attentional maintenance of verbal material, relies on a working memory module sharing common components of speech perception and speech production systems.

19.
Tremblay P, Small SL. NeuroImage 2011, 57(4): 1561-1571
What is the nature of the interface between speech perception and production, where auditory and motor representations converge? One set of explanations suggests that during perception, the motor circuits involved in producing a perceived action are in some way enacting the action without actually causing movement (covert simulation) or sending along the motor information to be used to predict its sensory consequences (i.e., an efference copy). Other accounts either reject the involvement of motor representations in perception entirely, or explain their role as supportive rather than integral, not employing the identical circuits used in production. Using fMRI, we investigated whether there are brain regions that are conjointly active for both speech perception and production, and whether these regions are sensitive to articulatory (syllabic) complexity during both processes, as predicted by a covert simulation account. A group of healthy young adults (1) observed a female speaker produce a set of familiar words (perception), and (2) observed and then repeated the words (production). There were two types of words, varying in articulatory complexity as measured by the presence or absence of consonant clusters. The simple words contained no consonant cluster (e.g., "palace"), while the complex words contained one to three consonant clusters (e.g., "planet"). Results indicate that the left ventral premotor cortex (PMv) was significantly active during both speech perception and speech production, but that activation in this region was scaled to articulatory complexity only during speech production, revealing an incompletely specified efferent motor signal during speech perception. The right planum temporale (PT) was also active during speech perception and speech production, and activation in this region was scaled to articulatory complexity during both production and perception. These findings are discussed in the context of current theories of speech perception, with particular attention to accounts that include an explanatory role for mirror neurons.

20.
It is generally held that motor imagery is the internal simulation of movements involving one's own body in the absence of overt execution. Consistent with this hypothesis, results from numerous functional neuroimaging studies indicate that motor imagery activates a large variety of motor-related brain regions. However, it is unclear precisely which of these areas are involved in motor imagery per se, as opposed to other planning processes that do not involve movement simulation. In an attempt to resolve this issue, we employed event-related fMRI to separate activations related to hand preparation (a task component that does not demand imagining movements) from those related to grip selection (a component previously shown to require the internal simulation of reaching movements). Our results show that, in contrast to preparation of overt actions, preparation of either hand for covert movement simulation activates a large network of motor-related areas located primarily within the left cerebral and right cerebellar hemispheres. By contrast, imagined grip selection activates a distinct parietofrontal circuit that includes the bilateral dorsal premotor cortex, contralateral intraparietal sulcus, and right superior parietal lobule. Because these areas are highly consistent with the frontoparietal reach circuit identified in monkeys, we conclude that motor imagery involves action-specific motor representations computed in parietofrontal circuits.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号