Similar Documents
20 similar documents retrieved.
1.
The ability to internally simulate other persons' actions is important for social interaction. In monkeys, neurons in the premotor cortex are activated both when the monkey performs mouth or hand actions and when it views or listens to actions made by others. Neuronal circuits with similar "mirror-neuron" properties probably exist in the human Broca's area and primary motor cortex. Viewing another person's hand actions also modulates activity in the primary somatosensory cortex SI, suggesting that the SI cortex is related to the human mirror-neuron system. To study the selectivity of the SI activation during action viewing, we stimulated the lower lip (with tactile pulses) and the median nerves (with electric pulses) in eight subjects to activate their SI mouth and hand cortices while the subjects either rested, listened to another person's speech, viewed her articulatory gestures, or executed mouth movements. The 55-ms SI responses to lip stimuli were enhanced by 16% (P<0.01) in the left hemisphere during speech viewing, whereas listening to speech did not modulate these responses. The 35-ms responses to median-nerve stimulation remained stable during speech viewing and listening. The subjects' own mouth movements suppressed responses to lip stimuli bilaterally by 74% (P<0.001), without any effect on responses to median-nerve stimuli. Our findings show that viewing another person's articulatory gestures activates the left SI cortex in a somatotopic manner. The results provide further evidence for the view that SI is involved in "mirroring" of other persons' actions.
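The reported 16% enhancement and 74% suppression are relative changes of the evoked-response amplitude against the rest condition. A minimal sketch of that arithmetic in Python, using made-up amplitude values purely for illustration (nothing below comes from the study's data):

```python
# Hypothetical amplitudes of the 55-ms SI response to lip stimuli (arbitrary units);
# the values are illustrative, not the study's measurements.
amp_rest = 25.0          # baseline: subject at rest
amp_view_speech = 29.0   # viewing another person's articulatory gestures
amp_own_movement = 6.5   # executing own mouth movements

def percent_change(condition, baseline):
    """Relative change of the evoked-response amplitude versus the rest baseline."""
    return 100.0 * (condition - baseline) / baseline

print(f"enhancement during speech viewing: {percent_change(amp_view_speech, amp_rest):+.0f}%")       # +16%
print(f"suppression during own mouth movements: {percent_change(amp_own_movement, amp_rest):+.0f}%")  # -74%
```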

2.
Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left temporo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS, but in the right STS more cortex was active in SJ than in any of the normal controls. Further, the amplitude of the BOLD response to McGurk stimuli in the right STS was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech.

3.
Cohen L, Jobert A, Le Bihan D, Dehaene S. NeuroImage 2004, 23(4):1256-1270
How are word recognition circuits organized in the left temporal lobe? We used functional magnetic resonance imaging (fMRI) to dissect cortical word-processing circuits using three diagnostic criteria: the capacity of an area (1) to respond to words in a single modality (visual or auditory) or in both modalities, (2) to modulate its response in a top-down manner as a function of the graphemic or phonemic emphasis of the task, and (3) to show repetition suppression in response to the conscious repetition of the target word within the same sensory modality or across different modalities. The results clarify the organization of visual and auditory word-processing streams. In particular, the visual word form area (VWFA) in the left occipitotemporal sulcus appears strictly as a visual unimodal area. It is, however, bordered by a second lateral inferotemporal area which is multimodal [lateral inferotemporal multimodal area (LIMA)]. Both areas might have been confounded in past work. Our results also suggest a possible homolog of the VWFA in the auditory stream, the auditory word form area, located in the left anterior superior temporal sulcus.

4.
Iconic gestures are spontaneous hand movements that illustrate certain contents of speech and, as such, are an important part of face-to-face communication. This experiment targets the brain bases of how iconic gestures and speech are integrated during comprehension. Areas of integration were identified on the basis of two classic properties of multimodal integration, bimodal enhancement and inverse effectiveness (i.e., greater enhancement for stimuli that are least effective unimodally). Participants underwent fMRI while being presented with videos of gesture-supported sentences as well as their unimodal components, which allowed us to identify areas showing bimodal enhancement. Additionally, we manipulated the signal-to-noise ratio of speech (either moderate or good) to probe for integration areas exhibiting the inverse effectiveness property. Bimodal enhancement was found at the posterior end of the superior temporal sulcus and adjacent superior temporal gyrus (pSTS/STG) in both hemispheres, indicating that the integration of iconic gestures and speech takes place in these areas. Furthermore, we found that the left pSTS/STG specifically showed a pattern of inverse effectiveness, i.e., the neural enhancement for bimodal stimulation was greater under adverse listening conditions. This indicates that activity in this area is boosted when an iconic gesture accompanies an utterance that is otherwise difficult to comprehend. The neural response paralleled the observed behavioral data. The present data extend results from previous gesture–speech integration studies in showing that pSTS/STG plays a key role in the facilitation of speech comprehension through simultaneous gestural input.
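The two integration criteria named here, bimodal enhancement and inverse effectiveness, reduce to simple comparisons of condition-wise responses. A hedged sketch, assuming hypothetical ROI response estimates per condition (all names and values below are illustrative, not the study's data):

```python
# Hypothetical ROI response estimates (arbitrary units) per condition and
# speech signal-to-noise ratio; the values are invented for illustration.
responses = {
    ("speech_only", "good_snr"): 1.2, ("gesture_only", "good_snr"): 0.8,
    ("bimodal",     "good_snr"): 2.3,
    ("speech_only", "poor_snr"): 0.6, ("gesture_only", "poor_snr"): 0.8,
    ("bimodal",     "poor_snr"): 2.2,
}

def bimodal_enhancement(snr):
    """Enhancement: bimodal response exceeding the stronger of the two unimodal responses."""
    unimodal_max = max(responses[("speech_only", snr)], responses[("gesture_only", snr)])
    return responses[("bimodal", snr)] - unimodal_max

enh_good = bimodal_enhancement("good_snr")
enh_poor = bimodal_enhancement("poor_snr")
print(f"enhancement at good SNR: {enh_good:.2f}")
print(f"enhancement at poor SNR: {enh_poor:.2f}")

# Inverse effectiveness: greater enhancement when the unimodal input is least effective.
print("inverse effectiveness pattern:", enh_poor > enh_good)
```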

5.
Speech perception can use not only auditory signals, but also visual information from seeing the speaker's mouth. The relative timing and relative location of auditory and visual inputs are both known to influence crossmodal integration psychologically, but previous imaging studies of audiovisual speech focused primarily on temporal aspects. Here we used Positron Emission Tomography (PET) during audiovisual speech processing to study how temporal and spatial factors might jointly affect brain activations. In agreement with previous work, synchronous versus asynchronous audiovisual speech yielded increased activity in multisensory association areas (e.g., superior temporal sulcus [STS]), as well as in some unimodal visual areas. Our orthogonal manipulation of relative stimulus position (auditory and visual stimuli presented at the same location vs. opposite sides) and stimulus synchrony showed that (i) ventral occipital areas and superior temporal sulcus were unaffected by relative location; (ii) lateral and dorsal occipital areas were selectively activated for synchronous bimodal stimulation at the same external location; (iii) the right inferior parietal lobule was activated for synchronous auditory and visual stimuli at different locations, that is, in the condition classically associated with the 'ventriloquism effect' (shift of perceived auditory position toward the visual location). Thus, different brain regions are involved in different aspects of audiovisual integration. While ventral areas appear more affected by audiovisual synchrony (which can influence speech identification), more dorsal areas appear to be associated with spatial multisensory interactions.

6.
It is widely accepted that dorsolateral prefrontal cortex (DLPFC) is activated at the time of action generation in humans. However, the previous functional neuroimaging studies supporting this hypothesis temporally integrated brain dynamics and therefore could not demonstrate when the DLPFC was activated relative to the emergence of voluntary behavior. Data that are time-locked to the instant of voluntary action execution do not reveal DLPFC activation at that moment. Rather, activated foci are seen at the frontal poles. We investigated this apparent conundrum through three differentially constrained experiments, utilizing functional magnetic resonance imaging to identify those prefrontal areas exhibiting functional change at the moment of spontaneous action execution. We observed a profound functional dissociation between anterior and dorsolateral regions, compatible with their involvement at different points during the temporal evolution of action: the frontal poles activated bilaterally at the moment of execution, while simultaneously (and relative to a prior activation state) the left DLPFC 'deactivated.'

7.
Several studies indicate that both posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG) and left inferior frontal gyrus (LIFG) are involved in integrating information from different modalities. Here we investigated the respective roles of these two areas in the integration of action and language information. We exploited the fact that the semantic relationship between language and different forms of action (i.e. co-speech gestures and pantomimes) is radically different. Speech and co-speech gestures are always produced together, and gestures are not unambiguously understood without speech. By contrast, pantomimes are not necessarily produced together with speech and can be easily understood without speech. We presented speech together with these two types of communicative hand actions in matching or mismatching combinations to manipulate semantic integration load. Left and right pSTS/MTG were involved only in semantic integration of speech and pantomimes. The LIFG, on the other hand, was involved in integration of speech and co-speech gestures as well as of speech and pantomimes. Effective connectivity analyses showed that, depending upon the semantic relationship between language and action, the LIFG modulates activation levels in left pSTS. This suggests that integration in pSTS/MTG involves the matching of two input streams for which there is a relatively stable common object representation, whereas integration in LIFG is better characterized as the on-line construction of a new and unified representation of the input streams. In conclusion, pSTS/MTG and LIFG are differentially involved in multimodal integration, crucially depending upon the semantic relationship between the input streams.

8.
Osnes B, Hugdahl K, Specht K. NeuroImage 2011, 54(3):2437-2445
Several reports of premotor cortex involvement in speech perception have been put forward, yet the functional role of premotor cortex remains under debate. To investigate this role, we presented parametrically varied speech stimuli in both a behavioral and a functional magnetic resonance imaging (fMRI) study. White noise was transformed over seven distinct steps into a speech sound and presented to the participants in randomized order. The control condition used the same transformation from white noise into a musical instrument sound. The fMRI data were modelled with Dynamic Causal Modeling (DCM), in which the effective connectivity between Heschl's gyrus, planum temporale, superior temporal sulcus, and premotor cortex was tested. The fMRI results revealed a graded increase in activation in the left superior temporal sulcus. Premotor cortex activity was present only at an intermediate step, when the speech sounds became identifiable but were still distorted; it was absent when the speech sounds were clearly perceivable. A Bayesian model selection procedure favored a model containing significant interconnections between Heschl's gyrus, planum temporale, and superior temporal sulcus when processing speech sounds. In addition, bidirectional connections between premotor cortex and superior temporal sulcus, and a connection from planum temporale to premotor cortex, were significant. Processing non-speech sounds initiated no significant connections to premotor cortex. Since the highest level of motor activity was observed only when processing identifiable sounds with incomplete phonological information, we conclude that premotor cortex is not generally necessary for speech perception but may facilitate interpreting a sound as speech when the acoustic input is sparse.
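The Bayesian model selection step amounts to comparing the candidate DCMs' approximate log-evidences (e.g. the free-energy values returned by model inversion) and turning them into posterior model probabilities. A toy sketch of that comparison in Python; the model names and free-energy values are invented for illustration and do not reproduce the study's models:

```python
import numpy as np

# Hypothetical approximate log-evidences (variational free energy F) for three
# candidate connectivity models; names and values are made up for illustration.
log_evidence = {
    "temporal_only":          -3421.0,  # auditory regions only, no premotor links
    "temporal_plus_premotor": -3409.0,  # adds STS <-> premotor and PT -> premotor
    "fully_connected":        -3415.0,
}

names = list(log_evidence)
F = np.array([log_evidence[m] for m in names])

# Posterior model probabilities under a flat prior: softmax of the log-evidences.
p = np.exp(F - F.max())
p /= p.sum()

for name, prob in zip(names, p):
    print(f"{name:>24s}: posterior probability {prob:.3f}")

# A log-evidence difference greater than about 3 (Bayes factor ~20) is
# conventionally taken as strong evidence for the winning model.
best = names[int(np.argmax(F))]
print("selected model:", best)
```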

9.
Classic identity negative priming (NP) refers to the finding that when an object is ignored, subsequent naming responses to it are slower than when it has not been previously ignored (Tipper, S.P., 1985. The negative priming effect: inhibitory priming by ignored objects. Q. J. Exp. Psychol. 37A, 571-590). It is unclear whether this phenomenon arises due to the involvement of abstract semantic representations that the ignored object accesses automatically. Contemporary connectionist models propose a key role for the anterior temporal cortex in the representation of abstract semantic knowledge (e.g., McClelland, J.L., Rogers, T.T., 2003. The parallel distributed processing approach to semantic cognition. Nat. Rev. Neurosci. 4, 310-322), suggesting that this region should be involved during performance of the classic identity NP task if it involves semantic access. Using high-field (4 T) event-related functional magnetic resonance imaging, we observed increased BOLD responses in the left anterolateral temporal cortex, including the temporal pole, that were directly related to the magnitude of each individual's NP effect, supporting a semantic locus. Additional signal increases were observed in the supplementary eye fields (SEF) and left inferior parietal lobule (IPL).

10.
We investigated the brain regions that mediate the processing of emotional speech in men and women by presenting positive and negative words that were spoken with happy or angry prosody. Hence, emotional prosody and word valence were either congruous or incongruous. We assumed that an fMRI contrast between congruous and incongruous presentations would reveal the structures that mediate the interaction of emotional prosody and word valence. The left inferior frontal gyrus (IFG) was more strongly activated in incongruous than in congruous trials. This difference in IFG activity was significantly larger in women than in men. Moreover, the congruence effect was significant in women, whereas it appeared only as a tendency in men. As the left IFG has been repeatedly implicated in semantic processing, these findings are taken as evidence that semantic processing in women is more susceptible to influences from emotional prosody than is semantic processing in men. Moreover, the present data suggest that the left IFG mediates increased semantic processing demands imposed by an incongruence between emotional prosody and word valence.

11.
The left superior temporal cortex shows greater responsiveness to speech than to non-speech sounds according to previous neuroimaging studies, suggesting that this brain region has a special role in speech processing. However, since speech sounds differ acoustically from the non-speech sounds, it is possible that this region is not involved in speech perception per se, but rather in processing of some complex acoustic features. "Sine wave speech" (SWS) provides a tool to study neural speech specificity using identical acoustic stimuli, which can be perceived either as speech or non-speech, depending on previous experience of the stimuli. We scanned 21 subjects using 3T functional MRI in two sessions, both including SWS and control stimuli. In the pre-training session, all subjects perceived the SWS stimuli as non-speech. In the post-training session, the identical stimuli were perceived as speech by 16 subjects. In these subjects, SWS stimuli elicited significantly stronger activity within the left posterior superior temporal sulcus (STSp) in the post- vs. pre-training session. In contrast, activity in this region was not enhanced after training in 5 subjects who did not perceive SWS stimuli as speech. Moreover, the control stimuli, which were always perceived as non-speech, elicited similar activity in this region in both sessions. Altogether, the present findings suggest that activation of the neural speech representations in the left STSp might be a prerequisite for hearing sounds as speech.

12.
Mortensen MV, Mirz F, Gjedde A. NeuroImage 2006, 31(2):842-852
The left inferior prefrontal cortex (LIPC) is involved in speech comprehension by people who hear normally. In contrast, functional brain mapping has not revealed incremental activity in this region when users of cochlear implants (CI) comprehend speech without silent repetition. Functional brain maps identify significant changes of activity by comparing an active brain state with a presumed baseline condition. It is possible that cochlear implant users recruited alternative neuronal resources to the task in previous studies, but, in principle, it is also possible that an aberrant baseline condition masked the functional increase. To distinguish between the two possibilities, we tested the hypothesis that activity in the LIPC characterizes high speech comprehension in postlingually deaf CI users. We measured cerebral blood flow changes with positron emission tomography (PET) in CI users who listened passively to a range of speech and non-speech stimuli. The pattern of activation varied with the stimulus in users with high speech comprehension, unlike users with low speech comprehension. The high-comprehension group showed increased activity in prefrontal and temporal regions of the cerebral cortex and in the right cerebellum. In these subjects, single words and speech raised activity in the LIPC, as well as in left and right temporal regions, both anterior and posterior, known to be activated in speech recognition and complex phoneme analysis in normal hearing. In subjects with low speech comprehension, sites of increased activity were observed only in the temporal lobes. We conclude that increased activity in areas of the LIPC and right temporal lobe is involved in speech comprehension after cochlear implantation.

13.
14.
The posterior medial parietal cortex and the left prefrontal cortex have both been implicated in the recollection of past episodes. To clarify their functional significance, we performed a functional magnetic resonance imaging study that employed event-related source memory and item recognition retrieval of words paired with corresponding imagined or viewed pictures. Our results suggest that episodic source memory is related to a functional network including the posterior precuneus and the left lateral prefrontal cortex. This network is activated during explicit retrieval of imagined pictures and results from the retrieval of item-context associations. This suggests that previously imagined pictures provide a context with which encoded words can be more strongly associated.

15.
The key question in understanding the nature of speech perception is whether the human brain has unique speech-specific mechanisms or treats all sounds equally. We assessed possible differences between the processing of speech and complex nonspeech sounds in the two cerebral hemispheres by measuring the magnetic equivalent of the mismatch negativity, the brain's automatic change-detection response, which was elicited by speech sounds and by similarly complex nonspeech sounds with either fast or slow acoustic transitions. Our results suggest that the right hemisphere is predominant in the perception of slow acoustic transitions, whereas neither hemisphere clearly dominates the discrimination of nonspeech sounds with fast acoustic transitions. In contrast, the perception of speech stimuli with similarly rapid acoustic transitions was dominated by the left hemisphere, which may be explained by the presence of acoustic templates (long-term memory traces) for speech sounds formed in this hemisphere.

16.
Homae F, Yahata N, Sakai KL. NeuroImage 2003, 20(1):578-586
We present the results of correlation analyses for identifying temporally correlated activations between multiple regions of interest. We focused on functional connectivity for two regions in the prefrontal cortex: the left inferior frontal gyrus (L. F3t/F3O) and the left precentral sulcus (L. PrCS). Temporal correlations of functional magnetic resonance imaging signals were examined separately during a sentence comprehension task and a lexical decision task, with the data averaged across all voxels within a region of interest that served as a reference region. We found that the reciprocal connectivity between L. F3t/F3O and L. PrCS was significantly enhanced during sentence processing, but not during lexico-semantic processing, which was confirmed under both auditory and visual conditions. Furthermore, significantly correlated regions were mostly concentrated in the left prefrontal cortex during the sentence task. These results demonstrate that the functional connectivity within the left prefrontal cortex is selectively enhanced for processing sentences, which may subserve the use of syntactic information for integrating lexico-semantic information.
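The connectivity measure described here reduces each region of interest to the mean time course of its voxels and then correlates those time courses between regions, separately for each task. A minimal sketch under those assumptions, using synthetic data (the arrays stand in for voxel-by-time matrices extracted from preprocessed fMRI runs; nothing here is the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 200

# Synthetic voxel-by-time matrices standing in for the two prefrontal ROIs
# (L. F3t/F3O as the reference region, L. PrCS as the target region).
shared = rng.standard_normal(n_timepoints)  # shared fluctuation driving both ROIs
roi_voxels_f3t = shared + 0.5 * rng.standard_normal((50, n_timepoints))
roi_voxels_prcs = shared + 0.5 * rng.standard_normal((80, n_timepoints))

def roi_mean_timecourse(voxels_by_time):
    """Average the signal over all voxels of an ROI, as in a reference-region analysis."""
    return voxels_by_time.mean(axis=0)

ref = roi_mean_timecourse(roi_voxels_f3t)
target = roi_mean_timecourse(roi_voxels_prcs)

# Pearson correlation between the two ROI-averaged time courses.
r = np.corrcoef(ref, target)[0, 1]
print(f"functional connectivity (Pearson r) between L. F3t/F3O and L. PrCS: {r:.2f}")
```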

17.
When we cannot recall the name of a well-known person despite preserved access to his/her semantic knowledge, a phonological hint such as his/her initials sometimes helps us to recall the name. This type of recall failure appears to arise from a transmission deficit from the lexical-semantic stage to the lexical-phonological stage of name recall, and the phonological cue appears to reactivate this transmission, leading to successful recall. We hypothesized that the brain regions responsible for the transmission would respond to the phonological cue that facilitates name recall, and would also respond to successful recall. A famous face image was presented with a phonological cue, and the subjects were required to recall and overtly pronounce the name during fMRI scanning. The behavioral results showed that the first syllable cue induced a greater number of successful recall trials than either the non-verbal chime sound or the non-first syllable cue, suggesting that the first syllable facilitated name recall. The fMRI results demonstrated that two regions in the left superior temporal gyrus responded more strongly to the first syllable than to either the non-verbal chime sound or the non-first syllable. In addition, these two regions were activated when name recall was successful. These results suggest that two regions in the left superior temporal gyrus may play a crucial role in the transmission from the lexical-semantic to the lexical-phonological stage of name recall.

18.
19.

Aim

To demonstrate the feasibility of reliable rhythm analysis during chest compression pauses (e.g. pauses for two ventilations) in cardiopulmonary resuscitation (CPR).

Methods

We extracted 110 shockable and 466 nonshockable segments from 235 out-of-hospital cardiac arrest episodes. Pauses in chest compressions were already annotated in the episodes. We classified each pause as a ventilation or non-ventilation pause using the transthoracic impedance. A high-temporal resolution shock advice algorithm (SAA) that gives a shock/no-shock decision in 3 s was launched once for every pause longer than 3 s. The sensitivity and specificity of the SAA for the analyses during the pauses were computed.

Results

We identified 4476 pauses, of which 3263 were ventilation pauses and 2183 contained two ventilations. The medians of the mean pause duration per segment for all pauses and for pauses with two ventilations were 6.1 s (4.9–7.5 s) and 5.1 s (4.2–6.4 s), respectively. A total of 91.8% of the pauses and 95.3% of the pauses with two ventilations were long enough to launch the SAA. The overall sensitivity and specificity were 95.8% (90% one-sided lower CI, 94.3%) and 96.8% (CI, 96.2%), respectively (see the sketch after this abstract). There were no significant differences between the sensitivities (P = 0.84) and the specificities (P = 0.18) for the ventilation and the non-ventilation pauses.

Conclusion

Chest compression pauses are frequent and of sufficient duration to launch a high-temporal resolution SAA, and rhythm analysis during these pauses was reliable. Pre-shock pauses could be minimised by analysing the rhythm during ventilation pauses when CPR is delivered at a 30:2 compression:ventilation ratio.
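Sensitivity and specificity here are the fractions of shockable and nonshockable analyses, respectively, that the SAA classified correctly, each reported with a one-sided lower confidence bound. A hedged sketch of that computation in Python; the counts are invented for illustration, and the exact one-sided bound is computed with the Clopper-Pearson method (one common choice, not necessarily the one used in the study):

```python
from scipy.stats import beta

def lower_clopper_pearson(successes, n, confidence=0.90):
    """Exact one-sided lower Clopper-Pearson bound for a binomial proportion."""
    if successes == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, successes, n - successes + 1)

# Invented counts, purely for illustration (not the study's data):
tp, n_shockable = 57, 60        # shockable analyses with a correct "shock" advice
tn, n_nonshockable = 290, 300   # nonshockable analyses with a correct "no shock" advice

sensitivity = tp / n_shockable
specificity = tn / n_nonshockable

print(f"sensitivity: {100 * sensitivity:.1f}% "
      f"(90% one-sided lower CI {100 * lower_clopper_pearson(tp, n_shockable):.1f}%)")
print(f"specificity: {100 * specificity:.1f}% "
      f"(90% one-sided lower CI {100 * lower_clopper_pearson(tn, n_nonshockable):.1f}%)")
```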

20.