Similar Articles
20 similar articles found (search time: 781 ms)
1.
Beauchamp MS, Yasar NE, Frye RE, Ro T. NeuroImage, 2008, 41(3): 1011-1020.
Human superior temporal sulcus (STS) is thought to be a key brain area for multisensory integration. Many neuroimaging studies have reported integration of auditory and visual information in STS, but less is known about the role of STS in integrating other sensory modalities. In macaque STS, the superior temporal polysensory area (STP) responds to somatosensory, auditory and visual stimulation. To determine if human STS contains a similar area, we measured brain responses to somatosensory, auditory and visual stimuli using blood oxygen level-dependent functional magnetic resonance imaging (BOLD fMRI). An area in human posterior STS, STSms (multisensory), responded to stimulation in all three modalities. STSms responded during both active and passive presentation of unisensory somatosensory stimuli and showed larger responses for more intense vs. less intense tactile stimuli, hand vs. foot stimulation, and contralateral vs. ipsilateral tactile stimulation. STSms showed responses of similar magnitude for unisensory tactile and auditory stimulation, with an enhanced response to simultaneous auditory-tactile stimulation. We conclude that STSms is important for integrating information from the somatosensory as well as the auditory and visual modalities, and could be the human homolog of macaque STP.
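
The critical test here is whether the bimodal (auditory-tactile) response exceeds the stronger of the two unisensory responses. The following is a minimal sketch, not the authors' analysis, of how such a multisensory enhancement index can be computed from per-condition ROI beta estimates; the function name and example values are hypothetical.

    def enhancement_index(beta_auditory, beta_tactile, beta_bimodal):
        """Multisensory enhancement relative to the stronger unisensory response.

        Positive values mean the bimodal response exceeds the maximum
        unisensory response (the 'max criterion' used in many
        multisensory fMRI studies).
        """
        max_uni = max(beta_auditory, beta_tactile)
        return (beta_bimodal - max_uni) / max_uni

    # Hypothetical ROI betas (percent signal change) for one subject:
    print(enhancement_index(beta_auditory=0.42, beta_tactile=0.40, beta_bimodal=0.55))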

2.
Vibrotactile stimuli can facilitate hearing, both in hearing-impaired and in normally hearing people. Accordingly, the sounds of hands exploring a surface contribute to the explorer's haptic percepts. As a possible brain basis of such phenomena, functional brain imaging has identified activations specific to audiotactile interaction in secondary somatosensory cortex, auditory belt area, and posterior parietal cortex, depending on the quality and relative salience of the stimuli. We studied 13 subjects with non-invasive functional magnetic resonance imaging (fMRI) to search for auditory brain areas that would be activated by touch. Vibration bursts of 200 Hz were delivered to the subjects' fingers and palm, and tactile pressure pulses to their fingertips. Noise bursts served to identify auditory cortex. Vibrotactile-auditory co-activation, addressed with minimal smoothing to obtain a conservative estimate, was found in an 85-mm³ region in the posterior auditory belt area. This co-activation could be related to facilitated hearing at the behavioral level, reflecting the analysis of sound-like temporal patterns in vibration. However, even tactile pulses (without any vibration) activated parts of the posterior auditory belt area, which therefore might subserve processing of audiotactile events that arise during dynamic contact between hands and environment.

3.
Adams RB, Janata P. NeuroImage, 2002, 16(2): 361-377.
Knowledge about environmental objects derives from representations of multiple object features both within and across sensory modalities. While our understanding of the neural basis for visual object representation in the human and nonhuman primate brain is well advanced, a similar understanding of auditory objects is in its infancy. We used a name verification task and functional magnetic resonance imaging (fMRI) to characterize the neural circuits that are activated as human subjects match visually presented words with either simultaneously presented pictures or environmental sounds. The difficulty of the matching judgment was manipulated by varying the level of semantic detail at which the words and objects were compared. We found that blood oxygen level-dependent (BOLD) signal was modulated in ventral and dorsal regions of the inferior frontal gyrus of both hemispheres during auditory and visual object categorization, potentially implicating these areas as sites for integrating polymodal object representations with concepts in semantic memory. As expected, BOLD signal increases in the fusiform gyrus varied with the semantic level of object categorization, though this effect was weak and restricted to the left hemisphere in the case of auditory objects.

4.
Segregation of information flow along a dorsally directed pathway for processing object location and a ventrally directed pathway for processing object identity is well established in the visual and auditory systems, but is less clear in the somatosensory system. We hypothesized that segregation of location vs. identity information in touch would be evident if texture is the relevant property for stimulus identity, given the salience of texture for touch. Here, we used functional magnetic resonance imaging (fMRI) to investigate whether the pathways for haptic and visual processing of location and texture are segregated, and the extent of bisensory convergence. Haptic texture-selectivity was found in the parietal operculum and posterior visual cortex bilaterally, and in parts of left inferior frontal cortex. There was bisensory texture-selectivity at some of these sites in posterior visual and left inferior frontal cortex. Connectivity analyses demonstrated, in each modality, flow of information from unisensory non-selective areas to modality-specific texture-selective areas and further to bisensory texture-selective areas. Location-selectivity was mostly bisensory, occurring in dorsal areas, including the frontal eye fields and multiple regions around the intraparietal sulcus bilaterally. Many of these regions received input from unisensory areas in both modalities. Together with earlier studies, the activation and connectivity analyses of the present study establish that somatosensory processing flows into segregated pathways for location and object identity information. The location-selective somatosensory pathway converges with its visual counterpart in dorsal frontoparietal cortex, while the texture-selective somatosensory pathway runs through the parietal operculum before converging with its visual counterpart in visual and frontal cortex. Both segregation of sensory processing according to object property and multisensory convergence appear to be universal organizing principles.
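
The directed "flow of information" claim rests on effective-connectivity estimates between ROI time series. As a rough illustration only (the study used dedicated connectivity analyses, not this exact measure), the sketch below implements a simple lagged-regression test of directed influence: does the past of a source region improve prediction of a target region beyond the target's own past? All names and data are hypothetical.

    import numpy as np

    def lagged_influence(source, target, lag=1):
        """Increase in R^2 for predicting target(t) when source(t-lag) is added."""
        y = target[lag:]
        own_past = target[:-lag]
        src_past = source[:-lag]

        def r2(X):
            X = np.column_stack([X, np.ones(len(y))])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return 1 - resid.var() / y.var()

        return r2(np.column_stack([own_past, src_past])) - r2(own_past[:, None])

    rng = np.random.default_rng(2)
    unisensory = rng.standard_normal(200)                  # hypothetical source ROI
    texture_sel = (0.6 * np.roll(unisensory, 1)            # target driven by source's past
                   + 0.4 * rng.standard_normal(200))
    print(lagged_influence(unisensory, texture_sel))       # > 0 suggests directed flow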

5.
Although visual cortical engagement in haptic shape perception is well established, its relationship with visual imagery remains controversial. We addressed this using functional magnetic resonance imaging during separate visual object imagery and haptic shape perception tasks. Two experiments were conducted. In the first experiment, the haptic shape task employed unfamiliar, meaningless objects, whereas familiar objects were used in the second experiment. The activations evoked by visual object imagery overlapped more extensively, and their magnitudes were more correlated, with those evoked during haptic shape perception of familiar, compared to unfamiliar, objects. In the companion paper (Deshpande et al., this issue), we used task-specific functional and effective connectivity analyses to provide convergent evidence: these analyses showed that the neural networks underlying visual imagery were similar to those underlying haptic shape perception of familiar, but not unfamiliar, objects. We conclude that visual object imagery is more closely linked to haptic shape perception when objects are familiar, compared to when they are unfamiliar.

6.
Four normally hearing subjects were trained and tested with all combinations of a highly degraded auditory input, a visual input via lipreading, and a tactile input using a multichannel electrotactile speech processor. The speech perception of the subjects was assessed with closed sets of vowels, consonants, and multisyllabic words; with open sets of words and sentences; and with speech tracking. When the visual input was added to any combination of other inputs, a significant improvement occurred for every test. Similarly, the auditory input produced a significant improvement for all tests except closed-set vowel recognition. The tactile input produced scores that were significantly greater than chance in isolation, but combined less effectively with the other modalities. The addition of the tactile input did produce significant improvements for vowel recognition in the auditory-tactile condition, for consonant recognition in the auditory-tactile and visual-tactile conditions, and for open-set word recognition in the visual-tactile condition. Information transmission analysis of the features of vowels and consonants indicated that information from the auditory and visual inputs was integrated much more effectively than information from the tactile input. The less effective combination might be due to lack of training with the tactile input, or to more fundamental limitations in the processing of multimodal stimuli.
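
Information transmission analysis of this kind follows Miller and Nicely's classic approach: the proportion of stimulus-feature information recoverable from a confusion matrix. A minimal sketch, using a hypothetical confusion matrix rather than the study's data:

    import numpy as np

    def relative_information_transmitted(confusions):
        """T = I(stimulus; response) / H(stimulus), from a count matrix."""
        p = confusions / confusions.sum()
        px = p.sum(axis=1)          # stimulus marginal
        py = p.sum(axis=0)          # response marginal
        mask = p > 0
        mutual_info = np.sum(p[mask] * np.log2(p[mask] / np.outer(px, py)[mask]))
        h_stimulus = -np.sum(px[px > 0] * np.log2(px[px > 0]))
        return mutual_info / h_stimulus

    # Hypothetical 2x2 confusion matrix for the voicing feature
    # (rows: presented voiced/voiceless; columns: perceived voiced/voiceless):
    voicing = np.array([[45.0, 5.0],
                        [8.0, 42.0]])
    print(relative_information_transmitted(voicing))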

7.
Representation of manipulable man-made objects in the dorsal stream  [Cited by: 19; self-citations: 0; by others: 19]
Chao LL, Martin A. NeuroImage, 2000, 12(4): 478-484.
We used fMRI to examine the neural response in frontal and parietal cortices associated with viewing and naming pictures of different categories of objects. Because tools are commonly associated with specific hand movements, we predicted that pictures of tools, but not other categories of objects, would elicit activity in regions of the brain that store information about motor-based properties. We found that viewing and naming pictures of tools selectively activated the left ventral premotor cortex (BA 6). Single-unit recording studies in monkeys have shown that neurons in the rostral part of the ventral premotor cortex (canonical F5 neurons) respond to the visual presentation of graspable objects, even in the absence of any subsequent motor activity. Thus, the left ventral premotor region that responded selectively to tools in the current study may be the human homolog of the monkey canonical F5 area. Viewing and naming tools also selectively activated the left posterior parietal cortex (BA 40). This response is similar to the firing of monkey anterior intraparietal neurons to the visual presentation of graspable objects. In humans and monkeys, there appears to be a close link between manipulable objects and information about the actions associated with their use. The selective activation of the left posterior parietal and left ventral premotor cortices by pictures of tools suggests that the ability to recognize and identify at least one category of objects (tools) may depend on activity in specific sites of the ventral and dorsal visual processing streams.

8.
Rhythm is an essential element of human culture, particularly in language and music. To acquire language or music, we have to perceive the sensory inputs, organize them into structured sequences as rhythms, actively hold the rhythm information in mind, and use the information when we reproduce or mimic the same rhythm. Previous brain imaging studies have elucidated brain regions related to the perception and production of rhythms. However, the neural substrates involved in the working memory of rhythm remain unclear. In addition, little is known about the processing of rhythm information from non-auditory inputs (visual or tactile). Therefore, we measured brain activity by functional magnetic resonance imaging while healthy subjects memorized and reproduced auditory and visual rhythmic information. The inferior parietal lobule, inferior frontal gyrus, supplementary motor area, and cerebellum exhibited significant activations during both the encoding and the retrieval of rhythm information. In addition, most of these areas also exhibited significant activation during the maintenance of rhythm information. All of these regions functioned in the processing of both auditory and visual rhythms. The bilateral inferior parietal lobule, inferior frontal gyrus, supplementary motor area, and cerebellum are thought to be essential for motor control. When we listen to a certain rhythm, we are often stimulated to move our bodies, which suggests the existence of a strong interaction between rhythm processing and the motor system. Here, we propose that rhythm information may be represented and retained as information about bodily movements in a supra-modal motor brain system.

9.
What vs. where in touch: an fMRI study  [Cited by: 2; self-citations: 0; by others: 2]
Reed CL, Klatzky RL, Halgren E. NeuroImage, 2005, 25(3): 718-726.
Two streams have been identified in cortical visual processing: a ventral stream for form, color, and features, and a dorsal stream for spatial characteristics and motion. We investigated whether similar "what" and "where" dissociations of function exist for human somatosensory processing. Using identical stimuli and hand movements, subjects either performed tactile object recognition (TOR) and ignored location, or performed tactile object localization (LOC) and ignored identity. A matched-movement control task separated activation associated with sensorimotor input from higher-level cognitive contributions. Results confirmed separate processing streams for TOR and LOC. TOR activated the frontal pole as well as bilateral inferior parietal and left prefrontal regions involved in tactile feature integration and naming. LOC activated bilateral superior parietal areas involved in spatial processing. The dissociation of object and spatial processing streams appears to be a modality-general organizational principle in the brain.

10.
Previous neurophysiological and neuroimaging studies have shown that a cortical network involving the inferior frontal gyrus (IFG), inferior parietal lobe (IPL) and cortical areas in and around the posterior superior temporal sulcus (pSTS) region is employed in action understanding by vision and audition. However, the brain regions that are involved in action understanding by touch are unknown. Lederman et al. (2007) recently demonstrated that humans can haptically recognize facial expressions of emotion (FEE) surprisingly well. Here, we report a functional magnetic resonance imaging (fMRI) study in which we test the hypothesis that the IFG, IPL and pSTS regions are involved in haptic, as well as visual, FEE identification. Twenty subjects haptically or visually identified facemasks with three different FEEs (disgust, neutral and happiness) and casts of shoes (shoes) of three different types. The left posterior middle temporal gyrus, IPL, IFG and bilateral precentral gyrus were activated by FEE identification relative to that of shoes, regardless of sensory modality. By contrast, an inferomedial part of the left superior parietal lobule was activated by haptic, but not visual, FEE identification. Other brain regions, including the lingual gyrus and superior frontal gyrus, were activated by visual identification of FEEs, relative to haptic identification of FEEs. These results suggest that haptic and visual FEE identification rely on distinct but overlapping neural substrates including the IFG, IPL and pSTS region.

11.
The myth of the visual word form area  [Cited by: 13; self-citations: 0; by others: 13]
Price CJ, Devlin JT. NeuroImage, 2003, 19(3): 473-481.
Recent functional imaging studies have referred to a posterior region of the left midfusiform gyrus as the "visual word form area" (VWFA). We review the evidence for this claim and argue that neither the neuropsychological nor the neuroimaging data are consistent with a cortical region specialized for visual word form representations. Specifically, there are no reported cases of pure alexia in which the deficit is limited to visual word form processing and the damage is limited to the left midfusiform gyrus. In addition, we present functional imaging data demonstrating that the so-called VWFA is activated by normal subjects during tasks that do not engage visual word form processing, such as naming colors, naming pictures, reading Braille, repeating auditory words, and making manual action responses to pictures of meaningless objects. If the midfusiform region has a single function that underlies all these tasks, then it does not correspond to visual word form processing. On the other hand, if the region participates in several functions as defined by its interactions with other cortical areas, then identifying the neural system sustaining visual word form representations requires identification of the set of regions involved. We conclude that there is no evidence that visual word form representations are subtended by a single patch of cortex, and that it is misleading to label the left midfusiform region as the visual word form area.

12.
Humans, like numerous other species, strongly rely on the observation of the gestures of other individuals in their everyday life. It is hypothesized that the visual processing of human gestures is sustained by a specific functional architecture, even at an early prelexical cognitive stage, different from that required for the processing of other visual entities. In the present PET study, the neural basis of visual gesture analysis was investigated with functional neuroimaging of brain activity during naming and orientation tasks performed on pictures of either static gestures (upper-limb postures) or three-dimensional objects. To prevent automatic object-related cerebral activation during the visual processing of postures, only intransitive postures were selected, i.e., symbolic or meaningless postures which do not imply the handling of objects. Conversely, only intransitive objects which cannot be handled were selected, to prevent gesture-related activation during their visual processing. Results clearly demonstrate a significant functional segregation between the processing of static intransitive postures and the processing of intransitive three-dimensional objects. Visual processing of objects elicited mainly occipital and fusiform gyrus activity, while visual processing of postures strongly activated the lateral occipitotemporal junction, encroaching upon area MT/V5, which is involved in motion analysis. These findings suggest that the lateral occipitotemporal junction, working in association with area MT/V5, plays a prominent role in the high-level perceptual analysis of gesture, namely the construction of its visual representation, available for subsequent recognition or imitation.

13.
Malinen S, Hlushchuk Y, Hari R. NeuroImage, 2007, 35(1): 131-139.
In search of suitable tools to study brain activation in natural environments, where stimuli are multimodal, poorly predictable and irregularly varying, we collected functional magnetic resonance imaging data from 6 subjects during a continuous 8-min stimulus sequence that comprised auditory (speech or tone pips), visual (video clips dominated by faces, hands, or buildings), and tactile finger stimuli in blocks of 6-33 s. Results obtained by independent component analysis (ICA) and general-linear-model-based analysis (GLM) were compared. ICA separated in the superior temporal gyrus one independent component (IC) that reacted to all auditory stimuli, and in the superior temporal sulcus another IC responding only to speech. Several distinct and rather symmetric vision-sensitive ICs were found in the posterior brain. An IC in the V5/MT region reacted to videos depicting faces or hands, whereas ICs in the V1/V2 region reacted to all video clips, including buildings. The corresponding GLM-derived activations in the auditory and early visual cortices comprised sub-areas of the ICA-revealed activations. ICA separated a prominent IC in the primary somatosensory cortex, whereas the GLM-based analysis failed to show any touch-related activation. "Intrinsic" components, unrelated to the stimuli but spatially consistent across subjects, were discerned as well. The individual time courses were highly consistent in sensory projection cortices and more variable elsewhere. The ability to differentiate functionally meaningful composites of activated brain areas and to straightforwardly reveal their temporal dynamics renders ICA a sensitive tool for studying brain responses to complex natural stimuli.
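
The methodological contrast is that GLM tests each voxel against a presumed stimulus time course, whereas ICA decomposes the data blindly and the component time courses are inspected afterwards. Below is a minimal sketch of both, assuming preprocessed data in a time-by-voxel matrix and a known block regressor; the data here are random stand-ins, so the outputs are illustrative only.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_timepoints, n_voxels = 240, 5000
    data = rng.standard_normal((n_timepoints, n_voxels))   # stand-in for fMRI data
    regressor = np.tile([1.0] * 10 + [0.0] * 10, 12)       # hypothetical block design

    # GLM: least-squares fit of the stimulus regressor at every voxel.
    design = np.column_stack([regressor, np.ones(n_timepoints)])
    betas, *_ = np.linalg.lstsq(design, data, rcond=None)  # betas[0] = stimulus effect map

    # Spatial ICA: components are spatial maps; the mixing matrix columns
    # are the corresponding time courses, compared post hoc with the regressor.
    ica = FastICA(n_components=20, random_state=0)
    spatial_maps = ica.fit_transform(data.T).T             # (components, voxels)
    time_courses = ica.mixing_                             # (timepoints, components)
    r = [np.corrcoef(tc, regressor)[0, 1] for tc in time_courses.T]
    print("most stimulus-related IC:", int(np.argmax(np.abs(r))))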

14.
Frey S, Kostopoulos P, Petrides M. NeuroImage, 2004, 22(3): 1384-1389.
Having recently demonstrated that the human orbitofrontal cortex is selectively activated during the encoding of visual information, we investigated whether this same frontal region, which is directly connected to medial temporal structures, would be activated during the encoding of auditory stimuli. We measured cerebral blood flow (CBF) with positron emission tomography (PET) during the encoding of nonverbal abstract auditory stimuli in a group of young healthy volunteers. The results demonstrate that the left orbitofrontal cortex, area 11 in particular, is involved in the encoding of auditory information. We suggest that the orbitofrontal cortex is a critical frontal region that can exert top-down regulation of other regions of the brain including the medial temporal structures and the lateral frontal cortex, enabling the further processing of information.

15.
Human neuroplasticity of multisensory integration has been studied mainly in the context of natural or artificial training situations in healthy subjects. However, regular smokers also offer the opportunity to assess the impact of intensive daily multisensory interactions with smoking-related objects on the neural correlates of crossmodal object processing. The present functional magnetic resonance imaging study revealed that smokers show a comparable visuo-haptic integration pattern for both smoking paraphernalia and control objects in the left lateral occipital complex (LOC), a region playing a crucial role in crossmodal object recognition. Moreover, the degree of nicotine dependence correlated positively with the magnitude of visuo-haptic integration in the left LOC for smoking-associated but not for control objects. In contrast, non-smokers displayed a visuo-haptic integration pattern in the left LOC for control objects, but not for smoking paraphernalia. This suggests that prolonged smoking-related multisensory experiences in smokers facilitate the merging of visual and haptic inputs in the lateral occipital complex for the respective stimuli. Studying clinical populations who engage in compulsive activities may represent an ecologically valid approach to investigating the neuroplasticity of multisensory integration.

16.
We used functional magnetic resonance imaging (fMRI) to characterize cortical activation associated with sentence processing, thereby elucidating where in the brain auditory and visual inputs of words converge during sentence comprehension. Within one scanning session, subjects performed three types of tasks with different linguistic components from perception to sentence comprehension: nonword (N(AV); auditory and visual), phrase (P; either auditory or visual), and sentence (S; either auditory or visual) tasks. In a comparison of the P and N(AV) tasks, the angular and supramarginal gyri showed bilateral activation, whereas the inferior and middle frontal gyri showed left-lateralized activation. A comparison of the S and P tasks, together with a conjunction analysis, revealed a ventral region of the left inferior frontal gyrus (F3t/F3O), which was sentence-processing selective and modality-independent. These results unequivocally demonstrated that the left F3t/F3O is involved in the selection and integration of semantic information that are separable from lexico-semantic processing.
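
A conjunction analysis of this kind typically uses the minimum statistic: a voxel counts as sentence-selective and modality-independent only if the weaker of its two effects (S > P auditory, S > P visual) still exceeds the threshold. A minimal sketch with hypothetical t-maps:

    import numpy as np

    t_auditory = np.array([3.9, 1.2, 4.4, 0.3])   # hypothetical S > P t-map (auditory)
    t_visual   = np.array([4.1, 3.8, 0.9, 0.2])   # hypothetical S > P t-map (visual)
    t_threshold = 3.1                              # e.g., voxelwise p < .001

    # A voxel survives the conjunction only if the weaker of its two
    # effects still exceeds the threshold (conjunction null).
    conjunction = np.minimum(t_auditory, t_visual) > t_threshold
    print(conjunction)                             # [ True False False False ]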

17.
Moore CJ, Price CJ. NeuroImage, 1999, 10(2): 181-192.
This study investigates word and object processing during naming and viewing tasks and identifies three distinct regions in the left ventral occipitotemporal cortex. Irrespective of task, words and objects (relative to meaningless visual controls) activated the medial surface of the left anterior fusiform gyrus, a region that has previously been associated with semantic knowledge. A more lateral region was differentially active for naming words and objects relative to viewing the same stimuli, and a more posterior region was differentially active for objects relative to words irrespective of task. In addition, we found that word processing resulted in greater activation than object processing on the dorsal surface of the left superior temporal gyrus and the left supramarginal gyrus. These regions appear to be important for converting orthography into phonology; their response to words irrespective of task is consistent with established psychological evidence that implicit phonological processing is stronger for words than objects.

18.
A visual semantic-access task involves a number of brain regions. However, previous studies have either examined the role of each region separately, using a univariate approach, or analyzed a single brain network using covariance connectivity analysis. We hypothesize that these brain regions form several functional networks underpinning a word semantic-access task, with each network engaged in different cognitive components and having distinct temporal characteristics. In this paper, multivariate independent component analysis (ICA) was used to reveal these networks, based on functional magnetic resonance imaging (fMRI) data acquired during a visual and an auditory word semantic judgment task. Our results demonstrated that there were three task-related independent components (ICs), corresponding to the various cognitive components involved in the visual task. Furthermore, ICA separation on the auditory task yielded results consistent with our hypothesis, regardless of input modality.

19.
Murray MM, Camen C, Spierer L, Clarke S. NeuroImage, 2008, 39(2): 847-856.
The rapid and precise processing of environmental sounds contributes to communication functions as well as both object recognition and localization. Plasticity in (accessing) the neural representations of environmental sounds is likewise essential for an adaptive organism, in particular humans, and can be indexed by repetition priming. How the brain achieves such plasticity with representations of environmental sounds is presently unresolved. Electrical neuroimaging of 64-channel auditory evoked potentials (AEPs) in humans identified the spatio-temporal brain mechanisms of repetition priming involving sounds of environmental objects. Subjects performed an 'oddball' target detection task, based on the semantic category of stimuli (living vs. man-made objects). Repetition priming effects were observed behaviorally as a speeding of reaction times and electrophysiologically as a suppression of the strength of responses to repeated sound presentations over the 156-215 ms post-stimulus period. These effects of plasticity were furthermore localized, using statistical analyses of a distributed linear inverse solution, to the left middle temporal gyrus and superior temporal sulcus (BA22), which have been implicated in associating sounds with their abstract representations and actions. These effects are subsequent to and occur in different brain regions from what has been previously identified as the earliest discrimination of auditory object categories. Plasticity in associative-semantic, rather than perceptual-discriminative functions, may underlie repetition priming of sounds of objects. We present a multi-stage mechanism of auditory object processing akin to what has been described for visual object processing and which also provides a framework for accessing multisensory object representations.
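
In this electrical-neuroimaging framework, response strength is commonly quantified as global field power (GFP): the spatial standard deviation across all electrodes at each time point. Repetition suppression then appears as lower GFP for repeated sounds within the reported 156-215 ms window. A minimal sketch with simulated 64-channel data (assuming 1 ms sampling from stimulus onset):

    import numpy as np

    rng = np.random.default_rng(1)
    n_electrodes, n_times = 64, 500            # 64-channel AEPs, 1 ms sampling
    initial = rng.standard_normal((n_electrodes, n_times))
    # Simulate a weaker, partly overlapping response to repeated sounds:
    repeated = 0.8 * initial + 0.2 * rng.standard_normal((n_electrodes, n_times))

    def gfp(erp):
        """Global field power: std across electrodes at each time point."""
        return erp.std(axis=0)

    window = slice(156, 216)                   # 156-215 ms post-stimulus
    print("initial:", gfp(initial)[window].mean(),
          "repeated:", gfp(repeated)[window].mean())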

20.
Previous behavioral data suggest that the salience of taxonomic (e.g., hammer-saw) and thematic (e.g., hammer-nail) conceptual relations depends on object category. Furthermore, taxonomic and thematic relations may be differentially grounded in the sensory-motor system. Using a picture matching task, we asked adult participants to identify taxonomic and thematic relations for non-manipulable and manipulable natural and artifact targets (e.g., animals, fruit, tools and vehicles, respectively), both inside and outside a 3 T MR scanner. Behavioral data indicated that taxonomic relations are identified faster for natural objects, while thematic relations are processed faster for artifacts, particularly manipulable ones (e.g., tools). Neuroimaging findings revealed that taxonomic processing specifically activates bilateral visual areas (cuneus, BA 18), particularly for non-manipulable natural objects (e.g., animals). In contrast, thematic processing specifically recruited a bilateral temporo-parietal network including the inferior parietal lobules (IPL, BA 40) and middle temporal gyri (MTG, BA 39/21/22). Left IPL and MTG activation was stronger for manipulable than for non-manipulable artifacts (e.g., tools vs. vehicles) during thematic processing. Right IPL and MTG activation was greater for both types of artifacts (manipulable and non-manipulable, e.g., tools and vehicles) compared to natural objects during thematic processing. Whereas taxonomic relations appear to rely selectively on perceptual similarity processing, thematic relations specifically activate visuo-motor regions involved in action and space processing. In line with embodied views of concepts, our findings show that taxonomic and thematic conceptual relations are based on different sensory-motor processes, suggesting that they may play different roles in concept formation and processing depending on object category.
