Similar Literature
20 similar records found.
1.
Jessen S, Kotz SA. NeuroImage 2011, 58(2): 665-674
Face-to-face communication works multimodally. Not only do we employ vocal and facial expressions, but body language provides valuable information as well. Here we focused on multimodal perception of emotion expressions, monitoring the temporal unfolding of the interaction of different modalities in the electroencephalogram (EEG). In the auditory condition, participants listened to emotional interjections such as "ah", while they saw mute video clips containing emotional body language in the visual condition. In the audiovisual condition, participants saw video clips with matching interjections. In all three conditions, the emotions "anger" and "fear", as well as non-emotional stimuli, were used. The N100 amplitude was strongly reduced in the audiovisual compared to the auditory condition, suggesting a significant impact of visual information on early auditory processing. Furthermore, anger and fear expressions were distinct in the auditory but not the audiovisual condition. Complementing these event-related potential (ERP) findings, we report strong similarities in the alpha- and beta-band in the visual and the audiovisual conditions, suggesting a strong visual processing component in the perception of audiovisual stimuli. Overall, our results show an early interaction of modalities in emotional face-to-face communication using complex and highly natural stimuli.

2.
Human brain activity associated with audiovisual perception and attention
Coherent perception of objects in our environment often requires perceptual integration of auditory and visual information. Recent behavioral data suggest that audiovisual integration depends on attention. The current study investigated the neural basis of audiovisual integration using 3-Tesla functional magnetic resonance imaging (fMRI) in 12 healthy volunteers during attention to auditory or visual features, or audiovisual feature combinations of abstract stimuli (simultaneous harmonic sounds and colored circles). Audiovisual attention was found to modulate activity in the same frontal, temporal, parietal and occipital cortical regions as auditory and visual attention. In addition, attention to audiovisual feature combinations produced stronger activity in the superior temporal cortices than attention to only auditory or visual features. These modality-specific areas might be involved in attention-dependent perceptual binding of synchronous auditory and visual events into coherent audiovisual objects. Furthermore, the modality-specific temporal auditory and occipital visual cortical areas showed attention-related modulations during both auditory and visual attention tasks. This result supports the proposal that attention to stimuli in one modality can spread to encompass synchronously presented stimuli in another modality.

3.
Speech perception can use not only auditory signals, but also visual information from seeing the speaker's mouth. The relative timing and relative location of auditory and visual inputs are both known to influence crossmodal integration psychologically, but previous imaging studies of audiovisual speech focused primarily on just temporal aspects. Here we used Positron Emission Tomography (PET) during audiovisual speech processing to study how temporal and spatial factors might jointly affect brain activations. In agreement with previous work, synchronous versus asynchronous audiovisual speech yielded increased activity in multisensory association areas (e.g., superior temporal sulcus [STS]), plus in some unimodal visual areas. Our orthogonal manipulation of relative stimulus position (auditory and visual stimuli presented at same location vs. opposite sides) and stimulus synchrony showed that (i) ventral occipital areas and superior temporal sulcus were unaffected by relative location; (ii) lateral and dorsal occipital areas were selectively activated for synchronous bimodal stimulation at the same external location; (iii) right inferior parietal lobule was activated for synchronous auditory and visual stimuli at different locations, that is, in the condition classically associated with the 'ventriloquism effect' (shift of perceived auditory position toward the visual location). Thus, different brain regions are involved in different aspects of audiovisual integration. While ventral areas appear more affected by audiovisual synchrony (which can influence speech identification), more dorsal areas appear to be associated with spatial multisensory interactions.

4.
Beauchamp MS, Yasar NE, Frye RE, Ro T. NeuroImage 2008, 41(3): 1011-1020
Human superior temporal sulcus (STS) is thought to be a key brain area for multisensory integration. Many neuroimaging studies have reported integration of auditory and visual information in STS but less is known about the role of STS in integrating other sensory modalities. In macaque STS, the superior temporal polysensory area (STP) responds to somatosensory, auditory and visual stimulation. To determine if human STS contains a similar area, we measured brain responses to somatosensory, auditory and visual stimuli using blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI). An area in human posterior STS, STSms (multisensory), responded to stimulation in all three modalities. STSms responded during both active and passive presentation of unisensory somatosensory stimuli and showed larger responses for more intense vs. less intense tactile stimuli, hand vs. foot, and contralateral vs. ipsilateral tactile stimulation. STSms showed responses of similar magnitude for unisensory tactile and auditory stimulation, with an enhanced response to simultaneous auditory-tactile stimulation. We conclude that STSms is important for integrating information from the somatosensory as well as the auditory and visual modalities, and could be the human homolog of macaque STP.
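The enhanced response to simultaneous auditory-tactile stimulation described above is the kind of effect that is commonly summarized with a multisensory enhancement index computed from response amplitudes. The sketch below is illustrative only and is not the authors' analysis; the response values are hypothetical percent-signal-change numbers, and it simply contrasts two common formulations: enhancement over the largest unimodal response versus superadditivity relative to the summed unimodal responses.

```python
# Illustrative sketch, not the study's analysis: two common indices for
# multisensory enhancement computed from response amplitudes (e.g., BOLD
# percent signal change). All numbers below are hypothetical.

def enhancement_over_max(av, a, v):
    """Percent gain of the bimodal response over the larger unimodal response."""
    return 100.0 * (av - max(a, v)) / max(a, v)

def superadditivity(av, a, v):
    """Bimodal response minus the sum of the unimodal responses (> 0 = superadditive)."""
    return av - (a + v)

# Hypothetical response amplitudes for one region of interest.
auditory, tactile, audiotactile = 0.40, 0.35, 0.55

print("Enhancement over max unimodal: %.1f%%"
      % enhancement_over_max(audiotactile, auditory, tactile))
print("Superadditivity: %.2f" % superadditivity(audiotactile, auditory, tactile))
```

With these made-up values the bimodal response is about 37% larger than the best unimodal response while the superadditivity measure is negative, which illustrates that the two criteria can disagree and are not interchangeable.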

5.

Background

Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine.

Methods

In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration.
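The Methods paragraph does not spell out how the cumulative distribution functions were used; a standard approach in this literature is to compare the observed CDF of audiovisual response times against the race-model bound built from the two unimodal CDFs (Miller's inequality). The sketch below assumes that approach, uses simulated response times, and is not the authors' code; all variable names and values are made up.

```python
# Minimal sketch (assumed approach, not the study's pipeline): quantifying
# audiovisual integration from response-time CDFs via the race-model bound.
import numpy as np

def empirical_cdf(rts, t_grid):
    """Proportion of response times at or below each time point in t_grid."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

# Hypothetical response times (ms) for unimodal and bimodal targets.
rng = np.random.default_rng(0)
rt_visual = rng.normal(420, 60, 200)
rt_audio = rng.normal(400, 60, 200)
rt_audiovisual = rng.normal(350, 50, 200)

t_grid = np.arange(200, 701, 10)
cdf_v = empirical_cdf(rt_visual, t_grid)
cdf_a = empirical_cdf(rt_audio, t_grid)
cdf_av = empirical_cdf(rt_audiovisual, t_grid)

# Race-model bound: CDF_A(t) + CDF_V(t), capped at 1.
race_bound = np.minimum(cdf_a + cdf_v, 1.0)

# Positive values indicate AV responses faster than the race model allows,
# i.e., evidence for genuine multisensory integration at that latency.
violation = cdf_av - race_bound
print("Peak race-model violation:", violation.max())
```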

Results

Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05).

Conclusions

Our findings provide further objective support for the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method for evaluating hypersensitivity to audiovisual stimuli in patients with migraine.

6.
To form a unified percept of our environment, the human brain integrates information within and across the senses. This MEG study investigated interactions within and between sensory modalities using a frequency analysis of steady-state responses (SSRs) that are elicited time-locked to periodically modulated stimuli. Critically, in the frequency domain, interactions between sensory signals are indexed by crossmodulation terms (i.e. the sums and differences of the fundamental frequencies). The 3 × 2 factorial design manipulated (1) modality: auditory, visual or audiovisual, and (2) steady-state modulation: the auditory and visual signals were modulated only in one sensory feature (e.g. visual gratings modulated in luminance at 6 Hz) or in two features (e.g. tones modulated in frequency at 40 Hz and amplitude at 0.2 Hz). This design enabled us to investigate crossmodulation frequencies that are elicited when two stimulus features are modulated concurrently (i) in one sensory modality or (ii) in auditory and visual modalities. In support of within-modality integration, we reliably identified crossmodulation frequencies when two stimulus features in one sensory modality were modulated at different frequencies. In contrast, no crossmodulation frequencies were identified when information needed to be combined from auditory and visual modalities. The absence of audiovisual crossmodulation frequencies suggests that the previously reported audiovisual interactions in primary sensory areas may mediate low-level spatiotemporal coincidence detection that is prominent for stimulus transients but less relevant for sustained SSRs. In conclusion, our results indicate that information in SSRs is integrated over multiple time scales within but not across sensory modalities at the primary cortical level.
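As a worked illustration of the frequency-domain logic in the abstract above: when two signals modulated at frequencies f1 and f2 interact nonlinearly, spectral energy appears at the crossmodulation terms f1 + f2 and f1 - f2, whereas a purely additive combination leaves those bins empty. The simulated signal, frequencies, and interaction strength below are assumptions for demonstration only, not the study's stimuli or analysis pipeline.

```python
# Illustrative sketch: crossmodulation terms arise only when two modulation
# frequencies interact multiplicatively (nonlinearly). Parameters are invented.
import numpy as np

fs = 600.0                          # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)        # 60 s of simulated steady-state response
f1, f2 = 6.0, 40.0                  # e.g., visual luminance and auditory modulation rates

s1 = np.sin(2 * np.pi * f1 * t)
s2 = np.sin(2 * np.pi * f2 * t)

# Purely additive (linear) response: only f1 and f2 appear in the spectrum.
linear = s1 + s2
# Interacting component: adds energy at f2 - f1 and f2 + f1.
interacting = s1 + s2 + 0.3 * s1 * s2

def amplitude(signal, freq):
    """Amplitude of the spectrum at the bin closest to `freq`."""
    spec = np.abs(np.fft.rfft(signal)) / signal.size
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

for name, sig in [("linear", linear), ("interacting", interacting)]:
    print(name,
          "f2-f1:", round(amplitude(sig, f2 - f1), 4),
          "f2+f1:", round(amplitude(sig, f2 + f1), 4))
```

Running this shows near-zero amplitude at 34 Hz and 46 Hz for the additive signal and clear peaks for the interacting one, which is the signature the study looked for within and across modalities.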

7.
Emotion in daily life is often expressed in a multimodal fashion. Consequently, emotional information from one modality can influence processing in another. In a previous fMRI study we assessed the neural correlates of audio-visual integration and found that activity in the left amygdala is significantly attenuated when a neutral stimulus is paired with an emotional one compared to conditions where emotional stimuli were present in both channels. Here we used dynamic causal modelling to investigate the effective connectivity in the neuronal network underlying this emotion presence congruence effect. Our results provided strong evidence in favor of a model family, differing only in the interhemispheric interactions. All winning models share a connection from the bilateral fusiform gyrus (FFG) into the left amygdala and a non-linear modulatory influence of bilateral posterior superior temporal sulcus (pSTS) on these connections. This result indicates that the pSTS not only integrates multi-modal information from visual and auditory regions (as reflected in our model by significant feed-forward connections) but also gates the influence of the sensory information on the left amygdala, leading to attenuation of amygdala activity when a neutral stimulus is integrated. Moreover, we found a significant lateralization of the FFG due to stronger driving input by the stimuli (faces) into the right hemisphere, whereas such lateralization was not present for sound-driven input into the superior temporal gyrus. In summary, our data provide further evidence for a rightward lateralization of the FFG and in particular for a key role of the pSTS in the integration and gating of audio-visual emotional information.

8.
Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left temporo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS, but in the right STS more cortex was active in SJ than in any of the normal controls. Further, the amplitude of the BOLD response to McGurk stimuli in the right STS was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech.

9.
We investigated cerebral processing of audiovisual speech stimuli in humans using functional magnetic resonance imaging (fMRI). Ten healthy volunteers were scanned with a 'clustered volume acquisition' paradigm at 3 T during observation of phonetically matching (e.g., visual and acoustic /y/) and conflicting (e.g., visual /a/ and acoustic /y/) audiovisual vowels. Both stimuli activated the sensory-specific auditory and visual cortices, along with the superior temporal, inferior frontal (Broca's area), premotor, and visual-parietal regions bilaterally. Phonetically conflicting vowels, contrasted with matching ones, specifically increased activity in Broca's area. Activity during phonetically matching stimuli, contrasted with conflicting ones, was not enhanced in any brain region. We suggest that the increased activity in Broca's area reflects processing of conflicting visual and acoustic phonetic inputs in partly disparate neuron populations. On the other hand, matching acoustic and visual inputs would converge on the same neurons.

10.
In dynamic cluttered environments, audition and vision may benefit from each other in determining what deserves further attention and what does not. We investigated the underlying neural mechanisms responsible for attentional guidance by audiovisual stimuli in such an environment. Event-related potentials (ERPs) were measured during visual search through dynamic displays consisting of line elements that randomly changed orientation. Search accuracy improved when a target orientation change was synchronized with an auditory signal as compared to when the auditory signal was absent or synchronized with a distractor orientation change. The ERP data show that behavioral benefits were related to an early multisensory interaction over left parieto-occipital cortex (50-60 ms post-stimulus onset), which was followed by an early positive modulation (80-100 ms) over occipital and temporal areas contralateral to the audiovisual event, an enhanced N2pc (210-250 ms), and a contralateral negative slow wave (CNSW). The early multisensory interaction was correlated with behavioral search benefits, indicating that participants with a strong multisensory interaction benefited the most from the synchronized auditory signal. We suggest that an auditory signal enhances the neural response to a synchronized visual event, which increases the chances of selection in a multiple object environment.
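For readers unfamiliar with how lateralized ERP components such as the N2pc are quantified, the sketch below shows the standard contralateral-minus-ipsilateral difference averaged over a time window, followed by an across-participant correlation with a behavioral benefit, analogous to the brain-behavior correlation reported above (which used the earlier 50-60 ms interaction). The arrays, channel assignment, and numbers are simulated placeholders, not data from the study.

```python
# Hedged sketch, not the authors' pipeline: N2pc-like contralateral-minus-
# ipsilateral ERP measure plus an across-subject brain-behavior correlation.
import numpy as np
from scipy.stats import pearsonr

fs = 500                                    # sampling rate (Hz)
times = np.arange(-0.1, 0.5, 1 / fs)        # epoch from -100 to 500 ms

n_subjects = 16
rng = np.random.default_rng(0)
# erp[subject, hemisphere, time]: 0 = contralateral, 1 = ipsilateral to the target.
erp = rng.normal(0, 1e-6, (n_subjects, 2, times.size))

window = (times >= 0.210) & (times <= 0.250)
n2pc = (erp[:, 0, window] - erp[:, 1, window]).mean(axis=1)   # one value per subject

# Hypothetical per-subject behavioral benefit (accuracy gain with the tone present).
search_benefit = rng.normal(0.05, 0.02, n_subjects)

r, p = pearsonr(n2pc, search_benefit)
print(f"N2pc vs. search benefit: r = {r:.2f}, p = {p:.3f}")
```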

11.
12.
Visual emotional stimuli evoke enhanced activation in early visual cortex areas, which may help organisms to quickly detect biologically salient cues and initiate appropriate approach or avoidance behavior. Functional neuroimaging evidence for the modulation of other sensory modalities by emotion is scarce. Therefore, the aim of the present study was to test whether sensory facilitation by emotional cues can also be found in the auditory domain. We recorded auditory brain activation with functional near-infrared spectroscopy (fNIRS), a non-invasive and silent neuroimaging technique, while participants were listening to standardized pleasant, unpleasant, and neutral sounds selected from the International Affective Digitized Sound System (IADS). Pleasant and unpleasant sounds led to increased auditory cortex activation as compared to neutral sounds. This is the first study to suggest that the enhanced activation of sensory areas in response to complex emotional stimuli is apparently not restricted to the visual domain but is also evident in the auditory domain.

13.
Neurophysiological research suggests that understanding the actions of others harnesses neural circuits that would be used to produce those actions directly. We used fMRI to examine brain areas active during language comprehension in which the speaker was seen and heard while talking (audiovisual) or heard but not seen (audio-alone) or when the speaker was seen talking with the audio track removed (video-alone). We found that audiovisual speech perception activated a network of brain regions that included cortical motor areas involved in planning and executing speech production and areas subserving proprioception related to speech production. These regions included the posterior part of the superior temporal gyrus and sulcus, the pars opercularis, premotor cortex, adjacent primary motor cortex, somatosensory cortex, and the cerebellum. Activity in premotor cortex and posterior superior temporal gyrus and sulcus was modulated by the number of visually distinguishable phonemes in the stories. None of these regions was activated to the same extent in the audio- or video-alone conditions. These results suggest that integrating observed facial movements into the speech perception process involves a network of multimodal brain regions associated with speech production and that these areas contribute less to speech perception when only auditory signals are present. This distributed network could participate in recognition processing by interpreting visual information about mouth movements as phonetic information based on motor commands that could have generated those movements.

14.
Emotions are often encountered in a multimodal fashion. Consequently, contextual framing by other modalities can alter the way that an emotional facial expression is perceived and lead to emotional conflict. Whole-brain fMRI data were collected when 35 healthy subjects judged emotional expressions in faces while concurrently being exposed to emotional (scream, laughter) or neutral (yawning) sounds. The behavioral results showed that subjects rated fearful and neutral faces as more fearful when accompanied by screams than by yawns (and, for fearful faces, also than by laughs). Moreover, the imaging data revealed that incongruence of emotional valence between faces and sounds led to increased activation in the middle cingulate cortex, right superior frontal cortex, right supplementary motor area as well as the right temporoparietal junction. Against expectations, no incongruence effects were found in the amygdala. Further analyses revealed that, independent of emotional valence congruency, the left amygdala was consistently activated when the information from both modalities was emotional. If a neutral stimulus was present in one modality and an emotional one in the other, activation in the left amygdala was significantly attenuated. These results indicate that incongruence of emotional valence in audiovisual integration activates a cingulate-fronto-parietal network involved in conflict monitoring and resolution. Furthermore, in audiovisual pairings, amygdala responses seem to signal the absence of any neutral feature rather than only the presence of an emotionally charged one.

15.
Oscillatory activity in the gamma-band range in human magneto- and electroencephalogram is thought to reflect the oscillatory synchronization of cortical networks. Findings of enhanced gamma-band activity (GBA) during cognitive processes like gestalt perception, attention and memory have led to the notion that GBA may reflect the activation of internal object representations. However, there is little direct evidence suggesting that GBA is related to subjective perceptual experience. In the present study, magnetoencephalogram was recorded during an audiovisual oddball paradigm with infrequent visual (auditory /ta/ + visual /pa/) or acoustic deviants (auditory /pa/ + visual /ta/) interspersed in a sequence of frequent audiovisual standard stimuli (auditory /ta/ + visual /ta/). Sixteen human subjects had to respond to perceived acoustic changes which could be produced either by real acoustic or illusory (visual) deviants. Statistical probability mapping served to identify correlations between oscillatory activity in response to visual and acoustic deviants, respectively, and the detection rates for either type of deviant. The perception of illusory acoustic changes induced by visual deviants was closely associated with gamma-band amplitude at approximately 80 Hz between 250 and 350 ms over midline occipital cortex. In contrast, the detection of real acoustic deviants correlated positively with induced GBA at approximately 42 Hz between 200 and 300 ms over left superior temporal cortex and negatively with evoked gamma responses at approximately 41 Hz between 220 and 240 ms over occipital areas. These findings support the relevance of high-frequency oscillatory activity over early sensory areas for perceptual experience.
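A minimal sketch of how an induced gamma-band amplitude measure of the kind reported above can be extracted and related to behavior: band-pass filter around the frequency of interest, take the Hilbert envelope, average over the time window, and correlate the result with detection rates across participants. Everything below (data, sensor, filter settings) is simulated and assumed; it is not the statistical probability mapping procedure used in the study.

```python
# Assumed, simplified sketch: gamma-band envelope in a time window correlated
# with per-subject detection rates. All data here are simulated.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import spearmanr

fs = 600
times = np.arange(-0.2, 0.6, 1 / fs)
n_subjects = 16
rng = np.random.default_rng(1)
meg = rng.normal(0, 1, (n_subjects, times.size))     # single-sensor epochs (a.u.)

# Band-pass around 80 Hz (the frequency reported for illusory-change perception).
b, a = butter(4, [70, 90], btype="bandpass", fs=fs)
envelope = np.abs(hilbert(filtfilt(b, a, meg, axis=1), axis=1))

window = (times >= 0.250) & (times <= 0.350)
gba = envelope[:, window].mean(axis=1)               # gamma-band amplitude per subject

detection_rate = rng.uniform(0.4, 0.9, n_subjects)   # hypothetical illusion detection
rho, p = spearmanr(gba, detection_rate)
print(f"GBA (80 Hz, 250-350 ms) vs. detection rate: rho = {rho:.2f}, p = {p:.3f}")
```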

16.
Amidst a barrage of sensory information in the environment, the impact that individual stimuli have on our behaviour is thought to depend on the outcome of competition that occurs within and between multiple brain regions. Although biased competition models of attention have been tested in visual cortices and to a lesser extent in auditory cortex, little is known about the nature of stimulus competition outside of sensory areas. Given the hypothesized role of multiple pathways (cortical and subcortical) and specialized brain regions for processing valence information, studies involving conflicting basic emotional stimuli provide a unique opportunity to examine whether the principles of biased competition apply outside of sensory cortex. We used fMRI to examine the neural representation and resolution of emotional conflict in a sample of healthy individuals. Participants made explicit judgments about the valence of happy or fearful target facial expressions in the context of emotionally congruent, neutral, or incongruent distracters. The results suggest that emotional conflict is reflected in a dissociable manner across distinct neural regions. Posterior areas of visual cortex showed enhanced responding to congruent relative to neutral or incongruent stimuli. Orbitofrontal cortex remained sensitive to positive affect in the context of conflicting emotional stimuli. In contrast, within the amygdala, activity associated with identifying positive target expressions declined with the introduction of neutral and incongruent expressions; however, activity associated with fearful target expressions was less susceptible to the influence of emotional context. Enhanced functional connectivity was observed between medial prefrontal cortex and the amygdala during incongruent trials; the degree of connectivity was correlated with reaction time costs incurred during incongruent trials. The results are interpreted with reference to current models of emotional attention and regulation.

17.
Emotional communication is essential for successful social interactions. Emotional information can be expressed at verbal and nonverbal levels. If the verbal message contradicts the nonverbal expression, usually the nonverbal information is perceived as being more authentic, revealing the "true feelings" of the speaker. The present fMRI study investigated the cerebral integration of verbal (sentences expressing the emotional state of the speaker) and nonverbal (facial expressions and tone of voice) emotional signals using ecologically valid audiovisual stimulus material. More specifically, cerebral activation associated with the relative impact of nonverbal information on judging the affective state of a speaker (individual nonverbal dominance index, INDI) was investigated. Perception of nonverbally expressed emotions was associated with bilateral activation within the amygdala, fusiform face area (FFA), temporal voice area (TVA), and the posterior temporal cortex as well as in the midbrain and left inferior orbitofrontal cortex (OFC)/left insula. Verbally conveyed emotions were linked to increased responses bilaterally in the TVA. Furthermore, the INDI correlated with responses in the left amygdala elicited by nonverbal and verbal emotional stimuli. Correlation of the INDI with the activation within the medial OFC was observed during the processing of communicative signals. These results suggest that individuals with a higher degree of nonverbal dominance have an increased sensitivity not only to nonverbal signals but to emotional stimuli in general.

18.
Schürmann M, Raij T, Fujiki N, Hari R. NeuroImage 2002, 16(2): 434-440
The temporospatial pattern of brain activity during auditory imagery was studied using magnetoencephalography. Trained musicians were presented with visual notes and instructed to imagine the corresponding sounds. Brain activity specific to the auditory imagery task was observed, first as enhanced activity of left and right occipital areas (average onset 120-150 ms after the onset of the visual stimulus) and then spreading to the midline parietal cortex (precuneus) and to extraoccipital areas that were not activated during the visual control condition (e.g., the left temporal auditory association cortex and the left and right premotor cortices). The latest activations, with average onset latencies of 270-400 ms, clearly separate from the earliest ones, occurred in the left sensorimotor cortex and the right inferotemporal visual association cortex. These data imply a complex temporospatial activation sequence of multiple cortical areas when musicians recall firmly established audiovisual associations.

19.
Taga G, Asakawa K. NeuroImage 2007, 36(4): 1246-1252
To better understand the development of multimodal perception, we examined selectivity and localization of cortical responses to auditory and visual stimuli in young infants. Near-infrared optical topography with 24 channels was used to measure event-related cerebral oxygenation changes of the bilateral temporal cortex in 15 infants aged 2 to 4 months, when they were exposed to speech sounds lasting 3 s and checkerboard pattern reversals lasting 3 s, which were asynchronously presented with different alternating intervals. Group analysis revealed focal increases in oxy-hemoglobin and decreases in deoxy-hemoglobin in both hemispheres in response to auditory, but not to visual, stimulation. These results indicate that localized areas of the primary auditory cortex and the auditory association cortex are involved in auditory perception in infants as young as 2 months of age. In contrast to the hypothesis that perception of distinct sensory modalities may not be separated due to cross talk over the immature cortex in young infants, the present study suggests that unrelated visual events do not influence the auditory perception of awake infants.

20.
The quick detection of dynamic changes in multisensory environments is essential to survive dangerous events and orient attention to informative events. Previous studies have identified multimodal cortical areas activated by changes of visual, auditory, and tactile stimuli. In the present study, we used magnetoencephalography (MEG) to examine time-varying cortical processes responsive to unexpected unimodal changes during continuous multisensory stimulation. The results showed that there were change-driven cortical responses in multimodal areas, such as the temporo-parietal junction and middle and inferior frontal gyri, regardless of the sensory modality in which the change occurred. These multimodal activations accompanied unimodal activations, both of which generally peaked within 300 ms after the changes. Thus, neural processes responsive to unimodal changes in the multisensory environment are distributed across these cortical areas with different timing.
