Similar articles
1.
Effects of spatial congruity on audio-visual multimodal integration
Spatial constraints on multisensory integration of auditory (A) and visual (V) stimuli were investigated in humans using behavioral and electrophysiological measures. The aim was to find out whether cross-modal interactions between A and V stimuli depend on their spatial congruity, as has been found for multisensory neurons in animal studies (Stein & Meredith, 1993). Randomized sequences of unimodal (A or V) and simultaneous bimodal (AV) stimuli were presented to right- or left-field locations while subjects made speeded responses to infrequent targets of greater intensity that occurred in either or both modalities. Behavioral responses to the bimodal stimuli were faster and more accurate than to the unimodal stimuli for both same-location and different-location AV pairings. The neural basis of this cross-modal facilitation was studied by comparing event-related potentials (ERPs) to the bimodal AV stimuli with the summed ERPs to the unimodal A and V stimuli. These comparisons revealed neural interactions localized to the ventral occipito-temporal cortex (at 190 msec) and to the superior temporal cortical areas (at 260 msec) for both same- and different-location AV pairings. In contrast, ERP interactions that differed according to spatial congruity included a phase and amplitude modulation of visual-evoked activity localized to the ventral occipito-temporal cortex at 100-400 msec and an amplitude modulation of activity localized to the superior temporal region at 260-280 msec. These results demonstrate overlapping but distinctive patterns of multisensory integration for spatially congruent and incongruent AV stimuli.
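The additive-model comparison described here (ERP to AV versus the sum of the unimodal A and V ERPs) can be sketched in a few lines of NumPy. This is a minimal illustration, not the study's actual pipeline; the array names, shapes, and random data are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 500  # illustrative sizes

# Placeholder baseline-corrected epochs (trials x channels x time);
# real data would come from the recorded A, V, and AV conditions.
a_epochs = rng.standard_normal((n_trials, n_channels, n_times))
v_epochs = rng.standard_normal((n_trials, n_channels, n_times))
av_epochs = rng.standard_normal((n_trials, n_channels, n_times))

# Additive-model comparison: ERP(AV) versus ERP(A) + ERP(V).
erp_av = av_epochs.mean(axis=0)
erp_sum = a_epochs.mean(axis=0) + v_epochs.mean(axis=0)

# Interaction term: deviations from zero in a given window (e.g.,
# around 190 msec) index nonlinear multisensory integration.
interaction = erp_av - erp_sum
```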

2.
We investigated neural representations for visual perception of 10 handwritten digits and 6 visual objects from a convolutional neural network (CNN) and humans using functional magnetic resonance imaging (fMRI). Once our CNN model was fine-tuned using a pre-trained VGG16 model to recognize the visual stimuli from the digit and object categories, representational similarity analysis (RSA) was conducted using neural activations from fMRI and feature representations from the CNN model across all 16 classes. The encoded representation of the CNN model mirrored the hierarchical topography of the human visual system. The feature representations in the lower convolutional (Conv) layers showed greater similarity with the neural representations in the early visual areas and parietal cortices, including the posterior cingulate cortex. The feature representations in the higher Conv layers were encoded in the higher-order visual areas, including the ventral/medial/dorsal stream and middle temporal complex. The neural representations in the classification layers were observed mainly in the ventral stream visual cortex (including the inferior temporal cortex), superior parietal cortex, and prefrontal cortex. The representations from the CNN model were surprisingly similar to the neural representations underlying human visual perception of digits versus objects, particularly in the primary visual and associated areas. This study also illustrates the uniqueness of human visual perception: unlike in the CNN model, the neural representation of digits and objects in humans is more widely distributed across the whole brain, including the frontal and temporal areas.
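The core RSA computation, comparing a representational dissimilarity matrix (RDM) from CNN features with one from fMRI patterns, can be sketched as follows. This is a simplified illustration under assumed shapes and random placeholder data, not the study's code:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_classes = 16  # 10 digits + 6 objects, as in the study

# Placeholder class-mean patterns: CNN layer features and fMRI voxels.
cnn_features = rng.standard_normal((n_classes, 4096))
fmri_patterns = rng.standard_normal((n_classes, 2000))

# Representational dissimilarity matrices in condensed form:
# correlation distance between every pair of class patterns.
rdm_cnn = pdist(cnn_features, metric="correlation")
rdm_fmri = pdist(fmri_patterns, metric="correlation")

# RSA score: rank correlation between the two RDMs.
rho, p = spearmanr(rdm_cnn, rdm_fmri)
print(f"RSA: rho = {rho:.3f}, p = {p:.3f}")
```

Repeating this per CNN layer and per brain region yields the layer-to-area mapping the abstract describes.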

3.
From which regions of the brain do conscious representations of visual stimuli emerge? This is an important but controversial issue in neuroscience because some studies have reported a major role of the higher visual regions of the ventral pathway in conscious perception, whereas others have found neural correlates of consciousness as early as in the primary visual areas and in the thalamus. One reason for this controversy has been the difficulty in focusing on neural activity at the moment when conscious percepts are generated in the brain, excluding any bottom-up responses (not directly related to consciousness) that are induced by stimuli. In this study, we address this issue with a new approach that can induce a rapid change in conscious perception with little influence from bottom-up responses. Our results reveal that the first consciousness-related activity emerges from the higher visual region of the ventral pathway. However, this activity is rapidly diffused to the entire brain, including the early visual cortex. These results thus integrate previous "higher" and "lower" views on the emergence of neural correlates of consciousness, providing a new perspective for the temporal dynamics of consciousness.

4.
Percutaneous magnetic coil (MC) stimulation of human occipital cortex was tested for its effects on the perception of three briefly presented, randomly generated alphabetical characters. When the visual stimulus-MC pulse interval was less than 40-60 msec, or more than 120-140 msec, letters were correctly reported; at test intervals of 80-100 msec, a blur or nothing was seen. Shifting the MC location along the transverse and rostro-caudal axes had effects consistent with the topographical representation in visual cortex, but incompatible with an effect on attention or with suppression from an eyeblink. The MC pulse probably acts by eliciting IPSPs in visual cortex. The neural activity subserving letter recognition is probably transmitted from visual cortex within 140 msec of the visual stimulus.

5.
Form-from-motion: MEG evidence for time course and processing sequence
The neural mechanisms and role of attention in the processing of visual form defined by luminance or motion cues were studied using magnetoencephalography. Subjects viewed bilateral stimuli composed of moving random dots and were instructed to covertly attend to either left or right hemifield stimuli in order to detect designated target stimuli that required a response. To generate form-from-motion (FFMo) stimuli, a subset of the dots could begin to move coherently to create the appearance of a simple form (e.g., square). In other blocks, to generate form-from-luminance (FFLu) stimuli that served as a control, a gray stimulus was presented superimposed on the randomly moving dots. Neuromagnetic responses were observed to both the FFLu and FFMo stimuli and localized to multiple visual cortical stages of analysis. Early activity in low-level visual cortical areas (striate/early extrastriate) did not differ for FFLu versus FFMo stimuli, nor as a function of spatial attention. Longer latency responses elicited by the FFLu stimuli were localized to the ventral-lateral occipital cortex (LO) and the inferior temporal cortex (IT). The FFMo stimuli also generated activity in the LO and IT, but only after first eliciting activity in the lateral occipital cortical region corresponding to MT/V5, resulting in a 50-60 msec delay in activity. All of these late responses (MT/V5, LO, and IT) were significantly modulated by spatial attention, being greatly attenuated for ignored FFLu and FFMo stimuli. These findings argue that processing of form in IT that is defined by motion requires a serial processing of information, first in the motion analysis pathway from V1 to MT/V5 and thereafter via the form analysis stream in the ventral visual pathway to IT.

6.
Automatic attention to emotional stimuli: neural correlates
We investigated the capability of emotional and nonemotional visual stimulation to capture automatic attention, an aspect of the interaction between cognitive and emotional processes that has received scant attention from researchers. Event-related potentials were recorded from 37 subjects using a 60-electrode array, and were submitted to temporal and spatial principal component analyses to detect and quantify the main components, and to source localization software (LORETA) to determine their spatial origin. Stimuli capturing automatic attention were of three types: emotionally positive, emotionally negative, and nonemotional pictures. Results suggest that initially (P1: 105 msec after stimulus), automatic attention is captured by negative pictures, and not by positive or nonemotional ones. Later (P2: 180 msec), automatic attention remains captured by negative pictures, but also by positive ones. Finally (N2: 240 msec), attention is captured only by positive and nonemotional stimuli. Anatomically, this sequence is characterized by decreasing activation of the visual association cortex (VAC) and by the growing involvement, from dorsal to ventral areas, of the anterior cingulate cortex (ACC). Analyses suggest that the ACC, and not the VAC, is responsible for the experimental effects described above. Intensity, latency, and location of neural activity related to automatic attention thus depend clearly on the stimulus emotional content and on its associated biological importance.
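The temporal PCA step used to detect and quantify ERP components can be sketched as below. The classic approach additionally applies a varimax rotation to the factors; this simplified version, with assumed dimensions and random placeholder data, shows only the core decomposition:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_obs, n_times = 37 * 60, 300  # subject x electrode waveforms, time samples

# Placeholder ERP matrix: each row is one subject/electrode waveform.
erp = rng.standard_normal((n_obs, n_times))

# Temporal PCA: components are characteristic time courses (P1-, P2-,
# N2-like factors); scores give each waveform's amplitude on them.
tpca = PCA(n_components=5)
scores = tpca.fit_transform(erp)   # (n_obs, 5) factor amplitudes
components = tpca.components_      # (5, n_times) temporal factors
```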

7.
Speech perception in noisy environments is enhanced by seeing facial movements of communication partners. However, the neural mechanisms by which audio and visual speech are combined are not fully understood. We explored phase-locking to auditory and visual signals in MEG recordings from 14 human participants (6 females, 8 males) who reported words from single spoken sentences. We manipulated the acoustic clarity and visual speech signals such that critical speech information was present in auditory, visual, or both modalities. MEG coherence analysis revealed that both auditory and visual speech envelopes (auditory amplitude modulations and lip aperture changes) were phase-locked to 2-6 Hz brain responses in auditory and visual cortex, consistent with entrainment to syllable-rate components. Partial coherence analysis was used to separate neural responses to correlated audio-visual signals and showed non-zero phase-locking to the auditory envelope in occipital cortex during audio-visual (AV) speech. Furthermore, phase-locking to auditory signals in visual cortex was enhanced for AV speech compared with audio-only speech that was matched for intelligibility. Conversely, auditory regions of the superior temporal gyrus did not show above-chance partial coherence with visual speech signals during AV conditions but did show partial coherence in visual-only conditions. Hence, visual speech enabled stronger phase-locking to auditory signals in visual areas, whereas phase-locking of visual speech in auditory regions only occurred during silent lip-reading. Differences in these cross-modal interactions between auditory and visual speech signals are interpreted in line with cross-modal predictive mechanisms during speech perception.

SIGNIFICANCE STATEMENT: Verbal communication in noisy environments is challenging, especially for hearing-impaired individuals. Seeing facial movements of communication partners improves speech perception when auditory signals are degraded or absent. The neural mechanisms supporting lip-reading or audio-visual benefit are not fully understood. Using MEG recordings and partial coherence analysis, we show that speech information is used differently in brain regions that respond to auditory and visual speech. While visual areas use visual speech to improve phase-locking to auditory speech signals, auditory areas do not show phase-locking to visual speech unless auditory speech is absent and visual speech is used to substitute for missing auditory signals. These findings highlight brain processes that combine visual and auditory signals to support speech understanding.
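Partial coherence, the key analysis here, measures phase-locking between two signals after removing the component both share with a third. A minimal SciPy sketch, using random placeholder signals and assumed sampling parameters rather than the study's actual data or pipeline:

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(0)
fs, n = 200, 120 * 200  # two minutes at 200 Hz (illustrative)

# Placeholder signals: an MEG sensor, the auditory amplitude envelope,
# and the lip-aperture time course.
meg = rng.standard_normal(n)
aud = rng.standard_normal(n)
lip = rng.standard_normal(n)

nps = 2 * fs  # 2 s windows -> 0.5 Hz frequency resolution
f, s_ma = csd(meg, aud, fs=fs, nperseg=nps)
_, s_ml = csd(meg, lip, fs=fs, nperseg=nps)
_, s_al = csd(aud, lip, fs=fs, nperseg=nps)
_, s_mm = welch(meg, fs=fs, nperseg=nps)
_, s_aa = welch(aud, fs=fs, nperseg=nps)
_, s_ll = welch(lip, fs=fs, nperseg=nps)

# Partial coherence between MEG and the auditory envelope after
# removing the component both share with the lip signal.
num = s_ma - s_ml * np.conj(s_al) / s_ll
den = (s_mm - np.abs(s_ml) ** 2 / s_ll) * (s_aa - np.abs(s_al) ** 2 / s_ll)
partial_coh = np.abs(num) ** 2 / den

# Inspect the 2-6 Hz syllable-rate band discussed in the abstract.
band = (f >= 2) & (f <= 6)
print(partial_coh[band].mean())
```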

8.
How does the brain represent the passage of time at the subsecond scale? Although different conceptual models for time perception have been proposed, its neurophysiological basis remains unknown. We took advantage of a visual duration illusion produced by stimulus novelty to link changes in cortical activity in monkeys with distortions of duration perception in humans. We found that human subjects perceived the duration of a subsecond motion pulse with a novel direction as longer than that of a motion pulse with a repeated direction. Recording from monkeys viewing identical motion stimuli but performing a different behavioral task, we found that both the duration and amplitude of the neural response in the middle temporal area of visual cortex were positively correlated with the degree of novelty of the motion direction. In contrast to previous accounts that attribute distortions in duration perception to changes in the speed of a putative internal clock, our results suggest that the known adaptive properties of neural activity in visual cortex contribute to subsecond temporal distortions.

9.
Cortical signals associated with visual imagery of letters were recorded from 10 healthy adults with a whole-scalp 122-channel neuromagnetometer. The auditory stimulus sequence consisted of 20 different phonemes corresponding to single letters of the Roman alphabet and of tone pips (17%), delivered once every 1.5 sec in a random order. The subjects were instructed to visually imagine the letter corresponding to the auditory stimulus and to examine its visuospatial properties: The associated brain activity was compared with activity evoked by the same stimuli when the subjects just detected the intervening tones. All subjects produced broad imagery-related responses over multiple cortical regions. After initial activation of the auditory cortices, the earliest imagery-related responses originated in the left prerolandic area 320 msec after the voice onset. They were followed within 70 msec by signals originating in the posterior parietal lobe close to midline (precuneus) and, 100 msec later, in the posterior superior temporal areas, predominantly in the left hemisphere. The activations were sustained and partially overlapping in time. Imagery-related activity in the left lateral occipital cortex was observed in two subjects, and weak late activity in the calcarine cortex in one subject. Real audiovisually presented letters activated multiple brain regions, and task-induced visuospatial processing of these stimuli further increased activity in some of these regions and activated additional areas: Some of these areas were activated during imagery as well. The results suggest that certain brain areas involved in high-level visual perception are activated during visual imagery and that the extent of imagery-related activity is dictated by the requirements of the stimuli and the task.

10.
This study examined whether differential neural responses are evoked by emotional stimuli with and without conscious perception, in a patient with visual neglect and extinction. Stimuli were briefly shown in either the right, left, or both fields during event-related fMRI. On bilateral trials, either a fearful or neutral left face appeared with a right house, and it could either be extinguished from awareness or perceived. Seen faces in the left visual field (LVF) activated primary visual cortex in the damaged right hemisphere and bilateral fusiform gyri. Extinguished left faces increased activity in striate and extrastriate cortex, compared with right houses only. Critically, fearful faces activated the left amygdala and extrastriate cortex both when seen and when extinguished, as well as bilateral orbitofrontal and intact right superior parietal areas. Comparison of perceived versus extinguished faces revealed no difference in the amygdala for fearful faces. Conscious perception increased activity in fusiform, parietal, and prefrontal areas of the left hemisphere, irrespective of emotional expression, while a differential emotional response to fearful faces occurring specifically with awareness was found in bilateral parietal, temporal, and frontal areas. These results demonstrate that the amygdala and orbitofrontal cortex can be activated by emotional stimuli even without awareness after parietal damage, and that substantial unconscious residual processing can occur within spared brain areas well beyond visual cortex, despite neglect and extinction.

11.
At any given moment our sensory systems receive multiple, often rhythmic, inputs from the environment. Temporal structure in one sensory modality has been proposed to guide both behavioral and neural processing of events in other sensory modalities, but whether this occurs remains unclear. Here, we used human electroencephalography (EEG) to test the cross-modal influences of a continuous auditory frequency-modulated (FM) sound on visual perception and visual cortical activity. We report systematic fluctuations in perceptual discrimination of brief visual stimuli in line with the phase of the FM sound. We further show that this rhythmic modulation in visual perception is related to an accompanying rhythmic modulation of neural activity recorded over visual areas. Importantly, in our task, perceptual and neural visual modulations occurred without any abrupt and salient onsets in the energy of the auditory stimulation and without any rhythmic structure in the visual stimulus. As such, the results provide a critical validation for the existence and functional role of cross-modal entrainment and demonstrate its utility for organizing the perception of multisensory stimulation in the natural environment.

SIGNIFICANCE STATEMENT: Our sensory environment is filled with rhythmic structures that are often multisensory in nature. Here, we show that the alignment of neural activity to the phase of an auditory frequency-modulated (FM) sound has cross-modal consequences for vision, yielding systematic fluctuations in perceptual discrimination of brief visual stimuli that are mediated by an accompanying rhythmic modulation of neural activity recorded over visual areas. These cross-modal effects on visual neural activity and perception occurred without any abrupt and salient onsets in the energy of the auditory stimulation and without any rhythmic structure in the visual stimulus. The current work shows that continuous auditory fluctuations in the natural environment can provide a pacing signal for neural activity and perception across the senses.
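The behavioral analysis implied here, testing whether accuracy fluctuates with the phase of the FM sound, is typically done by binning single-trial accuracy by stimulus phase and quantifying the first-harmonic modulation. A minimal sketch with simulated placeholder data (the 0.1 modulation depth is built in for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 1000

# Placeholder single-trial data: FM-sound phase at visual-target onset
# and discrimination accuracy, with a built-in phasic effect.
phase = rng.uniform(-np.pi, np.pi, n_trials)
correct = rng.random(n_trials) < (0.7 + 0.1 * np.cos(phase))

# Bin accuracy by phase to reveal the rhythmic modulation.
edges = np.linspace(-np.pi, np.pi, 9)
idx = np.digitize(phase, edges) - 1
acc = np.array([correct[idx == k].mean() for k in range(8)])

# Amplitude of a first-harmonic (cosine) fit quantifies the depth of
# the phasic modulation (~0.1 here, matching the simulated effect).
amp = 2 * np.abs(np.mean(correct * np.exp(1j * phase)))
print(acc.round(2), f"modulation depth ~ {amp:.2f}")
```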

12.
The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.
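A conjunction analysis of the kind mentioned for Experiment 1 can be reduced to a minimum-statistic intersection of thresholded unimodal maps. This sketch uses random placeholder z-maps and an assumed threshold, not the study's actual statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (40, 48, 40)  # illustrative voxel grid

# Placeholder z-maps from the unimodal auditory and visual contrasts.
z_aud = rng.standard_normal(shape)
z_vis = rng.standard_normal(shape)

# Minimum-statistic conjunction: a voxel counts as coactivated only
# if it exceeds threshold in BOTH unimodal maps.
z_thresh = 3.09  # ~p < .001, one-tailed
coactive = (z_aud > z_thresh) & (z_vis > z_thresh)
print(f"{coactive.sum()} coactivated voxels")
```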

13.
The speeding-up of neural processing associated with attended events (i.e., the prior-entry effect) has long been proposed as a viable mechanism by which attention can prioritize our perception and action. In the brain, this has been thought to be regulated through a sensory gating mechanism, increasing the amplitudes of early evoked potentials while leaving their latencies unaffected. However, the majority of previous research has emphasized speeded responding rather than fine temporal discrimination, thereby potentially lacking the sensitivity to reveal putative modulations in the timing of neural processing. In the present study, we used a cross-modal temporal order judgment task while shifting attention between the visual and tactile modalities to investigate the mechanisms underlying selective attention electrophysiologically. Our results indicate that attention can indeed speed up neural processes during visual perception, thereby providing the first electrophysiological support for the existence of prior entry.
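Behaviorally, prior entry in a temporal order judgment (TOJ) task is usually quantified as a shift in the point of subjective simultaneity (PSS), estimated by fitting a psychometric function to order judgments across stimulus onset asynchronies (SOAs). A minimal sketch with made-up response proportions:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import expit

# Placeholder TOJ data: SOA in ms (positive = visual leads) and the
# proportion of "visual first" responses at each SOA.
soa = np.array([-120.0, -80.0, -40.0, 0.0, 40.0, 80.0, 120.0])
p_vis_first = np.array([0.08, 0.15, 0.35, 0.55, 0.78, 0.90, 0.96])

def logistic(x, pss, scale):
    # PSS is the 50% point: the SOA at which order judgments are at chance.
    return expit((x - pss) / scale)

(pss, scale), _ = curve_fit(logistic, soa, p_vis_first, p0=(0.0, 30.0))

# Prior entry predicts the PSS shifts so that the attended modality
# needs less of a head start to be judged as first.
print(f"PSS = {pss:.1f} ms, slope scale = {scale:.1f} ms")
```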

14.
Observing a speaker's articulatory gestures can contribute considerably to auditory speech perception. At the level of neural events, seen articulatory gestures can modify auditory cortex responses to speech sounds and can modulate auditory cortex activity even in the absence of heard speech. However, possible effects of attention on this modulation have remained unclear. To investigate the effect of attention on visual speech-induced auditory cortex activity, we scanned 10 healthy volunteers with functional magnetic resonance imaging (fMRI) at 3 T during simultaneous presentation of visual speech gestures and moving geometrical forms, with the instruction to either focus on or ignore the seen articulations. Secondary auditory cortex areas in the bilateral posterior superior temporal gyrus and planum temporale were active both when the articulatory gestures were ignored and when they were attended to. However, attention to visual speech gestures enhanced activity in the left planum temporale compared to the situation when the subjects saw identical stimuli but engaged in a nonspeech motion discrimination task. These findings suggest that attention to visually perceived speech gestures modulates auditory cortex function and that this modulation takes place at a hierarchically relatively early processing level.

15.
In everyday life, temporal information is used for both perception and action, but whether these two functions reflect the operation of similar or different neural circuits is unclear. We used functional magnetic resonance imaging to investigate the neural correlates of processing temporal information when either a motor or a perceptual representation is used. Participants viewed two identical sequences of visual stimuli and used the information differently to perform either a temporal reproduction or a temporal estimation task. By comparing brain activity evoked by these tasks and control conditions, we explored commonalities and differences in brain areas involved in reproduction and estimation of temporal intervals. The basal ganglia and the cerebellum were commonly active in both temporal tasks, consistent with suggestions that perception and production of time are subserved by the same mechanisms. However, only in the reproduction task was activity observed in a wider cortical network, including the right pre-SMA, left middle frontal gyrus, and left premotor cortex, with more reliable activity in the right inferior parietal cortex, left fusiform gyrus, and the right extrastriate visual area V5/MT. Our findings point to a role for the parietal cortex as an interface between sensory and motor processes and suggest that it may be a key node in the translation of temporal information into action. Furthermore, we discuss the potential importance of the extrastriate cortex in processing visual time in the context of recent findings.

16.
Visual detection of body motion is of immense importance for daily-life activities and social nonverbal interaction. Although the neurobiological mechanisms underlying visual processing of human locomotion are being explored extensively by brain imaging, the role of structural brain connectivity is not well understood. Here we investigate the cortical evoked neuromagnetic response to point-light body motion in healthy adolescents and in patients with early periventricular lesions, periventricular leukomalacia (PVL), that disrupt brain connectivity. In a simultaneous masking paradigm, participants detected the presence of a point-light walker embedded in sets of spatially scrambled dots derived from the joints of a walker. The visual sensitivity to camouflaged human locomotion was lower in PVL patients. In accord with the behavioral data, the root-mean-square (RMS) amplitude of the neuromagnetic trace in response to human locomotion was lower in PVL patients at latencies of 180-244 msec over the right temporal cortex. In this time window, the visual sensitivity to body motion in controls, but not in PVL patients, was inversely linked to the right temporal activation. At later latencies of 276-340 msec, we found a reduction in RMS amplitude in PVL patients for body motion stimuli over the right frontal cortex. The findings indicate that disturbances in brain connectivity with the right temporal cortex, a key node of the social brain, and with the right frontal cortex lead to disintegration of the neural network engaged in visual processing of body motion. We suspect that reduced cortical response to body motion over the right temporal and frontal cortices might underlie deficits in visual social cognition.
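The RMS amplitude measure used here collapses a multichannel sensor trace into a single time course. A minimal NumPy sketch with placeholder data and assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_times, fs = 30, 600, 1000  # illustrative sensor group

# Placeholder evoked response over right temporal sensors (channels x time).
evoked = rng.standard_normal((n_channels, n_times))

# RMS across sensors collapses the multichannel trace into a single
# amplitude time course, as used here to compare groups.
rms = np.sqrt((evoked ** 2).mean(axis=0))

# Mean RMS in the 180-244 msec window where group differences emerged.
t_ms = np.arange(n_times) / fs * 1000
window = (t_ms >= 180) & (t_ms <= 244)
print(f"mean RMS in window: {rms[window].mean():.3f}")
```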

17.
Voluntary attention changes the speed of perceptual neural processing
While previous studies in psychology have demonstrated that humans can respond more quickly to stimuli at attended than unattended locations, it remains unclear whether attention also accelerates the speed of perceptual neural activity in the human brain. One possible reason for this lack of clarity is the insufficient spatial resolution of previous electroencephalography (EEG) and magnetoencephalography (MEG) techniques, in which neural signals from multiple brain regions are merged with each other. Here, we addressed this issue by combining MEG with a novel stimulus-presentation technique that can focus on neural signals from higher visual cortex, where the magnitude of attentional modulation is prominent. Results revealed that the allocation of spatial attention induces both an increase in neural intensity (attentional enhancement) and a decrease in neural latency (attentional acceleration) for attended compared to unattended visual stimuli (Experiment 1). Furthermore, the attention-induced behavioural facilitation reported in previous psychological studies (Posner paradigm) was closely correlated with the neural 'acceleration' rather than the 'enhancement' in the visual cortex (Experiment 2). In addition to bridging a gap between previous psychological and neurological findings, our results demonstrate the temporal dynamics of attentional modulation in the human brain.
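One common way to quantify the latency shift ("attentional acceleration") between two evoked responses is a fractional-peak latency measure. The sketch below simulates Gaussian-bump responses with a built-in 15 ms latency difference; the response shape, noise level, and latencies are illustrative assumptions, not the study's data:

```python
import numpy as np

fs = 1000
t = np.arange(0, 0.4, 1 / fs)  # 0-400 ms
rng = np.random.default_rng(0)

def evoked(latency_s):
    # Gaussian-bump evoked response plus noise (illustrative only).
    bump = np.exp(-((t - latency_s) ** 2) / (2 * 0.02 ** 2))
    return bump + 0.05 * rng.standard_normal(t.size)

attended = np.mean([evoked(0.150) for _ in range(12)], axis=0)
unattended = np.mean([evoked(0.165) for _ in range(12)], axis=0)

def frac_peak_latency(trace, frac=0.5):
    # First time the trace reaches `frac` of its peak; often more
    # robust than the raw peak for latency comparisons.
    return t[np.argmax(trace >= frac * trace.max())] * 1000  # ms

shift = frac_peak_latency(unattended) - frac_peak_latency(attended)
print(f"attentional acceleration ~ {shift:.0f} ms")
```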

18.
Research into the neural mechanisms of attention has revealed a complex network of brain regions that are involved in the execution of attention-demanding tasks. Recent advances in human neuroimaging now permit investigation of the elementary processes of attention that are subserved by specific components of the brain's attention system. Here we describe recent studies of spatial selective attention that made use of positron emission tomography (PET), functional magnetic resonance imaging (fMRI), and event-related brain potentials (ERPs) to investigate the spatio-temporal dynamics of attention-related neural activity. We first review the results from an event-related fMRI study that examined the neural mechanisms underlying top-down attentional control versus selective sensory perception. These results defined a fronto-temporal-parietal network involved in the control of spatial attention. Activity in these areas biased the neural activity in sensory brain structures coding the spatial locations of upcoming target stimuli, preceding a modulation of subsequent target processing in visual cortex. We then present preliminary evidence from a fast-rate event-related fMRI study of spatial attention that demonstrates how to disentangle the potentially overlapping hemodynamic responses elicited by temporally adjacent stimuli in studies of attentional control. Finally, we present new analyses from combined neuroimaging (PET) and event-related brain potential (ERP) studies that together reveal the time course of activation of brain regions implicated in attentional control and selective perception.

19.
The purpose of this study was to determine the functional organization of the human brain involved in cross-modal discrimination between tactile and visual information. Regional cerebral blood flow was measured by positron emission tomography in nine right-handed volunteers during four discrimination tasks: tactile-tactile (TT), tactile-visual (TV), visual-tactile (VT), and visual-visual (VV). The subjects were asked either to look at digital cylinders of different diameters or to grasp the cylinders with the thumb and index finger of the right hand using haptic interfaces. Compared with the motor control task, in which the subjects looked at and grasped cylinders of the same diameter, the right lateral prefrontal cortex and the right inferior parietal lobule were activated in all four discrimination tasks. In addition, the dorsal premotor cortex, the ventral premotor cortex, and the inferior temporal cortex of the right hemisphere were activated during VT but not during TV. Our results suggest that the human brain mechanisms underlying cross-modal discrimination comprise two different pathways depending on the temporal order in which stimuli are presented.

20.
Brain areas activated by stimuli in the left visual field of a right parietal patient suffering from left visual extinction were identified using event-related functional magnetic resonance imaging. Left visual field stimuli that were extinguished from awareness still activated the ventral visual cortex, including areas in the damaged right hemisphere. An extinguished face stimulus on the left produced robust category-specific activation of the right fusiform face area. On trials where the left visual stimulus was consciously seen rather than extinguished, greater activity was found in the ventral visual cortex of the damaged hemisphere, and also in frontal and parietal areas of the intact hemisphere. These findings extend recent observations on visual extinction, suggesting distinct neural correlates for conscious and unconscious perception.
