Similar articles
20 similar articles found (search time: 46 ms)
1.
Real-world objects approaching or passing by an observer often generate visual, auditory, and tactile signals with different onsets and durations. Prompt detection and avoidance of an impending threat depend on precise binding of looming signals across modalities. Here we constructed a multisensory apparatus to study the spatiotemporal integration of looming visual and tactile stimuli near the face. In a psychophysical experiment, subjects assessed the subjective synchrony between a looming ball and an air puff delivered to the same side of the face with a varying temporal offset. Multisensory stimuli with similar onset times were perceived as completely out of sync and assessed with the lowest subjective synchrony index (SSI). Across subjects, the SSI peaked at an offset between 800 and 1,000 ms, where the multisensory stimuli were perceived as optimally in sync. In an fMRI experiment, tactile, visual, tactile-visual out-of-sync (TVoS), and tactile-visual in-sync (TViS) stimuli were delivered to either side of the face in randomized events. Group-average statistical responses to different stimuli were compared within each surface-based region of interest (sROI) outlined on the cortical surface. Most sROIs showed a preference for contralateral stimuli and higher responses to multisensory than unisensory stimuli. In several bilateral sROIs, particularly the human MT+ complex and V6A, responses to spatially aligned multisensory stimuli (TVoS) were further enhanced when the stimuli were in-sync (TViS), as expressed by TVoS < TViS. This study demonstrates the perceptual and neural mechanisms of multisensory integration near the face, which has potential applications in the development of multisensory entertainment systems and media.
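The psychophysical result above, an SSI peak between 800 and 1,000 ms, is what one would recover by fitting a tuning curve to synchrony judgments across offsets. A minimal sketch, assuming a Gaussian shape and using illustrative values rather than the study's data:

import numpy as np
from scipy.optimize import curve_fit

def gaussian(offset, peak, mu, sigma, baseline):
    """Gaussian tuning of perceived synchrony over temporal offset (ms)."""
    return baseline + peak * np.exp(-0.5 * ((offset - mu) / sigma) ** 2)

offsets_ms = np.array([0, 200, 400, 600, 800, 1000, 1200, 1400])
ssi = np.array([0.05, 0.15, 0.40, 0.70, 0.95, 0.90, 0.55, 0.25])  # hypothetical

params, _ = curve_fit(gaussian, offsets_ms, ssi, p0=[1.0, 800.0, 300.0, 0.0])
print(f"fitted SSI peak at an offset of ~{params[1]:.0f} ms")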

2.
An object's motion relative to an observer can confer ethologically meaningful information. Approaching or looming stimuli can signal threats/collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and more recently multisensory looming stimuli, none has investigated whether there is integration of looming signals between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings. Maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perceptions and action is therefore supported.

3.
Behavioral and brain responses to identical stimuli can vary with experimental and task parameters, including the context of stimulus presentation or attention. More surprisingly, computational models suggest that noise-related random fluctuations in brain responses to stimuli would alone be sufficient to engender perceptual differences between physically identical stimuli. In two experiments combining psychophysics and EEG in healthy humans, we investigated brain mechanisms whereby identical stimuli are (erroneously) perceived as different (higher vs lower in pitch or longer vs shorter in duration) in the absence of any change in the experimental context. Even though, as expected, participants' percepts to identical stimuli varied randomly, a classification algorithm based on a mixture of Gaussians model (GMM) showed that there was sufficient information in single-trial EEG to reliably predict participants' judgments of the stimulus dimension. By contrasting electrical neuroimaging analyses of auditory evoked potentials (AEPs) to the identical stimuli as a function of participants' percepts, we identified the precise timing and neural correlates (strength vs topographic modulations) as well as intracranial sources of these erroneous perceptions. In both experiments, AEP differences first occurred ~100 ms after stimulus onset and were the result of topographic modulations following from changes in the configuration of active brain networks. Source estimations localized the origin of variations in perceived pitch of identical stimuli within right temporal and left frontal areas and of variations in perceived duration within right temporoparietal areas. We discuss our results in terms of providing neurophysiologic evidence for the contribution of random fluctuations in brain activity to conscious perception.
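The decoding step can be illustrated with a minimal sketch, not the authors' pipeline: fit one Gaussian mixture per reported percept on single-trial EEG features (random placeholders below) and assign held-out trials to the class with the higher likelihood.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_higher = rng.normal(0.3, 1.0, size=(100, 8))   # trials x EEG features
X_lower = rng.normal(-0.3, 1.0, size=(100, 8))

gmm_higher = GaussianMixture(n_components=2, random_state=0).fit(X_higher)
gmm_lower = GaussianMixture(n_components=2, random_state=0).fit(X_lower)

X_test = np.vstack([X_higher[:20], X_lower[:20]])  # would be held-out trials
predicted_higher = gmm_higher.score_samples(X_test) > gmm_lower.score_samples(X_test)
labels = np.r_[np.ones(20), np.zeros(20)].astype(bool)
print(f"decoding accuracy: {np.mean(predicted_higher == labels):.2f}")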

4.
The aim of this study was to investigate neural dynamics of audiovisual temporal fusion processes in 6-month-old infants using event-related brain potentials (ERPs). In a habituation-test paradigm, infants did not show any behavioral signs of discrimination of an audiovisual asynchrony of 200 ms, indicating perceptual fusion. In a subsequent EEG experiment, audiovisual synchronous stimuli and stimuli with a visual delay of 200 ms were presented in random order. In contrast to the behavioral data, brain activity differed significantly between the two conditions. Critically, N1 and P2 latency delays were not observed between synchronous and fused items, contrary to previously observed N1 and P2 latency delays between synchrony and perceived asynchrony. Hence, temporal interaction processes in the infant brain between the two sensory modalities varied as a function of perceptual fusion versus asynchrony perception. The visual recognition components Pb and Nc were modulated prior to sound onset, emphasizing the importance of anticipatory visual events for the prediction of auditory signals. Results suggest mechanisms by which young infants predictively adjust their ongoing neural activity to the temporal synchrony relations to be expected between vision and audition.

5.
The synchronous occurrence of the unisensory components of a multisensory stimulus contributes to their successful merging into a coherent perceptual representation. Oscillatory gamma-band responses (GBRs, 30-80 Hz) have been linked to feature integration mechanisms and to multisensory processing, suggesting they may also be sensitive to the temporal alignment of multisensory stimulus components. Here we examined the effects on early oscillatory GBR brain activity of varying the precision of the temporal synchrony of the unisensory components of an audio-visual stimulus. Audio-visual stimuli were presented with stimulus onset asynchronies ranging from -125 to +125 ms. Randomized streams of auditory (A), visual (V), and audio-visual (AV) stimuli were presented centrally while subjects attended to either the auditory or visual modality to detect occasional targets. GBRs to auditory and visual components of multisensory AV stimuli were extracted for five subranges of asynchrony (e.g., A preceded by V by 100±25 ms, by 50±25 ms, etc.) and compared with GBRs to unisensory control stimuli. Robust multisensory interactions were observed in the early GBRs when the auditory and visual stimuli were presented with the closest synchrony. These effects were found over medial-frontal brain areas after 30-80 ms and over occipital brain areas after 60-120 ms. A second integration effect, possibly reflecting the perceptual separation of the two sensory inputs, was found over occipital areas when auditory inputs preceded visual by 100±25 ms. No significant interactions were observed for the other subranges of asynchrony. These results show that the precision of temporal synchrony can have an impact on early cross-modal interactions in human cortex.
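One common way to extract GBRs of this kind, assumed here for illustration since the abstract does not spell out the exact time-frequency method, is to band-pass the EEG at 30-80 Hz and average the Hilbert amplitude envelope across trials:

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                   # sampling rate in Hz (assumed)
trials = np.random.randn(200, int(fs))       # trials x samples, placeholder EEG

b, a = butter(4, [30.0, 80.0], btype="bandpass", fs=fs)
gamma = filtfilt(b, a, trials, axis=1)       # 30-80 Hz band-limited signal
envelope = np.abs(hilbert(gamma, axis=1))    # instantaneous gamma amplitude
gbr = envelope.mean(axis=0)                  # trial-averaged gamma-band response
print("peak GBR at", np.argmax(gbr) / fs * 1000, "ms")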

6.
Thorne JD, Debener S. Neuroreport. 2008;19(5):553-557
Multisensory behavioral benefits generally occur when one modality provides improved or disambiguating information to another. Here, we show benefits when no information is apparently provided. Participants performed an auditory frequency discrimination task in which auditory stimuli were paired with uninformative visual stimuli. Visual-auditory stimulus onset asynchrony was varied from -10 ms (sound first) to 80 ms without compromising perceptual simultaneity. In most stimulus onset asynchrony conditions, response times to audiovisual pairs were significantly shorter than to auditory-alone controls. This suggests a general processing advantage for multisensory stimuli over unisensory stimuli, even when only one modality is informative. Response times were shortest with an auditory delay of 65 ms, indicating an audiovisual 'perceptual optimum' that may be related to processing simultaneity.

7.
Sensorimotor co-ordination in mammals is achieved predominantly via the activity of the basal ganglia. To investigate the underlying multisensory information processing, we recorded the neuronal responses in the caudate nucleus (CN) and substantia nigra (SN) of anaesthetized cats to visual, auditory or somatosensory stimulation alone and also to their combinations, i.e. multisensory stimuli. The main goal of the study was to ascertain whether multisensory information provides more information to the neurons than do the individual sensory components. A majority of the investigated SN and CN multisensory units exhibited significant cross-modal interactions. The multisensory response enhancements were either additive or superadditive; multisensory response depressions were also detected. CN and SN cells with facilitatory and inhibitory interactions were found in each multisensory combination. The strengths of the multisensory interactions did not differ in the two structures. A significant inverse correlation was found between the strengths of the best unimodal responses and the magnitudes of the multisensory response enhancements, i.e. the neurons with the weakest net unimodal responses exhibited the strongest enhancement effects. The onset latencies of the responses of the integrative CN and SN neurons to the multisensory stimuli were significantly shorter than those to the unimodal stimuli. These results provide evidence that the multisensory CN and SN neurons, similarly to those in the superior colliculus and related structures, have the ability to integrate multisensory information. Multisensory integration may help in the effective processing of sensory events and the changes in the environment during motor actions controlled by the basal ganglia.
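The enhancements and depressions reported here are conventionally quantified with the Meredith-and-Stein interactive index, the percent change of the multisensory response relative to the best unisensory response; a minimal sketch under that assumption (the paper may use a variant), with illustrative firing rates:

def enhancement_index(multisensory, best_unisensory):
    """Percent change of the multisensory response relative to the best
    unisensory response; > 0 = enhancement, < 0 = depression."""
    return 100.0 * (multisensory - best_unisensory) / best_unisensory

# e.g., a CN neuron firing at 12 spikes/s to combined stimulation vs
# 5 spikes/s to its best unimodal input shows strong enhancement:
print(enhancement_index(12.0, 5.0))   # 140.0 (%)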

8.
Multisensory integration is essential for the expression of complex behaviors in humans and animals. However, few studies have investigated the neural sites where multisensory integration may occur. Therefore, we used electrophysiology and retrograde labeling to study a region of the rat parietotemporal cortex that responds uniquely to auditory and somatosensory multisensory stimulation. This multisensory responsiveness suggests a functional organization resembling multisensory association cortex in cats and primates. Extracellular multielectrode surface mapping defined a region between auditory and somatosensory cortex where responses to combined auditory/somatosensory stimulation were larger in amplitude and earlier in latency than responses to either stimulus alone. Moreover, multisensory responses were nonlinear and differed from the summed unimodal responses. Intracellular recording found almost exclusively multisensory cells that responded to both unisensory and multisensory stimulation with excitatory postsynaptic potentials (EPSPs) and/or action potentials, conclusively defining a multisensory zone (MZ). In addition, intracellular responses were similar to extracellular recordings, with larger and earlier EPSPs evoked by multisensory stimulation, and interactions suggesting nonlinear postsynaptic summation to combined stimuli. Thalamic input to MZ from unimodal auditory and somatosensory thalamic relay nuclei and from multisensory thalamic regions supports the idea that parallel thalamocortical projections may drive multisensory functions as strongly as corticocortical projections. Whereas the MZ integrates uni- and multisensory thalamocortical afferent streams, it may ultimately influence brainstem multisensory structures such as the superior colliculus.

9.
Most perceptual decisions rely on the active acquisition of evidence from the environment involving stimulation from multiple senses. However, our understanding of the neural mechanisms underlying this process is limited. Crucially, it remains elusive how different sensory representations interact in the formation of perceptual decisions. To answer these questions, we used an active sensing paradigm coupled with neuroimaging, multivariate analysis, and computational modeling to probe how the human brain processes multisensory information to make perceptual judgments. Participants of both sexes actively sensed to discriminate two texture stimuli using visual (V) or haptic (H) information or the two sensory cues together (VH). Crucially, information acquisition was under the participants' control: they could choose where to sample information from and for how long on each trial. To understand the neural underpinnings of this process, we first characterized where and when active sensory experience (movement patterns) is encoded in human brain activity (EEG) in the three sensory conditions. Then, to offer a neurocomputational account of active multisensory decision formation, we used these neural representations of active sensing to inform a drift diffusion model of decision-making behavior. This revealed a multisensory enhancement of the neural representation of active sensing, which led to faster and more accurate multisensory decisions. We then dissected the interactions between the V, H, and VH representations using a novel information-theoretic methodology. Ultimately, we identified a synergistic neural interaction between the two unisensory (V, H) representations over contralateral somatosensory and motor locations that predicted multisensory (VH) decision-making performance.
SIGNIFICANCE STATEMENT: In real-world settings, perceptual decisions are made during active behaviors, such as crossing the road on a rainy night, and include information from different senses (e.g., car lights, slippery ground). Critically, it remains largely unknown how sensory evidence is combined and translated into perceptual decisions in such active scenarios. Here we address this knowledge gap. First, we show that the simultaneous exploration of information across senses (multi-sensing) enhances the neural encoding of active sensing movements. Second, the neural representation of active sensing modulates the evidence available for decision; and importantly, multi-sensing yields faster evidence accumulation. Finally, we identify a cross-modal interaction in the human brain that correlates with multisensory performance, constituting a putative neural mechanism for forging active multisensory perception.
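The drift diffusion step lends itself to a worked example. Below is a toy simulation (not the study's EEG-informed model) in which a larger drift rate for the combined VH condition, an assumption standing in for the reported multisensory enhancement, produces faster and more accurate simulated decisions.

import numpy as np

def simulate_ddm(drift, bound=1.0, noise=1.0, dt=0.001, n_trials=2000, seed=0):
    """Accumulate noisy evidence to +/-bound; return accuracy and mean RT."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x > 0)
    return np.mean(correct), np.mean(rts)

for label, drift in [("V", 0.8), ("H", 0.8), ("VH", 1.4)]:  # assumed drift rates
    acc, rt = simulate_ddm(drift)
    print(f"{label}: accuracy = {acc:.2f}, mean decision time = {rt:.2f} s")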

10.
Multisensory enhancement, as a facilitation phenomenon, is responsible for superior behavioral performance when an individual is responding to cross-modal versus modality-specific stimuli. However, the event-related potential (ERP) counterparts of behavioral multisensory enhancement are not well known. We recorded ERPs and behavioral data from 14 healthy volunteers with three types of target stimuli (modality-specific, bimodal, and trimodal) to examine the spatio-temporal electrophysiological characteristics of multisensory enhancement by comparing behavioral data with ERPs. We found a strong correlation between P3 latency and behavioral performance in terms of reaction time (RT) (R = 0.98, P < 0.001), suggesting that P3 latency constitutes a temporal measure of behavioral multisensory enhancement. In addition, a fast RT and short P3 latency were found when comparing the modality-specific visual target with the modality-specific auditory and somatosensory targets. Our results indicate that behavioral multisensory enhancement can be identified by the latency and source distribution of the P3 component. These findings may advance our understanding of the neuronal mechanisms of multisensory enhancement.
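The reported P3-latency/RT relationship is a plain Pearson correlation across conditions; a minimal sketch with made-up values chosen only to mirror the direction of the effect, not to reproduce the published R:

import numpy as np
from scipy.stats import pearsonr

p3_latency_ms = np.array([310, 330, 345, 360, 382, 400])     # hypothetical
reaction_time_ms = np.array([420, 450, 460, 490, 515, 540])  # hypothetical

r, p = pearsonr(p3_latency_ms, reaction_time_ms)
print(f"R = {r:.2f}, P = {p:.4f}")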

11.
Stimuli from multiple sensory organs can be integrated into a coherent representation through multiple phases of multisensory processing; this phenomenon is called multisensory integration. Multisensory integration can interact with attention. Here, we propose a framework in which attention modulates multisensory processing in both endogenous (goal-driven) and exogenous (stimulus-driven) ways. Moreover, multisensory integration exerts not only bottom-up but also top-down control over attention. Specifically, we propose the following: (1) endogenous attentional selectivity acts on multiple levels of multisensory processing to determine the extent to which simultaneous stimuli from different modalities can be integrated; (2) integrated multisensory events exert top-down control on attentional capture via multisensory search templates that are stored in the brain; (3) integrated multisensory events can capture attention efficiently, even in quite complex circumstances, due to their increased salience compared to unimodal events and can thus improve search accuracy; and (4) within a multisensory object, endogenous attention can spread from one modality to another in an exogenous manner.

12.
Multisensory plasticity enables our senses to dynamically adapt to each other and the external environment, a fundamental operation that our brain performs continuously. We searched for neural correlates of adult multisensory plasticity in the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area (VIP) in 2 male rhesus macaques using a paradigm of supervised calibration. We report little plasticity in neural responses in the relatively low-level multisensory cortical area MSTd. In contrast, neural correlates of plasticity are found in higher-level multisensory VIP, an area with strong decision-related activity. Accordingly, we observed systematic shifts of VIP tuning curves, which were reflected in the choice-related component of the population response. This is the first demonstration of neuronal calibration, together with behavioral calibration, in single sessions. These results lay the foundation for understanding multisensory neural plasticity, applicable broadly to maintaining accuracy for sensorimotor tasks.
SIGNIFICANCE STATEMENT: Multisensory plasticity is a fundamental and continual function of the brain that enables our senses to adapt dynamically to each other and to the external environment. Yet, very little is known about the neuronal mechanisms of multisensory plasticity. In this study, we searched for neural correlates of adult multisensory plasticity in the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area (VIP) using a paradigm of supervised calibration. We found little plasticity in neural responses in the relatively low-level multisensory cortical area MSTd. By contrast, neural correlates of plasticity were found in VIP, a higher-level multisensory area with strong decision-related activity. This is the first demonstration of neuronal calibration, together with behavioral calibration, in single sessions.

13.
Saccadic reaction time to visual targets tends to be faster when stimuli from another modality (in particular, audition and touch) are presented in close temporal or spatial proximity even when subjects are instructed to ignore the accessory input (focused attention task). Multisensory interaction effects measured in neural structures involved in saccade generation (in particular, the superior colliculus) have demonstrated a similar spatio-temporal dependence. Neural network models of multisensory spatial integration have been shown to generate convergence of the visual, auditory, and tactile reference frames and the sensorimotor coordinate transformations necessary for coordinated head and eye movements. However, because these models do not capture the temporal coincidences critical for multisensory integration to occur, they cannot easily predict multisensory effects observed in behavioral data such as saccadic reaction times. This article proposes a quantitative stochastic framework, the time-window-of-integration model, to account for the temporal rules of multisensory integration. Saccadic responses collected from a visual-tactile focused attention task are shown to be consistent with the time-window-of-integration model predictions.
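The time-window-of-integration logic can be sketched in a few lines: facilitation is applied only on trials where the accessory (tactile) peripheral process terminates before the visual one and within a fixed temporal window. All distributions and parameter values below are illustrative assumptions, not the fitted model.

import numpy as np

rng = np.random.default_rng(1)
n = 10000
window_ms = 200.0       # width of the integration window (assumed)
facilitation_ms = 40.0  # second-stage speed-up when integration occurs (assumed)

vis = rng.exponential(scale=100.0, size=n)        # visual peripheral time (ms)
tac = rng.exponential(scale=80.0, size=n) + 50.0  # tactile time at some SOA
second_stage = 250.0                              # mean residual processing

integrated = (tac < vis) & (vis - tac < window_ms)  # tactile wins within window
rt = vis + second_stage - facilitation_ms * integrated
print(f"P(integration) = {integrated.mean():.2f}, mean RT = {rt.mean():.0f} ms")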

14.
We investigated the time-course and scalp topography of multisensory interactions between simultaneous auditory and somatosensory stimulation in humans. Event-related potentials (ERPs) were recorded from 64 scalp electrodes while subjects were presented with auditory-alone stimulation (1000-Hz tones), somatosensory-alone stimulation (median nerve electrical pulses), and simultaneous auditory-somatosensory (AS) combined stimulation. Interaction effects were assessed by comparing the responses to combined stimulation with the algebraic sum of responses to the constituent auditory and somatosensory stimuli when they were presented alone. Spatiotemporal analysis of ERPs and scalp current density (SCD) topographies revealed AS interaction over the central/postcentral scalp which onset at approximately 50 ms post-stimulus presentation. Both the topography and timing of these interactions are consistent with multisensory integration early in the cortical processing hierarchy, in brain regions traditionally held to be unisensory.
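The interaction contrast described here is the standard additive-model test and can be written compactly; the arrays below are random placeholders standing in for trial-averaged ERPs.

import numpy as np

n_electrodes, n_times = 64, 300                  # 64 channels, e.g. 300 samples
erp_a = np.random.randn(n_electrodes, n_times)   # auditory-alone ERP
erp_s = np.random.randn(n_electrodes, n_times)   # somatosensory-alone ERP
erp_as = np.random.randn(n_electrodes, n_times)  # simultaneous AS ERP

interaction = erp_as - (erp_a + erp_s)  # nonzero => nonadditive AS interaction
print("largest absolute interaction:", np.abs(interaction).max())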

15.
Holmes NP. Neuropsychologia. 2007;45(14):3340-3345
Multisensory research is often interpreted according to three rules: the spatial rule, the temporal rule, and the law of inverse effectiveness. The spatial and temporal rules state that multisensory stimuli are integrated when their environmental sources occur at similar locations and times, respectively. The law of inverse effectiveness states that multisensory stimuli are integrated in inverse proportion to the effectiveness of the best unisensory response. Neurally, these rules are grounded in anatomical and physiological mechanisms. By contrast, behavioural evidence often contradicts these rules, and direct links between multisensory neurons and multisensory behaviour remain unclear. This note discusses evidence supporting the law of inverse effectiveness, and reports a simulation of a behavioural experiment recently published in Neuropsychologia. The simulation reveals an alternative, statistical, explanation for the data. I conclude that the law of inverse effectiveness only sometimes applies, and that the choice of statistical analysis can have profound effects on whether the data abide by the law.
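The statistical explanation at issue can be reproduced in a toy simulation, assuming (as is common) that enhancement is computed relative to the noisy measured best-unisensory response: even with a fixed true multisensory gain, that division alone yields an apparent inverse effectiveness.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
true_uni = rng.uniform(5, 50, size=500)                   # true unisensory strength
meas_uni = true_uni + rng.normal(0, 3, size=500)          # noisy unisensory measure
meas_multi = 1.2 * true_uni + rng.normal(0, 3, size=500)  # fixed 20% true gain

enhancement = 100 * (meas_multi - meas_uni) / meas_uni
r, _ = pearsonr(meas_uni, enhancement)
print(f"correlation(unisensory, enhancement) = {r:.2f}")  # reliably negative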

16.
Yurgil KA. Neuropsychologia. 2010;48(10):2952-2958
The neural basis of conscious perception can be studied using stimuli that elicit different percepts on different occasions (multistable perception). Multistable perception allows direct comparisons between brain activity and conscious perception that control for sensory input, and also serves as a model for attentional competition, with the winning perceptual outcome varying across trials. Dichotic listening tasks present multistable stimuli consisting of two different consonant-vowels (CVs, one per ear). For each trial one ear usually conveys the dominant percept. We used EEG to measure neural activity before and after dichotic stimulus presentation to compare activity among left vs. right ear percepts and a control task. Consonant-vowels were perceived more often to the right vs. left ear. Pre-stimulus EEG power in the beta band (16-20 Hz) increased for left compared to right ear percepts and control trials. Event-related potentials after stimulus onset showed smaller P50 amplitudes (∼50 ms latency) for left ear compared to right ear and control trials. Results indicate that neural activity for right ear percepts is comparable to control conditions, while activity for the atypical left ear percept differs before and after stimulus onset. Pre-stimulus EEG changes for left ear percepts may indicate a mechanism of spontaneous fluctuations in cortical networks that bias attentional competition during subsequent sensory processing. The P50 amplitude differences between perceived ears suggest that rapid sensory and/or arousal-related activities contribute to the content of conscious perception, possibly by biasing attentional competition away from the dominant right ear channel.

17.
Integration of information from multiple senses is fundamental to perception and cognition, but when and where this is accomplished in the brain is not well understood. This study examined the timing and topography of cortical auditory-visual interactions using high-density event-related potentials (ERPs) during a simple reaction-time (RT) task. Visual and auditory stimuli were presented alone and simultaneously. ERPs elicited by the auditory and visual stimuli when presented alone were summed ('sum' ERP) and compared to the ERP elicited when they were presented simultaneously ('simultaneous' ERP). Divergence between the 'simultaneous' and 'sum' ERP indicated auditory-visual (AV) neural response interactions. There was a surprisingly early right parieto-occipital AV interaction, consistent with the finding of an earlier study [J. Cogn. Neurosci. 11 (1999) 473]. The timing of onset of this effect (46 ms) was essentially simultaneous with the onset of visual cortical processing, as indexed by the onset of the visual C1 component, which is thought to represent the earliest cortical visual evoked potential. The coincident timing of the early AV interaction and C1 strongly suggests that AV interactions can affect early visual sensory processing. Additional AV interactions were found within the time course of sensory processing (up to 200 ms post stimulus onset). In total, this system of AV effects over the scalp was suggestive of both activity unique to multisensory processing, and the modulation of 'unisensory' activity. RTs to the stimuli when presented simultaneously were significantly faster than when they were presented alone. This RT facilitation could not be accounted for by probability summation, as evidenced by violation of the 'race' model, providing compelling evidence that auditory-visual neural interactions give rise to this RT effect.
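The race-model analysis mentioned at the end is typically Miller's inequality: probability summation requires P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t) at every latency t, so any excess is a violation. A minimal sketch with simulated RTs:

import numpy as np

rng = np.random.default_rng(3)
rt_a = rng.normal(260, 40, 1000)   # auditory-alone RTs (ms, simulated)
rt_v = rng.normal(280, 40, 1000)   # visual-alone RTs
rt_av = rng.normal(215, 35, 1000)  # redundant-target RTs

t_grid = np.arange(100, 500, 5)
cdf = lambda rts: np.array([np.mean(rts <= t) for t in t_grid])
violation = cdf(rt_av) - np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)
print("race model violated" if (violation > 0).any() else "no violation",
      f"(max violation = {violation.max():.3f})")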

18.
The neural mechanisms of auditory distance perception, a function of great biological importance, are poorly understood. Where not overruled by conflicting factors such as echoes or visual input, sound intensity is perceived as conveying distance information. We recorded neuromagnetic responses to amplitude variations over both supratemporal planes, with and without auditory spatial simulations. In the absence of other cues for distance, including those provided by auditory virtual reality, amplitude changes elicited enhanced preattentive responses over the right temporal lobe, indicating hemispheric lateralization of the 'where' pathway in the human. Lesion studies in monkeys and humans have shown that the rostral part of the right superior temporal cortex contributes to spatial awareness in the visual domain. Our data indicate that the distance to a sound source is processed within the adjacent right auditory cortex, thus extending the recent model of a right-hemisphere temporal multisensory matrix that subserves the integration of space-related data across visual and auditory modalities.

19.
Objects on a collision course with an observer produce a specific pattern of optical expansion on the retina known as looming, which in theory exactly specifies the time-to-collision (TTC) of approaching objects. It was recently demonstrated that the affective content of looming stimuli influences perceived TTC, with threatening objects judged as approaching sooner than non-threatening objects. Here, the neural mechanisms by which perceived threat modulates spatiotemporal perception were investigated. Participants judged the TTC of threatening (snakes, spiders) or non-threatening (butterflies, rabbits) stimuli, which expanded in size at a rate indicating one of five TTCs. Visual-evoked potentials (VEPs) and oscillatory neural responses measured with electroencephalography were analysed. The arrival time of threatening stimuli was underestimated compared with non-threatening stimuli, though an interaction suggested that this underestimation was not constant across TTCs. Further, speed of approach and threat each modulated VEPs and oscillatory responses. Speed of approach modulated the parietal N1 and oscillations in the beta band. Threat modulated several VEP components (P1, frontal N1, occipital N1, early posterior negativity and late positive potential) and oscillations in the alpha and high gamma bands. The results for the high gamma band suggest an interaction between these two factors. Previous evidence suggests that looming stimuli activate sensorimotor areas, even in the absence of an intended action. The current results show that threat disrupts the synchronization over the sensorimotor areas that are likely activated by the presentation of a looming stimulus.
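The claim that looming "exactly specifies" TTC refers to the optical variable tau: TTC is approximated by theta / (dtheta/dt), where theta is the object's angular size. A quick numerical check under assumed object size, distance, and approach speed:

import numpy as np

size_m, distance_m, speed_mps = 0.5, 10.0, 2.0    # object approaching at 2 m/s
theta = 2 * np.arctan(size_m / (2 * distance_m))  # current angular size (rad)
dt = 0.001
theta_next = 2 * np.arctan(size_m / (2 * (distance_m - speed_mps * dt)))
theta_dot = (theta_next - theta) / dt             # rate of optical expansion

print(f"tau = {theta / theta_dot:.2f} s, true TTC = {distance_m / speed_mps:.2f} s")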

20.
Despite being of primary importance for fundamental research and clinical studies, the relationship between local neural population activity and scalp electroencephalography (EEG) in humans remains largely unknown. Here we report simultaneous scalp and intracerebral EEG responses to face stimuli in a unique epileptic patient implanted with 27 intracerebral recording contacts in the right occipitotemporal cortex. The patient was shown images of faces appearing at a frequency of 6 Hz, which elicits neural responses at this exact frequency. Quantifying responses at this frequency allowed us to objectively relate the neural activity measured inside and outside the brain. The patient exhibited typical 6 Hz responses on the scalp at the right occipitotemporal sites. Moreover, there was a clear spatial correspondence between these scalp responses and intracerebral signals in the right lateral inferior occipital gyrus, both in amplitude and in phase. Nevertheless, the signal measured on the scalp and inside the brain at nearby locations showed a 10-fold difference in amplitude due to the electrical insulation of the head. To further quantify the relationship between the scalp and intracerebral recordings, we used an approach correlating time-varying signals at the stimulation frequency across scalp and intracerebral channels. This analysis revealed a correspondence between the scalp and intracerebral recordings that was focused and right-lateralized for the face stimulation but more broadly distributed in various control situations. These results demonstrate the value of a frequency-tagging approach in characterizing the electrical propagation from brain sources to scalp EEG sensors and in identifying the cortical sources of brain functions from these recordings.
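A frequency-tagging quantification of the kind used here can be sketched as follows, assuming a plain FFT amplitude readout at the 6 Hz bin with neighboring bins as a noise estimate; the signal below is synthetic.

import numpy as np

fs, duration = 512.0, 60.0                       # sampling rate (assumed), s
t = np.arange(0, duration, 1 / fs)
eeg = 0.8 * np.sin(2 * np.pi * 6.0 * t) + np.random.randn(t.size)  # fake EEG

spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2  # single-sided amplitude
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
bin_6hz = np.argmin(np.abs(freqs - 6.0))
neighbors = np.r_[spectrum[bin_6hz - 12:bin_6hz - 1],
                  spectrum[bin_6hz + 2:bin_6hz + 13]]
snr = spectrum[bin_6hz] / neighbors.mean()
print(f"amplitude at 6 Hz = {spectrum[bin_6hz]:.3f}, SNR = {snr:.1f}")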
