Similar Articles
20 similar articles found.
1.
Foxe JJ, Schroeder CE. Neuroreport. 2005;16(5):419-423.
The prevailing hierarchical model of sensory processing in the brain holds that different modalities of sensory information emanating from a single object are analyzed extensively during passage through their respective unisensory processing streams before they are combined in higher-order 'multisensory' regions of the cortex. Under this view, multisensory interactions that have been found at early, putatively 'unisensory' cortical processing stages during hemodynamic imaging studies have been assumed to reflect feedback modulations that occur subsequent to multisensory processing in the higher-order multisensory areas. In this paper, we consider findings that challenge an exclusively feedback interpretation of early multisensory integration effects. First, high-density electrical mapping studies in humans have shown that multisensory convergence and integration effects can occur so early in the time course of sensory processing that purely feedback mediation becomes extremely unlikely. Second, direct neural recordings in monkeys show that, in some cases, convergent inputs at early cortical stages have physiological profiles characteristic of feedforward rather than feedback inputs. Third, damage to higher-order integrative regions in humans often spares the ability to integrate across sensory modalities. Finally, recent anatomic tracer studies have reported direct anatomical connections between primary visual and auditory cortex. These findings make it clear that multisensory convergence at early stages of cortical processing results from feedforward as well as feedback and lateral connections, thus using the full range of anatomical connections available in brain circuitry.

2.
For children with autism spectrum disorders (ASDs), social robots are increasingly utilized as therapeutic tools in order to enhance social skills and communication. Robots have been shown to generate a number of social and behavioral benefits in children with ASD including heightened engagement, increased attention, and decreased social anxiety. Although social robots appear to be effective social reinforcement tools in assistive therapies, the perceptual mechanism underlying these benefits remains unknown. To date, social robot studies have primarily relied on expertise in fields such as engineering and clinical psychology, with measures of social robot efficacy principally limited to qualitative observational assessments of children's interactions with robots. In this review, we examine a range of socially interactive robots that currently have the most widespread use as well as the utility of these robots and their therapeutic effects. In addition, given that social interactions rely on audiovisual communication, we discuss how enhanced sensory processing and integration of robotic social cues may underlie the perceptual and behavioral benefits that social robots confer. Although overall multisensory processing (including audiovisual integration) is impaired in individuals with ASD, social robot interactions may provide therapeutic benefits by allowing audiovisual social cues to be experienced through a simplified version of a human interaction. By applying systems neuroscience tools to identify, analyze, and extend the multisensory perceptual substrates that may underlie the therapeutic benefits of social robots, future studies have the potential to strengthen the clinical utility of social robots for individuals with ASD.

3.
Picture yourself on a crowded sidewalk with people milling about. The acoustic and visual signals generated by the crowd provide you with complementary information about their locations and motion, which needs to be integrated. It is not well understood how such inputs from different sensory channels are combined into unified perceptual states. Coherence of oscillatory neural signals might be an essential mechanism supporting multisensory perception. Evidence is now emerging which indicates that coupled oscillatory activity might serve to link neural signals across uni- and multisensory regions and to express the degree of crossmodal matching of stimulus-related information. These results argue for a new view on multisensory processing which considers the dynamic interplay of neural populations as a key to crossmodal integration.

4.
Multiple sensory afferents to ferret pseudosylvian sulcal cortex
Ramsay AM, Meredith MA. Neuroreport. 2004;15(3):461-465.
While the ferret cerebral cortex is being used with increasing frequency in studies of neural processing and development, little is known regarding the organization of its associational sensory and multisensory regions. Therefore, the present investigation used neuroanatomical methods to identify non-primary visual and somatosensory representations and their potential for multisensory convergence. Tracer injections made into V1 or SI cortex labeled axon terminals within the pseudosylvian sulcal cortex (PSSC). These inputs were distributed according to modality, with visual inputs identified in the lateral aspects of the posterior dorsal bank, and somatosensory inputs found anterior along the dorsal bank, fundus and ventral bank. Somatosensory inputs showed a topographic arrangement, with inputs representing face found more anteriorly than those representing trunk regions. Overlap between these different sensory projections occurred posteriorly in the PSSC and may represent a zone of multisensory convergence. These data are consistent with the presence of associational visual, somatosensory, and multisensory areas within the PSSC.

5.
Incoming signals from different sensory modalities are initially processed in separate brain regions. But because these different signals can arise from common events or objects in the external world, integration between them can be useful. Such integration is subject to spatial and temporal constraints, presumably because a common source is more likely for information arising from around the same place and time. This review focuses on recent neuroimaging data concerning spatial aspects of multisensory integration in the human brain. These findings indicate not only that multisensory integration involves anatomical convergence from sensory-specific ('unimodal') cortices into multisensory ('heteromodal') brain areas, but also that multisensory spatial interactions can affect even so-called 'unimodal' brain regions. Such findings call for a revision of traditional assumptions about multisensory processing in the brain.

6.
The anatomical organization of the brain is such that incoming signals from different sensory modalities are initially processed in anatomically separate regions of the cortex. When these signals originate from a single event or object in the external world, it is essential that the inputs are integrated to form a coherent representation of the multisensory event. This review discusses recent data indicating that the integration of multisensory signals relies not only on anatomical convergence from sensory-specific cortices to multisensory brain areas but also on reciprocal influences between cortical regions that are traditionally considered as sensory-specific. These findings highlight integration mechanisms that go beyond traditional models based on a hierarchical convergence of sensory processing.

7.
Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging in order to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are not only related to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right hemispheric regions known to be involved in multisensory processing are crucial for detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate crossmodal object feature binding based on temporal coincidence.

8.
The transformation of sensory signals as they pass through cortical circuits has been revealed almost exclusively through studies of the primary sensory cortices, for which principles of laminar organization, local connectivity, and parallel processing have been elucidated. In contrast, almost nothing is known about the circuitry or laminar features of multisensory processing in higher order, multisensory cortex. Therefore, using the ferret higher order multisensory rostral posterior parietal (PPr) cortex, the present investigation employed a combination of multichannel recording and neuroanatomical techniques to elucidate the laminar basis of multisensory cortical processing. The proportion of multisensory neurons, the share of neurons showing multisensory integration, and the magnitude of multisensory integration were all found to differ by layer in a way that matched the functional or connectional characteristics of the PPr. Specifically, the supragranular layers (L2/3) demonstrated among the highest proportions of multisensory neurons and the highest incidence of multisensory response enhancement, while also receiving the highest levels of extrinsic inputs, exhibiting the highest dendritic spine densities, and providing a major source of local connectivity. In contrast, layer 6 showed the highest proportion of unisensory neurons while receiving the fewest external and local projections and exhibiting the lowest dendritic spine densities. Coupled with a lack of input from principal thalamic nuclei and a minimal layer 4, these observations indicate that this higher level multisensory cortex shows functional and organizational modifications from the well-known patterns identified for primary sensory cortical regions. J. Comp. Neurol. 521:1867-1890, 2013.

9.
Now that examples of multisensory neurons have been observed across the neocortex, some confusion has arisen about the features that actually designate a region as "multisensory." While the documentation of multisensory effects within many different cortical areas is clear, often little information is available about their proportions or net functional effects. To assess the compositional and functional features that contribute to the multisensory nature of a region, the present investigation used multichannel neuronal recording and tract tracing methods to examine the ferret temporal region: the lateral rostral suprasylvian sulcal area. Here, auditory-tactile multisensory neurons were predominant and constituted the majority of neurons across all cortical layers whose responses dominated the net spiking activity of the area. These results were then compared with a literature review of cortical multisensory data and were found to closely resemble multisensory features of other, higher-order sensory areas. Collectively, these observations argue that multisensory processing presents itself in hierarchical and area-specific ways, from regions that exhibit few multisensory features to those whose composition and processes are dominated by multisensory activity. It seems logical that the former exhibit some multisensory features (among many others), while the latter are legitimately designated as "multisensory."

10.
Most perceptual decisions rely on the active acquisition of evidence from the environment involving stimulation from multiple senses. However, our understanding of the neural mechanisms underlying this process is limited. Crucially, it remains elusive how different sensory representations interact in the formation of perceptual decisions. To answer these questions, we used an active sensing paradigm coupled with neuroimaging, multivariate analysis, and computational modeling to probe how the human brain processes multisensory information to make perceptual judgments. Participants of both sexes actively sensed to discriminate two texture stimuli using visual (V) or haptic (H) information or the two sensory cues together (VH). Crucially, information acquisition was under the participants' control, who could choose where to sample information from and for how long on each trial. To understand the neural underpinnings of this process, we first characterized where and when active sensory experience (movement patterns) is encoded in human brain activity (EEG) in the three sensory conditions. Then, to offer a neurocomputational account of active multisensory decision formation, we used these neural representations of active sensing to inform a drift diffusion model of decision-making behavior. This revealed a multisensory enhancement of the neural representation of active sensing, which led to faster and more accurate multisensory decisions. We then dissected the interactions between the V, H, and VH representations using a novel information-theoretic methodology. Ultimately, we identified a synergistic neural interaction between the two unisensory (V, H) representations over contralateral somatosensory and motor locations that predicted multisensory (VH) decision-making performance.

SIGNIFICANCE STATEMENT: In real-world settings, perceptual decisions are made during active behaviors, such as crossing the road on a rainy night, and include information from different senses (e.g., car lights, slippery ground). Critically, it remains largely unknown how sensory evidence is combined and translated into perceptual decisions in such active scenarios. Here we address this knowledge gap. First, we show that the simultaneous exploration of information across senses (multi-sensing) enhances the neural encoding of active sensing movements. Second, the neural representation of active sensing modulates the evidence available for decision; and importantly, multi-sensing yields faster evidence accumulation. Finally, we identify a cross-modal interaction in the human brain that correlates with multisensory performance, constituting a putative neural mechanism for forging active multisensory perception.
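The drift diffusion model invoked in this abstract treats a decision as noisy evidence accumulating toward a bound, with multisensory input raising the drift rate and thereby yielding faster, more accurate choices. Below is a minimal Python sketch of that idea; the drift rates and all other parameter values are illustrative assumptions, not values taken from the study.

    import numpy as np

    def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.005, non_decision=0.3, rng=None):
        # One trial of a drift diffusion model: evidence starts at 0 and
        # accumulates until it hits +threshold (correct) or -threshold (error).
        rng = rng or np.random.default_rng()
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return t + non_decision, x > 0

    rng = np.random.default_rng(0)
    # Hypothetical drift rates: the multisensory (VH) condition is assumed to
    # carry more evidence per unit time than either unisensory condition.
    for label, drift in [("V", 0.8), ("H", 0.9), ("VH", 1.5)]:
        trials = [simulate_ddm(drift, rng=rng) for _ in range(500)]
        rts = [rt for rt, _ in trials]
        acc = np.mean([ok for _, ok in trials])
        print(f"{label}: mean RT = {np.mean(rts):.3f} s, accuracy = {acc:.3f}")

Run as written, the VH condition produces shorter reaction times and higher accuracy, mirroring the qualitative pattern the study reports.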

11.
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder.
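The facilitation test described here rests on the additive model: if auditory and visual inputs were processed independently, the audio-visual (AV) evoked response should equal the sum of the unisensory A and V responses, so any deviation indexes a multisensory interaction. A minimal sketch of that comparison on simulated evoked-potential arrays; the data and effect size are hypothetical.

    import numpy as np

    # Hypothetical evoked potentials: trials x timepoints, one array per condition.
    rng = np.random.default_rng(1)
    n_trials, n_times = 200, 500
    erp_a  = rng.normal(0.0, 1.0, (n_trials, n_times))   # auditory-only
    erp_v  = rng.normal(0.0, 1.0, (n_trials, n_times))   # visual-only
    erp_av = rng.normal(0.2, 1.0, (n_trials, n_times))   # audio-visual

    # Additive model: under independent unisensory processing, AV ~ A + V.
    sum_model = erp_a.mean(axis=0) + erp_v.mean(axis=0)
    av_mean = erp_av.mean(axis=0)

    # Multisensory interaction: deviation of AV from the summed unisensory ERPs.
    interaction = av_mean - sum_model
    print("peak |AV - (A+V)| interaction:", np.abs(interaction).max())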

12.
At any given moment our sensory systems receive multiple, often rhythmic, inputs from the environment. Processing of temporally structured events in one sensory modality can guide both behavioral and neural processing of events in other sensory modalities, but whether such cross-modal guidance actually occurs remains unclear. Here, we used human electroencephalography (EEG) to test the cross-modal influences of a continuous auditory frequency-modulated (FM) sound on visual perception and visual cortical activity. We report systematic fluctuations in perceptual discrimination of brief visual stimuli in line with the phase of the FM-sound. We further show that this rhythmic modulation in visual perception is related to an accompanying rhythmic modulation of neural activity recorded over visual areas. Importantly, in our task, perceptual and neural visual modulations occurred without any abrupt and salient onsets in the energy of the auditory stimulation and without any rhythmic structure in the visual stimulus. As such, the results provide a critical validation of the existence and functional role of cross-modal entrainment and demonstrate its utility for organizing the perception of multisensory stimulation in the natural environment.

SIGNIFICANCE STATEMENT: Our sensory environment is filled with rhythmic structures that are often multi-sensory in nature. Here, we show that the alignment of neural activity to the phase of an auditory frequency-modulated (FM) sound has cross-modal consequences for vision: yielding systematic fluctuations in perceptual discrimination of brief visual stimuli that are mediated by accompanying rhythmic modulation of neural activity recorded over visual areas. These cross-modal effects on visual neural activity and perception occurred without any abrupt and salient onsets in the energy of the auditory stimulation and without any rhythmic structure in the visual stimulus. The current work shows that continuous auditory fluctuations in the natural environment can provide a pacing signal for neural activity and perception across the senses.
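One common way to quantify the phase-dependent fluctuation reported here is to bin trials by the FM-sound phase at target onset and measure discrimination accuracy per bin. The sketch below is a hypothetical illustration of that analysis, not the authors' pipeline; the data, bin count, and built-in modulation strength are all assumptions.

    import numpy as np

    # Hypothetical single-trial data: FM-sound phase at visual-target onset
    # (radians) and whether discrimination was correct on each trial.
    rng = np.random.default_rng(2)
    phase = rng.uniform(-np.pi, np.pi, 1000)
    correct = rng.random(1000) < (0.7 + 0.1 * np.cos(phase))  # assumed modulation

    # Bin trials by phase and compute accuracy per bin.
    bins = np.linspace(-np.pi, np.pi, 9)
    idx = np.digitize(phase, bins) - 1
    acc = np.array([correct[idx == b].mean() for b in range(len(bins) - 1)])

    # Strength of rhythmic modulation: amplitude of the best-fitting cosine,
    # estimated from the first Fourier coefficient across bin centers.
    centers = (bins[:-1] + bins[1:]) / 2
    amp = 2 * np.abs(np.mean(acc * np.exp(1j * centers)))
    print("phase-binned accuracy:", np.round(acc, 3))
    print("modulation amplitude:", round(amp, 3))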

13.
Older adults are known to gain more than younger adults from the simultaneous presentation of semantically congruent sensory stimuli. Although these findings are quite exciting, they may not solely be due to age-related differences in multisensory processing. Rather, enhanced integration may be explained by alterations associated with general cognitive slowing. This study utilized a task that eliminated most high-order cognitive processing. As such, no significant differences in unisensory response times were seen; however, older adults actually showed faster multisensory responses than younger adults. Older adults continued to show significantly greater multisensory enhancement than younger adults. Data support the conclusion that differences in multisensory processing for older adults cannot be explained solely by the effects of general cognitive slowing.
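A standard index for results like these expresses multisensory gain as the percent speeding of multisensory responses relative to the faster unisensory condition. The sketch below applies that formula to made-up reaction times; the numbers are hypothetical and this particular study may have quantified enhancement differently.

    import numpy as np

    def multisensory_enhancement(rt_a, rt_v, rt_av):
        # Percent speeding of multisensory responses relative to the faster
        # unisensory condition: 100 * (min(A, V) - AV) / min(A, V).
        fastest_uni = min(np.mean(rt_a), np.mean(rt_v))
        return 100.0 * (fastest_uni - np.mean(rt_av)) / fastest_uni

    # Hypothetical mean reaction times (seconds) for younger vs older adults.
    young = multisensory_enhancement([0.42, 0.45], [0.44, 0.43], [0.40, 0.41])
    older = multisensory_enhancement([0.43, 0.44], [0.44, 0.45], [0.37, 0.38])
    print(f"enhancement: younger {young:.1f}%, older {older:.1f}%")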

14.
To make accurate perceptual estimates, observers must take the reliability of sensory information into account. Despite many behavioural studies showing that subjects weight individual sensory cues in proportion to their reliabilities, it is still unclear when during a trial neuronal responses are modulated by the reliability of sensory information or when they reflect the perceptual weights attributed to each sensory input. We investigated these questions using a combination of psychophysics, EEG-based neuroimaging and single-trial decoding. Our results show that the weighted integration of sensory information in the brain is a dynamic process; effects of sensory reliability on task-relevant EEG components were evident 84 ms after stimulus onset, while neural correlates of perceptual weights emerged 120 ms after stimulus onset. These neural processes had different underlying sources, arising from sensory and parietal regions, respectively. Together these results reveal the temporal dynamics of perceptual and neural audio-visual integration and support the notion of temporally early and functionally specific multisensory processes in the brain.
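The reliability-proportional weighting mentioned here is usually formalized as maximum-likelihood cue combination: each cue is weighted by its inverse variance, and the fused estimate has lower variance than any single cue. A minimal sketch, with hypothetical audio-visual numbers:

    import numpy as np

    def fuse_cues(means, variances):
        # Maximum-likelihood (reliability-weighted) cue combination:
        # each cue is weighted by its inverse variance (its reliability).
        means = np.asarray(means, float)
        rel = 1.0 / np.asarray(variances, float)   # reliability = 1 / sigma^2
        weights = rel / rel.sum()
        fused_mean = np.dot(weights, means)
        fused_var = 1.0 / rel.sum()                # lower than any single cue's
        return fused_mean, fused_var, weights

    # Hypothetical audio-visual example: vision more reliable than audition.
    m, v, w = fuse_cues(means=[10.0, 14.0], variances=[1.0, 4.0])
    print(f"fused estimate {m:.2f}, variance {v:.2f}, weights {np.round(w, 2)}")

With these inputs the fused estimate (10.8) sits closer to the more reliable visual cue, which receives 80% of the weight.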

15.
Two fundamental requirements for multisensory integration are convergence of unisensory (e.g. visual and auditory) inputs and temporal alignment of the neural responses to convergent inputs. We investigated the anatomic mechanisms of multisensory convergence by examining three areas in which convergence occurs: posterior auditory association cortex, superior temporal polysensory area (STP) and ventral intraparietal sulcus area (VIP). The first of these was recently shown to be a site of multisensory convergence and the latter two are better known as 'classic' multisensory regions. In each case, we focused on defining the laminar profile of response to the unisensory inputs. This information is useful because two major types of connection, feedforward and feedback, have characteristic differences in laminar termination patterns, which manifest physiologically. In the same multisensory convergence areas we also examined the timing of the unisensory inputs using the same standardized stimuli across all recordings. Our findings indicate that: (1) like somatosensory input [J. Neurophysiol., 85 (2001) 1322], visual input is available at very early stages of auditory processing, (2) convergence occurs through feedback, as well as feedforward anatomical projections and (3) input timing may be an asset, as well as a constraint in multisensory processing.

16.
Recently, experimental and theoretical research has focused on the brain's abilities to extract information from a noisy sensory environment and how cross-modal inputs are processed to solve the causal inference problem to provide the best estimate of external events. Despite the empirical evidence suggesting that the nervous system uses a statistically optimal and probabilistic approach in addressing these problems, little is known about the brain's architecture needed to implement these computations. The aim of this work was to realize a mathematical model, based on physiologically plausible hypotheses, to analyze the neural mechanisms underlying multisensory perception and causal inference. The model consists of three layers topologically organized: two encode auditory and visual stimuli, separately, and are reciprocally connected via excitatory synapses and send excitatory connections to the third downstream layer. This synaptic organization realizes two mechanisms of cross-modal interactions: the first is responsible for the sensory representation of the external stimuli, while the second solves the causal inference problem. We tested the network by comparing its results to behavioral data reported in the literature. Among others, the network can account for the ventriloquism illusion, the pattern of sensory bias and the percept of unity as a function of the spatial auditory-visual distance, and the dependence of the auditory error on the causal inference. Finally, simulations results are consistent with probability matching as the perceptual strategy used in auditory-visual spatial localization tasks, agreeing with the behavioral data. The model makes untested predictions that can be investigated in future behavioral experiments.
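The causal inference problem this network is built to solve has a standard Bayesian formulation (Körding et al., 2007): compare the likelihood of the cue pair under one common cause versus two independent causes. The sketch below implements that generic computation, not the authors' three-layer network; the noise and prior parameters are illustrative assumptions.

    import numpy as np

    def p_common(x_a, x_v, sig_a, sig_v, sig_p, prior_common=0.5):
        # Posterior probability of a common cause for an audio-visual cue pair,
        # in the standard Bayesian causal-inference formulation.
        va, vv, vp = sig_a**2, sig_v**2, sig_p**2
        # Likelihood under one common cause C=1 (source location integrated out).
        denom1 = va * vv + va * vp + vv * vp
        like1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va) / denom1) \
                / (2 * np.pi * np.sqrt(denom1))
        # Likelihood under two independent causes C=2.
        like2 = np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
                / (2 * np.pi * np.sqrt((va + vp) * (vv + vp)))
        return like1 * prior_common / (like1 * prior_common + like2 * (1 - prior_common))

    # Hypothetical ventriloquism-style example: the probability of a common
    # cause falls as audio-visual spatial disparity (degrees) grows.
    for disparity in [0.0, 5.0, 15.0]:
        print(disparity, round(p_common(0.0, disparity, sig_a=4.0, sig_v=1.0, sig_p=20.0), 3))

Under probability matching, the strategy the simulations favor, the observer reports a single source with probability equal to this posterior rather than always choosing the more probable hypothesis.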

17.
How human beings integrate information from external sources and internal cognition to produce a coherent experience is still not well understood. During the past decades, anatomical, neurophysiological and neuroimaging research in multimodal integration have stood out in the effort to understand the perceptual binding properties of the brain. Areas in the human lateral occipitotemporal, prefrontal, and posterior parietal cortices have been associated with sensory multimodal processing. Even though this rather patchy organization of brain regions gives us a glimpse of perceptual convergence, how the flow of information is articulated from modality-related systems to the more parallel cognitive processing systems remains elusive. Using a method called stepwise functional connectivity analysis, the present study analyzes the functional connectome and transitions from primary sensory cortices to higher-order brain systems. We identify the large-scale multimodal integration network and essential connectivity axes for perceptual integration in the human brain.
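Roughly, stepwise functional connectivity analysis counts, for each brain region, the connectivity paths of increasing length that link it back to a set of seed regions, so that later steps trace the transition from sensory cortices into higher-order systems. Below is a toy sketch of that counting on a random graph; the graph, seed choice, and step count are hypothetical, and this is not the published implementation.

    import numpy as np

    def stepwise_degree(adj, seeds, n_steps):
        # Stepwise connectivity: for each node, count walks of length 1..n_steps
        # that start in the seed set, via powers of the binarized adjacency matrix.
        adj = (np.asarray(adj) > 0).astype(float)
        np.fill_diagonal(adj, 0)                 # ignore self-connections
        seed_vec = np.zeros(adj.shape[0])
        seed_vec[list(seeds)] = 1.0
        degrees = []
        step = seed_vec.copy()
        for _ in range(n_steps):
            step = step @ adj                    # walks extended by one edge
            degrees.append(step.copy())
        return np.array(degrees)                 # n_steps x n_nodes

    # Hypothetical toy connectome: nodes 0-1 play the role of sensory seeds.
    rng = np.random.default_rng(3)
    a = rng.random((8, 8)) > 0.6
    adj = a | a.T                                # symmetrize: undirected graph
    maps = stepwise_degree(adj, seeds=[0, 1], n_steps=3)
    print(maps)  # row k: walk counts from the seeds at step k+1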

18.
The synchronous occurrence of the unisensory components of a multisensory stimulus contributes to their successful merging into a coherent perceptual representation. Oscillatory gamma-band responses (GBRs, 30-80 Hz) have been linked to feature integration mechanisms and to multisensory processing, suggesting they may also be sensitive to the temporal alignment of multisensory stimulus components. Here we examined the effects on early oscillatory GBR brain activity of varying the precision of the temporal synchrony of the unisensory components of an audio-visual stimulus. Audio-visual stimuli were presented with stimulus onset asynchronies ranging from -125 to +125 ms. Randomized streams of auditory (A), visual (V), and audio-visual (AV) stimuli were presented centrally while subjects attended to either the auditory or visual modality to detect occasional targets. GBRs to auditory and visual components of multisensory AV stimuli were extracted for five subranges of asynchrony (e.g., A preceded by V by 100+/-25 ms, by 50+/-25 ms, etc.) and compared with GBRs to unisensory control stimuli. Robust multisensory interactions were observed in the early GBRs when the auditory and visual stimuli were presented with the closest synchrony. These effects were found over medial-frontal brain areas after 30-80 ms and over occipital brain areas after 60-120 ms. A second integration effect, possibly reflecting the perceptual separation of the two sensory inputs, was found over occipital areas when auditory inputs preceded visual by 100+/-25 ms. No significant interactions were observed for the other subranges of asynchrony. These results show that the precision of temporal synchrony can have an impact on early cross-modal interactions in human cortex.

19.
Cannabinoids induce a host of perceptual alterations and cognitive deficits in humans. However, the neural correlates of these deficits have remained elusive. The current study examined the acute, dose-related effects of delta-9-tetrahydrocannabinol (Δ9-THC) on psychophysiological indices of information processing in humans. Healthy subjects (n=26) completed three test days during which they received intravenous Δ9-THC (placebo, 0.015 and 0.03 mg/kg) in a within-subject, double-blind, randomized, cross-over, and counterbalanced design. Psychophysiological data (electroencephalography) were collected before and after drug administration while subjects engaged in an event-related potential (ERP) task known to be a valid index of attention and cognition (a three-stimulus auditory 'oddball' P300 task). Δ9-THC dose-dependently reduced the amplitude of both the target P300b and the novelty P300a. Δ9-THC did not have any effect on the latency of either the P300a or P300b, or on early sensory-evoked ERP components preceding the P300 (the N100). Concomitantly, Δ9-THC induced psychotomimetic effects, perceptual alterations, and subjective 'high' in a dose-dependent manner. Δ9-THC-induced reductions in P300b amplitude correlated with Δ9-THC-induced perceptual alterations. Lastly, exploratory analyses examining cannabis use status showed that whereas recent cannabis users had blunted behavioral effects to Δ9-THC, there were no dose-related effects of Δ9-THC on P300a/b amplitude between cannabis-free and recent cannabis users. Overall, these data suggest that at doses that produce behavioral and subjective effects consistent with the known properties of cannabis, Δ9-THC reduced P300a and P300b amplitudes without altering the latency of these ERPs. Cannabinoid agonists may therefore disrupt cortical processes responsible for context updating and the automatic orientation of attention, while leaving processing speed and earlier sensory ERP components intact. Collectively, the findings suggest that CB1R systems modulate top-down and bottom-up processing.

20.
Acoustic speech is easier to detect in noise when the talker can be seen. This finding could be explained by integration of multisensory inputs or refinement of auditory processing from visual guidance. In two experiments, we studied two-interval forced-choice detection of an auditory 'ba' in acoustic noise, paired with various visual and tactile stimuli that were identically presented in the two observation intervals. Detection thresholds were reduced under the multisensory conditions vs. the auditory-only condition, even though the visual and/or tactile stimuli alone could not inform the correct response. Results were analysed relative to an ideal observer for which intrinsic (internal) noise and efficiency were independent contributors to detection sensitivity. Across experiments, intrinsic noise was unaffected by the multisensory stimuli, arguing against the merging (integrating) of multisensory inputs into a unitary speech signal, but sampling efficiency was increased to varying degrees, supporting refinement of knowledge about the auditory stimulus. The steepness of the psychometric functions decreased with increasing sampling efficiency, suggesting that the 'task-irrelevant' visual and tactile stimuli reduced uncertainty about the acoustic signal. Visible speech was not superior for enhancing auditory speech detection. Our results reject multisensory neuronal integration and speech-specific neural processing as explanations for the enhanced auditory speech detection under noisy conditions. Instead, they support a more rudimentary form of multisensory interaction: the otherwise task-irrelevant sensory systems inform the auditory system about when to listen.
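The ideal-observer decomposition used here is commonly written in equivalent-input-noise form, where squared sensitivity equals sampling efficiency times signal energy divided by the sum of external and internal noise; on that account, the multisensory cues raise efficiency while leaving internal noise untouched. A numerical sketch under that assumed formulation, with hypothetical values throughout:

    import numpy as np

    def dprime(signal_energy, ext_noise, int_noise, efficiency):
        # Equivalent-input-noise model of detection: sensitivity is set by
        # sampling efficiency and by internal noise added to the external noise.
        return np.sqrt(efficiency * signal_energy / (ext_noise + int_noise))

    # Hypothetical illustration of the paper's conclusion: the added visual or
    # tactile cue raises efficiency (better knowledge of when to listen)
    # while internal (intrinsic) noise stays unchanged.
    e, n_ext, n_int = 100.0, 10.0, 5.0
    print("auditory-only  d' =", round(dprime(e, n_ext, n_int, efficiency=0.10), 2))
    print("audio + visual d' =", round(dprime(e, n_ext, n_int, efficiency=0.18), 2))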
