Similar Literature
12 similar documents found (search time: 0 ms)
1.
It is commonly believed that the ability to integrate information from different senses develops according to associative learning principles as neurons acquire experience with co‐active cross‐modal inputs. However, previous studies have not distinguished between requirements for co‐activation versus co‐variation. To determine whether cross‐modal co‐activation is sufficient for this purpose in visual–auditory superior colliculus (SC) neurons, animals were reared in constant omnidirectional noise. By masking most spatiotemporally discrete auditory experiences, the noise created a sensory landscape that decoupled stimulus co‐activation and co‐variance. Although a near‐normal complement of visual–auditory SC neurons developed, the vast majority could not engage in multisensory integration, revealing that visual–auditory co‐activation was insufficient for this purpose. That experience with co‐varying stimuli is required for multisensory maturation is consistent with the role of the SC in detecting and locating biologically significant events, but it also seems likely that this is a general requirement for multisensory maturation throughout the brain.
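The abstract does not state which metric was used, but whether an SC neuron "engages in multisensory integration" is conventionally quantified with a multisensory enhancement index that compares the cross‐modal response to the best unisensory response. A minimal sketch of that standard index (spike counts and threshold interpretation are illustrative assumptions, not data from the study):

```python
# Multisensory enhancement (ME) index commonly used for superior colliculus
# neurons: percent change of the cross-modal response relative to the best
# unisensory response. Values near zero indicate a neuron that responds to
# both modalities but does not integrate them.

def enhancement_index(visual_auditory, visual, auditory):
    """Percent multisensory enhancement relative to the best unisensory response."""
    best_unisensory = max(visual, auditory)
    return 100.0 * (visual_auditory - best_unisensory) / best_unisensory

# Hypothetical mean spike counts per trial for one SC neuron.
print(enhancement_index(visual_auditory=12.0, visual=5.0, auditory=4.0))  # ~140% -> integrating
print(enhancement_index(visual_auditory=5.5,  visual=5.0, auditory=4.0))  # ~10%  -> non-integrating
```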

2.
To make accurate perceptual estimates, observers must take the reliability of sensory information into account. Despite many behavioural studies showing that subjects weight individual sensory cues in proportion to their reliabilities, it is still unclear when during a trial neuronal responses are modulated by the reliability of sensory information or when they reflect the perceptual weights attributed to each sensory input. We investigated these questions using a combination of psychophysics, EEG‐based neuroimaging and single‐trial decoding. Our results show that the weighted integration of sensory information in the brain is a dynamic process; effects of sensory reliability on task‐relevant EEG components were evident 84 ms after stimulus onset, while neural correlates of perceptual weights emerged 120 ms after stimulus onset. These neural processes had different underlying sources, arising from sensory and parietal regions, respectively. Together these results reveal the temporal dynamics of perceptual and neural audio‐visual integration and support the notion of temporally early and functionally specific multisensory processes in the brain.
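The behavioural model behind "weighting cues in proportion to their reliabilities" is typically the maximum‐likelihood cue‐combination scheme, in which each cue's weight is its inverse variance normalised across cues. A minimal sketch of that standard model (not the authors' analysis code; the numbers are invented):

```python
import numpy as np

def reliability_weighted_estimate(means, variances):
    """Maximum-likelihood cue combination: weights proportional to 1/variance."""
    means = np.asarray(means, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    fused_mean = np.dot(weights, means)
    fused_variance = 1.0 / reliabilities.sum()   # never larger than the most reliable single cue
    return fused_mean, fused_variance, weights

# Hypothetical audio-visual localisation: the visual cue is more reliable than the auditory one.
mean, var, w = reliability_weighted_estimate(means=[2.0, 6.0], variances=[1.0, 4.0])
print(mean, var, w)   # estimate pulled toward the visual cue (weights 0.8 vs 0.2)
```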

3.
Recent work using electroencephalography has applied stimulus reconstruction techniques to identify the attended speaker in a cocktail party environment. The success of these approaches has been primarily based on the ability to detect cortical tracking of the acoustic envelope at the scalp level. However, most studies have ignored the effects of visual input, which is almost always present in naturalistic scenarios. In this study, we investigated the effects of visual input on envelope‐based cocktail party decoding in two multisensory cocktail party situations: (a) Congruent AV—facing the attended speaker while ignoring another speaker represented by the audio‐only stream and (b) Incongruent AV (eavesdropping)—attending the audio‐only speaker while looking at the unattended speaker. We trained and tested decoders for each condition separately and found that we can successfully decode attention to congruent audiovisual speech and can also decode attention when listeners were eavesdropping, i.e., looking at the face of the unattended talker. In addition to this, we found alpha power to be a reliable measure of attention to the visual speech. Using parieto‐occipital alpha power, we found that we can distinguish whether subjects are attending or ignoring the speaker's face. Considering the practical applications of these methods, we demonstrate that with only six near‐ear electrodes we can successfully determine the attended speech. This work extends the current framework for decoding attention to speech to more naturalistic scenarios, and in doing so provides additional neural measures which may be incorporated to improve decoding accuracy.
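Envelope‐based "stimulus reconstruction" decoders of this kind are usually backward models: a regularised linear mapping from time‐lagged EEG to the speech envelope, with attention assigned to whichever speaker's envelope correlates best with the reconstruction. A minimal ridge‐regression sketch under those assumptions (synthetic data, trained and tested on the same samples purely for brevity; this is not the authors' pipeline):

```python
import numpy as np

def lagged_design(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel (samples x channels*lags)."""
    cols = [np.roll(eeg, lag, axis=0) for lag in range(n_lags)]
    X = np.concatenate(cols, axis=1)
    X[:n_lags, :] = 0.0                      # zero out wrapped-around samples
    return X

def train_ridge(X, y, lam=1e3):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    XtX = X.T @ X
    return np.linalg.solve(XtX + lam * np.eye(XtX.shape[0]), X.T @ y)

rng = np.random.default_rng(0)
env_attended = rng.standard_normal(2000)     # attended-speech envelope (made up)
env_ignored = rng.standard_normal(2000)      # competing-speech envelope (made up)
mixing = rng.standard_normal((1, 6))
eeg = env_attended[:, None] @ mixing + 0.5 * rng.standard_normal((2000, 6))  # EEG "tracks" the attended envelope

X = lagged_design(eeg, n_lags=16)
w = train_ridge(X, env_attended)
reconstruction = X @ w

# Attention decoding: pick the speaker whose envelope matches the reconstruction best.
r_attended = np.corrcoef(reconstruction, env_attended)[0, 1]
r_ignored = np.corrcoef(reconstruction, env_ignored)[0, 1]
print(r_attended > r_ignored)   # True -> decoded as attending the first speaker
```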

4.
Prior studies have repeatedly reported behavioural benefits to events occurring at attended, compared to unattended, points in time. It has been suggested that, as for spatial orienting, temporal orienting of attention spreads across sensory modalities in a synergistic fashion. However, the consequences of cross‐modal temporal orienting of attention remain poorly understood. One challenge is that the passage of time leads to an increase in event predictability throughout a trial, thus making it difficult to interpret possible effects (or lack thereof). Here we used a design that avoids complete temporal predictability to investigate whether attending to a sensory modality (vision or touch) at a point in time confers beneficial access to events in the other, non‐attended, sensory modality (touch or vision, respectively). In contrast to previous studies and to what happens with spatial attention, we found that events in one (unattended) modality do not automatically benefit from happening at the time point when another modality is expected. Instead, it seems that attention can be deployed in time with relative independence for different sensory modalities. Based on these findings, we argue that temporal orienting of attention can be cross‐modally decoupled in order to flexibly react according to the environmental demands, and that the efficiency of this selective decoupling unfolds in time.

5.
The orienting of attention to the spatial location of sensory stimuli in one modality based on sensory stimuli presented in another modality (i.e., cross‐modal orienting) is a common mechanism for controlling attentional shifts. The neuronal mechanisms of top‐down cross‐modal orienting have been studied extensively. However, the neuronal substrates of bottom‐up audio‐visual cross‐modal spatial orienting remain to be elucidated. Therefore, behavioral and event‐related functional magnetic resonance imaging (fMRI) data were collected while healthy volunteers (N = 26) performed a spatial cross‐modal localization task modeled after the Posner cuing paradigm. Behavioral results indicated that although both visual and auditory cues were effective in producing bottom‐up shifts of cross‐modal spatial attention, reorienting effects were greater for the visual cues condition. Statistically significant evidence of inhibition of return was not observed for either condition. Functional results also indicated that visual cues with auditory targets resulted in greater activation within ventral and dorsal frontoparietal attention networks, visual and auditory "where" streams, primary auditory cortex, and thalamus during reorienting across both short and long stimulus onset asynchronies. In contrast, no areas of unique activation were associated with reorienting following auditory cues with visual targets. In summary, current results question whether audio‐visual cross‐modal orienting is supramodal in nature, suggesting rather that the initial modality of cue presentation heavily influences both behavioral and functional results. In the context of localization tasks, reorienting effects accompanied by the activation of the frontoparietal reorienting network are more robust for visual cues with auditory targets than for auditory cues with visual targets. Hum Brain Mapp 35:964–974, 2014. © 2013 Wiley Periodicals, Inc.
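In a Posner‐style cueing analysis, the "reorienting effect" is typically the reaction‐time cost of invalid relative to valid cues, and inhibition of return appears when that difference reverses sign at long cue–target SOAs. A toy illustration of that arithmetic (all values invented, not the study's data):

```python
# Cue-validity (reorienting) effect and inhibition of return (IOR) from mean RTs.
# Positive values = cost of reorienting away from the cued location;
# a negative value at the long SOA would indicate IOR.

mean_rt = {  # hypothetical mean reaction times in ms
    ("short_soa", "valid"): 410, ("short_soa", "invalid"): 455,
    ("long_soa",  "valid"): 430, ("long_soa",  "invalid"): 445,
}

for soa in ("short_soa", "long_soa"):
    validity_effect = mean_rt[(soa, "invalid")] - mean_rt[(soa, "valid")]
    print(soa, validity_effect, "ms", "(IOR)" if validity_effect < 0 else "")
```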

6.
Human activities often involve hand‐motor responses following external auditory–verbal commands. It has been believed that hand movements are predominantly driven by the contralateral primary sensorimotor cortex, whereas auditory–verbal information is processed in both superior temporal gyri. It remains unknown whether cortical activation in the superior temporal gyrus during an auditory–motor task is affected by laterality of hand‐motor responses. Here, event‐related γ‐oscillations were intracranially recorded as quantitative measures of cortical activation; we determined how cortical structures were activated by auditory‐cued movement using each hand in 15 patients with focal epilepsy. Auditory–verbal stimuli elicited augmentation of γ‐oscillations in a posterior portion of the superior temporal gyrus, whereas hand‐motor responses elicited γ‐augmentation in the pre‐ and postcentral gyri. The magnitudes of such γ‐augmentation in the superior temporal, precentral, and postcentral gyri were significantly larger when the hand contralateral to the recorded hemisphere was required to be used for motor responses, compared with when the ipsilateral hand was. The superior temporal gyrus in each hemisphere might play a more pivotal role when the contralateral hand needs to be used for motor responses, compared with when the ipsilateral hand does. Hum Brain Mapp, 2010. © 2010 Wiley‐Liss, Inc.
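Event‐related γ‐augmentation of this kind is usually computed as the change of band‐limited amplitude after stimulus onset relative to a pre‐stimulus baseline. A minimal single‐channel sketch with a band‐pass filter and Hilbert envelope (the frequency band, epoch timing, and synthetic signal are assumptions, not the authors' exact analysis):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                   # sampling rate in Hz (assumed)
t = np.arange(-0.5, 1.0, 1 / fs)            # one epoch: 0.5 s baseline, 1 s post-stimulus
rng = np.random.default_rng(1)
ecog = rng.standard_normal(t.size)          # stand-in for one intracranial channel
ecog[t > 0] += 2 * np.sin(2 * np.pi * 80 * t[t > 0])   # injected post-stimulus 80 Hz activity

# Band-pass in a high-gamma band and take the analytic amplitude.
b, a = butter(4, [70, 110], btype="bandpass", fs=fs)
gamma_amplitude = np.abs(hilbert(filtfilt(b, a, ecog)))

baseline = gamma_amplitude[t < 0].mean()
augmentation = 100 * (gamma_amplitude[t > 0].mean() - baseline) / baseline
print(f"gamma augmentation: {augmentation:.0f}% relative to baseline")
```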

7.
Recognizing the identity of other individuals across different sensory modalities is critical for successful social interaction. In the human brain, face‐ and voice‐sensitive areas are separate, but structurally connected. What kind of information is exchanged between these specialized areas during cross‐modal recognition of other individuals is currently unclear. For faces, specific areas are sensitive to identity and to physical properties. It is an open question whether voices activate representations of face identity or physical facial properties in these areas. To address this question, we used functional magnetic resonance imaging in humans and a voice‐face priming design. In this design, familiar voices were followed by morphed faces that matched or mismatched with respect to identity or physical properties. The results showed that responses in face‐sensitive regions were modulated when face identity or physical properties did not match to the preceding voice. The strength of this mismatch signal depended on the level of certainty the participant had about the voice identity. This suggests that both identity and physical property information was provided by the voice to face areas. The activity and connectivity profiles differed between face‐sensitive areas: (i) the occipital face area seemed to receive information about both physical properties and identity, (ii) the fusiform face area seemed to receive identity, and (iii) the anterior temporal lobe seemed to receive predominantly identity information from the voice. We interpret these results within a predictive coding scheme in which both identity and physical property information is used across sensory modalities to recognize individuals. Hum Brain Mapp, 36:324–339, 2015. © 2014 Wiley Periodicals, Inc.

8.
The orphan receptor, GPR88, is emerging as a key player in the pathophysiology of several neuropsychiatric diseases, including psychotic disorders. Knockout (KO) mice lacking GPR88 throughout the brain exhibit many abnormalities relevant to schizophrenia including locomotor hyperactivity, behavioural hypersensitivity to dopaminergic psychostimulants and deficient sensorimotor gating. Here, we used conditional knockout (cKO) mice lacking GPR88 selectively in striatal medium spiny neurons expressing the A2A receptor to determine neuronal circuits underlying these phenotypes. We first studied locomotor responses of A2AR‐Gpr88 KO mice and their control littermates to the psychotomimetic amphetamine and to the selective D1 and D2 receptor agonists SKF‐81297 and quinpirole, respectively. To assess sensorimotor gating performance, mice were submitted to acoustic and visual prepulse inhibition (PPI) paradigms. Total knockout GPR88 mice were also studied for comparison. Like total GPR88 KO mice, A2AR‐Gpr88 KO mice displayed a heightened sensitivity to the locomotor stimulant effects of amphetamine and SKF‐81297. They also exhibited enhanced locomotor activity to quinpirole, which tended to suppress locomotion in control mice. By contrast, they had normal acoustic and visual PPI, unlike total GPR88 KO mice that show impairments across different sensory modalities. Finally, none of the genetic manipulations altered central auditory temporal processing assessed by gap‐PPI. Together, these findings support the role of GPR88 in the pathophysiology of schizophrenia and show that GPR88 in A2A receptor‐expressing neurons modulates psychomotor behaviour but not sensorimotor gating.
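Prepulse inhibition in such paradigms is conventionally expressed as the percent reduction of the startle response when the startling pulse is preceded by a prepulse. A minimal sketch of that standard computation (the amplitudes are invented, not data from the study):

```python
def percent_ppi(startle_pulse_alone, startle_with_prepulse):
    """%PPI = 100 * (1 - startle with prepulse / startle to pulse alone)."""
    return 100.0 * (1.0 - startle_with_prepulse / startle_pulse_alone)

# Hypothetical mean startle amplitudes (arbitrary units).
print(percent_ppi(startle_pulse_alone=850.0, startle_with_prepulse=340.0))  # ~60% inhibition
```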

9.
Sensory axons are targeted to modality-specific nuclei in the thalamus. Retinal ganglion cell axons project retinotopically to their principal thalamic target, the dorsal lateral geniculate nucleus (LGd), in a pattern likely dictated by the expression of molecular gradients in the LGd. Deafferenting the auditory thalamus induces retinal axons to innervate the medial geniculate nucleus (MGN). These retino-MGN projections also show retinotopic organization. Here we show that ephrin-A2 and -A5, which are expressed in similar gradients in the MGN and LGd, can be used to pattern novel retinal projections in the MGN. As in the LGd, retinal axons from each eye terminate in discrete eye-specific zones in the MGN of rewired wild-type and ephrin-A2/A5 knockout mice. However, ipsilateral eye axons, which arise from retinal regions of high EphA5 receptor expression and represent central visual field, terminate in markedly different ways in the two mice. In rewired wild-type mice, ipsilateral axons specifically avoid areas of high ephrin expression in the MGN. In rewired ephrin knockout mice, ipsilateral projections shift in location and spread more broadly, leading to an expanded representation of the ipsilateral eye in the MGN. Similarly, ipsilateral projections to the LGd in ephrin knockout mice are shifted and are more widespread than in the LGd of wild-type mice. In the MGN, as in the LGd, terminations from the two eyes show little overlap even in the knockout mice, suggesting that local interocular segregation occurs regardless of other patterning determinants. Our data demonstrate that graded topographic labels, such as the ephrins, can serve to shape multiple related aspects of afferent patterning, including topographic mapping and the extent and spread of eye-specific projections. Furthermore, when mapping labels and other cues are expressed in multiple target zones, novel projections are patterned according to rules that operate in their canonical targets.

10.
11.
The voluntary allocation of attention to environmental inputs is a crucial mechanism of healthy cognitive functioning, and is probably influenced by an observer's level of interest in a stimulus. For example, an individual who is passionate about soccer but bored by botany will obviously be more attentive at a soccer match than an orchid show. The influence of monetary rewards on attention has been examined, but the impact of more common motivating factors (i.e. the level of interest in the materials under observation) remains unclear, especially during development. Here, stimulus sets were designed based on survey measures of the level of interest of adolescent participants in several item classes. High‐density electroencephalography was recorded during a cued spatial attention task in which stimuli of high or low interest were presented in separate blocks. The motivational impact on performance of a spatial attention task was assessed, along with event‐related potential measures of anticipatory top‐down attention. As predicted, performance was improved for the spatial target detection of high interest items. Further, the impact of motivation was observed in parieto‐occipital processes associated with anticipatory top‐down spatial attention. The anticipatory activity over these regions was also increased for high vs. low interest stimuli, irrespective of the direction of spatial attention. The results also showed stronger anticipatory attentional and motivational modulations over the right vs. left parieto‐occipital cortex. These data suggest that motivation enhances top‐down attentional processes, and can independently shape activations in sensory regions in anticipation of events. They also suggest that attentional functions across hemispheres may not fully mature until late adolescence.

12.
Ascending projections of the dorsal cochlear nucleus (DCN) target primarily the contralateral inferior colliculus (IC). In turn, the IC sends bilateral descending projections back to the DCN. We sought to determine the nature of these descending axons in order to infer circuit mechanisms of signal processing at one of the earliest stages of the central auditory pathway. An anterograde tracer was injected in the IC of CBA/Ca mice to reveal terminal characteristics of the descending axons. Retrograde tracer deposits were made in the DCN of CBA/Ca and transgenic GAD67–EGFP mice to investigate the cells giving rise to these projections. A multiunit best frequency was determined for each injection site. Brains were processed by using standard histologic methods for visualization and examined by fluorescent, brightfield, and electron microscopy. Descending projections from the IC were inferred to be excitatory because the cell bodies of retrogradely labeled neurons did not colabel with EGFP expression in neurons of GAD67–EGFP mice. Furthermore, additional experiments yielded no glycinergic or cholinergic positive cells in the IC, and descending projections to the DCN were colabeled with antibodies against VGluT2, a glutamate transporter. Anterogradely labeled endings in the DCN formed asymmetric postsynaptic densities, a feature of excitatory neurotransmission. These descending projections to the DCN from the IC were topographic and suggest a feedback pathway that could underlie a frequency‐specific enhancement of some acoustic signals and suppression of others. The involvement of this IC–DCN circuit is especially noteworthy when considering the gating of ascending signal streams for auditory processing. J. Comp. Neurol. 525:773–793, 2017. © 2016 Wiley Periodicals, Inc.
