Similar articles
20 similar articles found (search time: 31 ms)
1.
When presented with auditory, visual, or bimodal audiovisual stimuli in a speeded detection/discrimination task, participants fail to respond to the auditory component of the bimodal targets significantly more often than they fail to respond to the visual component. Signal detection theory (SDT) was used to explore the contributions of perceptual (sensitivity shifts) and decisional (shifts in response criteria) factors to this effect, known as the Colavita visual dominance effect. Participants performed a version of the Colavita task that had been modified to allow for SDT analyses. The participants had to detect auditory and visual targets (presented unimodally or bimodally) at their individually determined 75% detection thresholds. The results showed a significant decrease in participants’ sensitivity to auditory stimuli when presented concurrently with visual stimuli (in the absence of any significant change in their response criterion), suggesting that Colavita visual dominance does not simply reflect a decisional effect, but can be explained, at least in part, as a truly perceptual phenomenon. The decrease in sensitivity (to auditory stimuli) may be attributable to the exogenous capture of participants’ attention by the visual component of the bimodal target, thus leaving fewer attentional resources for the processing of the auditory stimulus. The reduction in auditory sensitivity reported here may be considered an example of crossmodal masking.
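The sensitivity/criterion distinction drawn in this abstract can be made concrete. A minimal sketch in Python, assuming the standard equal-variance SDT model; the function name and the example hit/false-alarm rates are illustrative, not values from the study:

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Equal-variance SDT: sensitivity d' and response criterion c
    from hit and false-alarm rates (both must lie strictly in (0, 1))."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Illustrative rates only: auditory detection alone vs. in the bimodal
# condition. A drop in the hit rate with unchanged false alarms lowers d'.
d_uni, c_uni = sdt_indices(0.75, 0.25)
d_bi, c_bi = sdt_indices(0.60, 0.25)
```

Under this model, the reported finding corresponds to a lower d′ for auditory targets in the bimodal condition without a reliable shift in c.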

2.
Previous research has shown that people with one eye have enhanced spatial vision, implying intra-modal compensation for their loss of binocularity. The current experiments investigate whether monocular blindness from unilateral eye enucleation may lead to cross-modal sensory compensation for the loss of one eye. We measured speeded detection and discrimination of audiovisual targets presented as a stream of paired objects and familiar sounds in a group of individuals with monocular enucleation compared to controls viewing binocularly or monocularly. In Experiment 1, participants detected the presence of auditory, visual or audiovisual targets. All participant groups were equally able to detect the targets. In Experiment 2, participants discriminated between the visual, auditory or bimodal (audiovisual) targets. Both control groups showed the Colavita effect, that is, preferential processing of visual over auditory information for the bimodal stimuli. The monocular enucleation group, however, showed no Colavita effect, and further, they demonstrated equal processing of visual and auditory stimuli. This finding suggests a lack of visual dominance and equivalent auditory and visual processing in people with one eye. This may be an adaptive form of sensory compensation for the loss of one eye and could result from recruitment of deafferented visual cortical areas by inputs from other senses.

3.
The effects of postoperative visual and auditory training on a brightness discrimination task were examined after lesions of various structures in the visual system. In Experiment 1, rats were trained to avoid shock with visual intensity cues. Twenty-four hours later, each rat received bilateral lesions in one of the following areas of the visual system: (1) sham, (2) visual cortex (VC), (3) pretectal (PT) area, (4) combined PT/VC, (5) superior colliculus (SC), or (6) combined SC/VC. Six days later, each rat received either training with visual or auditory intensity cues, or no training. The next day all rats were retrained on the preoperative visual avoidance task. All lesions except those in the SC condition produced relearning deficits. Auditory training reduced these deficits significantly more than visual training, except in rats with combined SC/VC lesions. In Experiment 2, sham and combined PT/VC lesion rats were given either direct or reversal intensity training using visual or auditory cues before relearning the visual discrimination. Rats given auditory direct training relearned the task faster than rats given reversal training or visual direct training. Postinjury training with an intact sensory system can enhance functional recovery more effectively than training with the damaged system. The differential effects of direct and reversal training suggest that cross-modal training involves both specific and nonspecific transfer that may be mediated through the VC or the SC.

4.
Rodents are useful animal models in the study of the molecular and cellular mechanisms underlying various neural functions. For studying behavioral properties associated with multisensory functions in rats, we measured the speed and accuracy of target detection with a reaction-time procedure. In the first experiment, we used simple two-alternative-choice tasks, in which spatial cues were presented in either the visual or the auditory modality, and conducted a cross-modal transfer test to determine whether rats recognize amodal spatial information. Rats performed successfully in the cross-modal transfer test, and their speed of responding to sensory stimuli remained constant under a rule-consistent condition despite the change in cue modality. In the second experiment, we developed audiovisual two-alternative-choice tasks, in which auditory and visual stimuli were presented simultaneously but only one of the two modalities was task-relevant, to determine whether the response to stimulation of one modality is enhanced by stimulation of a different modality. When the bimodal stimuli were spatially coincident, detection of the relevant stimulus was speeded, and the size of the effect was comparable to those reported in past studies of humans and other mammals. These results demonstrate cross-modal spatial abilities in rats, and our paradigms may provide useful behavioral tasks for studying the neural bases of multisensory processing and integration in rats.

5.
Functional MRI (fMRI) combined with paired-stimulus paradigms (referred to as dynamic fMRI) was used to study the “illusory double-flash” effect on brain activity in the human visual cortex. Three experiments were designed. The first two experiments examined the cross-modal neural interaction between the visual and auditory sensory systems caused by the illusory double-flash effect, using combined auditory (beep sound) and visual (light flash) stimuli. The fMRI signal in the visual cortex was significantly increased in response to the illusory double flashes compared to the physical single flash when the inter-stimulus delay between the auditory and visual stimuli was 25 ms. This increase disappeared when the delay was prolonged to ~300 ms. These results reveal that the illusory double-flash effect can significantly affect brain activity in the visual cortex, and that the degree of this effect is dynamically sensitive to the inter-stimulus delay. The third experiment addressed the spatial differentiation of brain activation in the visual cortex in response to the illusory double-flash stimulation. It was found that the illusory double-flash effect in the human visual cortex is much stronger in the periphery than in the fovea. This finding suggests that the periphery may be involved in high-level brain processing beyond retinotopic visual perception. The behavioral measures conducted in this study indicate an excellent correlation between the fMRI results and behavioral performance. Finally, this work demonstrates a unique merit of fMRI for providing both temporal and spatial information regarding cross-modal neural interaction between different sensory systems.

6.
When the brain is deprived of input from one sensory modality, it often compensates with supranormal performance in one or more of the intact sensory systems. In the absence of acoustic input, it has been proposed that cross-modal reorganization of deaf auditory cortex may provide the neural substrate mediating compensatory visual function. We tested this hypothesis using a battery of visual psychophysical tasks and found that congenitally deaf cats, compared with hearing cats, have superior localization in the peripheral field and lower visual movement detection thresholds. In the deaf cats, reversible deactivation of posterior auditory cortex selectively eliminated superior visual localization abilities, whereas deactivation of the dorsal auditory cortex eliminated superior visual motion detection. Our results indicate that enhanced visual performance in the deaf is caused by cross-modal reorganization of deaf auditory cortex and that it is possible to localize individual visual functions in discrete portions of reorganized auditory cortex.

7.
Information from the different senses is seamlessly integrated by the brain in order to modify our behaviors and enrich our perceptions. It is only through the appropriate binding and integration of information from the different senses that a meaningful and accurate perceptual gestalt can be generated. Although a great deal is known about how such cross-modal interactions influence behavior and perception in the adult, there is little knowledge as to the impact of aging on these multisensory processes. In the current study, we examined the speed of discrimination responses of aged and young individuals to the presentation of visual, auditory or combined visual-auditory stimuli. Although the presentation of multisensory stimuli speeded response times in both groups, the performance gain was significantly greater in the aged. Most strikingly, multisensory stimuli restored response times in the aged to those seen in young subjects responding to the faster of the two unisensory stimuli (i.e., visual). The current results suggest that despite the decline in sensory processing that accompanies aging, the use of multiple sensory channels may represent an effective compensatory strategy to overcome these unisensory deficits.
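The performance gain described above is commonly quantified as the speeding of the bimodal response relative to the faster of the two unisensory responses. A minimal sketch in Python; the function name and the mean reaction times are hypothetical illustrations, not data from the study:

```python
def multisensory_gain(rt_visual_ms, rt_auditory_ms, rt_bimodal_ms):
    """Percent response-time gain of the bimodal condition relative to
    the faster of the two unisensory conditions."""
    fastest_uni = min(rt_visual_ms, rt_auditory_ms)
    return 100.0 * (fastest_uni - rt_bimodal_ms) / fastest_uni

# Hypothetical mean RTs (ms), illustrating a larger gain in the aged group
gain_young = multisensory_gain(400.0, 480.0, 380.0)
gain_aged = multisensory_gain(520.0, 600.0, 455.0)
```

With these illustrative numbers the young group gains 5% and the aged group 12.5%, mirroring the pattern of a greater multisensory benefit in the aged.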

8.
It has been argued that both modality-specific and supramodal mechanisms dedicated to time perception underlie the estimation of interval durations. While it is generally assumed that early sensory areas are dedicated to modality-specific time estimation, we hypothesized that early sensory areas such as the primary visual cortex or the auditory cortex might be involved in time perception independently of the sensory modality of the input. To test this possibility, we examined whether disruption of the primary visual cortex or the auditory cortex would disrupt time estimation of auditory stimuli and visual stimuli using transcranial magnetic stimulation (TMS). We found that disruption of the auditory cortex impaired not only time estimation of auditory stimuli but also that of visual stimuli to the same degree. This finding suggests a supramodal role of the auditory cortex in time perception. On the other hand, TMS over the primary visual cortex impaired performance only in visual time discrimination. These asymmetric contributions of the auditory and visual cortices in time perception may be explained by a superiority of the auditory cortex in temporal processing. Here, we propose that time is primarily encoded in the auditory system and that visual inputs are automatically encoded into an auditory representation in time discrimination tasks.

9.
Microelectrode studies in nonhuman primates and other mammals have demonstrated that many neurons in auditory cortex are excited by pure tone stimulation only when the tone's frequency lies within a narrow range of the audible spectrum. However, the effects of auditory cortex lesions in animals and humans have been interpreted as evidence against the notion that neuronal frequency selectivity is functionally relevant to frequency discrimination. Here we report psychophysical and anatomical evidence in favor of the hypothesis that fine-grained frequency resolution at the perceptual level relies on neuronal frequency selectivity in auditory cortex. An adaptive procedure was used to measure difference thresholds for pure tone frequency discrimination in five humans with focal brain lesions and eight normal controls. Only the patient with bilateral lesions of primary auditory cortex and surrounding areas showed markedly elevated frequency difference thresholds: Weber fractions for frequency direction discrimination ("higher"-"lower" pitch judgments) were about eightfold higher than Weber fractions measured in patients with unilateral lesions of auditory cortex, auditory midbrain, or dorsolateral frontal cortex; Weber fractions for frequency change discrimination ("same"-"different" pitch judgments) were about seven times higher. In contrast, pure-tone detection thresholds, difference thresholds for pure tone duration discrimination centered at 500 ms, difference thresholds for vibrotactile intensity discrimination, and judgments of visual line orientation were within normal limits or only mildly impaired following bilateral auditory cortex lesions. In light of current knowledge about the physiology and anatomy of primate auditory cortex and a review of previous lesion studies, we interpret the present results as evidence that fine-grained frequency processing at the perceptual level relies on the integrity of finely tuned neurons in auditory cortex.
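The Weber fractions reported here are simply the frequency difference threshold divided by the standard frequency. A minimal sketch in Python; the numeric thresholds and the 1000-Hz standard are hypothetical illustrations, not values from the study:

```python
def weber_fraction(delta_f_hz, standard_hz):
    """Weber fraction for frequency discrimination: the smallest reliably
    discriminable frequency change relative to the standard frequency."""
    return delta_f_hz / standard_hz

# Hypothetical difference thresholds at a 1000-Hz standard, illustrating
# the roughly eightfold elevation after bilateral auditory cortex lesions
wf_unilateral = weber_fraction(8.0, 1000.0)
wf_bilateral = weber_fraction(64.0, 1000.0)
```

With these illustrative numbers, the bilateral-lesion Weber fraction (0.064) is eight times the unilateral-lesion one (0.008), matching the pattern the abstract describes.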

10.
Our previous findings suggest that audio-visual synchrony perception is based on the matching of salient temporal features selected from each sensory modality through bottom-up segregation or by top-down attention to a specific spatial position. This study examined whether top-down attention to a specific feature value is also effective in the selection of cross-modal matching features. In the first experiment, the visual stimulus was a pulse train in which a flash randomly appeared with a probability of 6.25%, 12.5% or 25% in every 6.25-ms interval. Four flash colors appeared at random with equal probability, and one of them was selected as the target color on each trial. The paired auditory stimulus was a single-pitch pip sequence that had the same temporal structure as the target-color flashes, presented in synchrony with the target flashes (synchronous stimulus) or with a 250-ms relative shift (asynchronous stimulus). The task of the participants was synchrony-asynchrony discrimination, with the target color either indicated to the participant by a probe (with-probe condition) or not (without-probe condition). In another control condition, there was no correlation between color and auditory signals (color-shuffled). In the second experiment, the roles of the visual and auditory stimuli were exchanged. The results show that performance in synchrony-asynchrony discrimination was worst in the color/pitch-shuffled condition but best in the with-probe condition, where the observer knew beforehand which color/pitch should be matched with the signal of the other modality. This suggests that top-down, feature-based attention can aid feature selection for audio-visual synchrony discrimination even when bottom-up segmentation processes cannot uniquely determine salient features. The observed feature-based selection, however, is not as effective as position-based selection.

11.
Repetition blindness (RB) is a visual deficit, wherein observers fail to perceive the second occurrence of a repeated item in a rapid serial visual presentation stream. Chen and Yeh (Psychon Bull Rev 15:404–408, 2008) recently observed a reduction of the RB effect when the repeated items were accompanied by two sounds. The current study further manipulated the pitch of the two sounds (same versus different) in order to examine whether this cross-modal facilitation effect is caused by the multisensory enhancement of the visual event by sound, or multisensory Gestalt (perceptual grouping) of a new representation formed by combining the visual and auditory inputs. The results showed robust facilitatory effects of sound on RB regardless of the pitch of the sounds (Experiment 1), despite an effort to further increase the difference in pitch (Experiment 2). Experiment 3 revealed a close link between participants’ awareness of pitch and the effect of pitch on the RB effect. We conclude that the facilitatory effect of sound on RB results from multisensory enhancement of the perception of visual events by auditory signals.

12.
Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into account that sensory cortex may become substantially more multisensory after alteration of its input during development.

13.
To investigate possible cross-modal reorganization of the primary auditory cortex (field A1) in congenitally deaf cats after years of auditory deprivation, multiunit activity and local field potentials were recorded in lightly anesthetized animals and compared with responses obtained in hearing cats. Local field potentials were also used for current source-density analyses. For visual stimulation, phase-reversal gratings of three to five different spatial frequencies and three to five different orientations were presented at the point of central vision. The peripheral visual field was tested using hand-held stimuli (a light bar-shaped stimulus of different orientations, moved in different directions and flashed) of the kind typically used for neurophysiological characterization of visual fields. In 200 multiunit recordings, no response to visual stimuli could be found in A1 of any of the investigated animals. Using current source-density analysis of the local field potentials, no local generators of field potentials could be found within A1, despite the presence of small local field potentials. No multiunit responses to somatosensory stimulation (whiskers, face, pinna, head, neck, all paws, back, tail) could be obtained. In conclusion, there were no indications of cross-modal (visual or somatosensory) reorganization of area A1 in congenitally deaf cats.

14.
Perceptual objects often comprise a visual and auditory signature that arrives simultaneously through distinct sensory channels, and cross-modal features are linked by virtue of being attributed to a specific object. Continued exposure to cross-modal events sets up expectations about what a given object most likely "sounds" like, and vice versa, thereby facilitating object detection and recognition. The binding of familiar auditory and visual signatures is referred to as semantic, multisensory integration. Whereas integration of semantically related cross-modal features is behaviorally advantageous, situations of sensory dominance of one modality at the expense of another impair performance. In the present study, magnetoencephalography recordings of semantically related cross-modal and unimodal stimuli captured the spatiotemporal patterns underlying multisensory processing at multiple stages. At early stages, 100 ms after stimulus onset, posterior parietal brain regions responded preferentially to cross-modal stimuli irrespective of task instructions or the degree of semantic relatedness between the auditory and visual components. As participants were required to classify cross-modal stimuli into semantic categories, activity in superior temporal and posterior cingulate cortices increased between 200 and 400 ms. As task instructions changed to incorporate cross-modal conflict, a process whereby auditory and visual components of cross-modal stimuli were compared to estimate their degree of congruence, multisensory processes were captured in parahippocampal, dorsomedial, and orbitofrontal cortices 100 and 400 ms after stimulus onset. Our results suggest that multisensory facilitation is associated with posterior parietal activity as early as 100 ms after stimulus onset. However, as participants are required to evaluate cross-modal stimuli based on their semantic category or their degree of congruence, multisensory processes extend in cingulate, temporal, and prefrontal cortices.

15.
Pigeons were trained on a cross-modal conditional discrimination task in which the subjects were required to select visual comparison stimuli depending upon auditory conditional stimuli. In the simultaneous condition, the auditory conditional stimuli were presented until the subjects selected one of the two visual stimuli. All subjects attained above 80% correct responses within 21-64 sessions. After the simultaneous condition, all subjects were trained in 0-, 1-, 2- and 3-s delayed conditions. They showed above 80%-90% correct responses in the 0-s delayed condition. In the 1-s and 2-s delayed conditions, the percentage of correct responses did not differ significantly from that in the 0-s delayed condition. The results indicate that pigeons can learn an audio-visual cross-modal conditional discrimination task in both simultaneous and delayed conditions, and that they have no greater difficulty learning it than learning a visual (unimodal) conditional discrimination.

16.
Recent findings suggest that neural representations in early auditory cortex reflect not only the physical properties of a stimulus, but also high-level, top-down, and even cross-modal information. However, the nature of cross-modal information in auditory cortex remains poorly understood. Here, we used pattern analyses of fMRI data to ask whether early auditory cortex contains information about the visual environment. Our data show that 1) early auditory cortex contained information about a visual stimulus when there was no bottom-up auditory signal, and that 2) no influence of visual stimulation was observed in auditory cortex when visual stimuli did not provide a context relevant to audition. Our findings attest to the capacity of auditory cortex to reflect high-level, top-down, and cross-modal information and indicate that the spatial patterns of activation in auditory cortex reflect contextual/implied auditory information but not visual information per se.

17.
The electrotactile two-point discrimination threshold (TPDT) was considered as a design concept for multichannel artificial sensory communication displays. Data relating the two-point discrimination threshold to frequency for three stimulation codes were used for the analysis and specification of three classes of optimal displays: space-optimal, frequency-optimal, and space-frequency-optimal. A table was constructed showing alternative display configurations for various applications, and a design procedure for optimizing each class of display was developed. Possible applications of each display class to various sensory augmentation requirements in the rehabilitation of handicapped persons were identified for the tactile, kinesthetic, visual, and auditory categories.

18.
Evidence of peripheral filtering of auditory information at the cochlear and brainstem levels was sought using brainstem auditory evoked potentials (BAEPs) recorded during auditory and visual tasks. It can be argued that the discrimination tasks used in the past to investigate peripheral filtering of sensory information in humans involve two levels of discrimination, the consequences of which result in two independent types of inhibition: crossmodal inhibition as a result of between-modality discrimination, and intramodal inhibition as a result of within-modality discrimination. Therefore, the observed effects on the BAEPs may reflect the extent to which these two types of inhibition are engaged. In this investigation a paradigm that included two non-discrimination (passive) tasks and two discrimination (active) tasks was employed. BAEPs recorded during listening (a passive auditory task) provided a baseline measure, against which comparisons of BAEPs recorded during auditory and visual discrimination could be made for independent evidence of crossmodal and intramodal inhibition. The data in this study did not support the presence of the two types of inhibition proposed above, nor did they show evidence of peripheral filtering of auditory information at the cochlear and brainstem levels. However, the sensitivity of BAEPs to efferent system activation at the cochlea, and hence their value as a tool in investigations of peripheral filtering in humans, was questioned.

19.
The brain combines information from different senses to improve performance on perceptual tasks. For instance, auditory processing is enhanced by the mere fact that a visual input is processed simultaneously. However, the sensory processing of one modality is itself subject to diverse influences. Namely, perceptual processing depends on the degree to which a stimulus is predicted. The present study investigated the extent to which the influence of one processing pathway on another pathway depends on whether or not the stimulation in this pathway is predicted. We used an action–effect paradigm to vary the match between incoming and predicted visual stimulation. Participants triggered a bimodal stimulus composed of a Gabor and a tone. The Gabor was either congruent or incongruent with respect to an action–effect association that participants learned in an acquisition phase. We tested the influence of action–effect congruency on the loudness perception of the tone. We observed that an incongruent, task-irrelevant Gabor stimulus increases participants’ sensitivity in loudness discrimination. An identical result was obtained for a second condition in which the visual stimulus was predicted by a cue instead of an action. Our results suggest that prediction error is a driving factor of the crossmodal interplay between vision and audition.

20.
Short latency evoked potentials were recorded during a cross-modal selective attention task to evaluate recent proposals that sensory transmission in the peripheral auditory and visual pathways can be modified selectively by centrifugal mechanisms in humans. Twenty young adult subjects attended in turn to either left-ear tones or right-field flashes presented in a randomized sequence, in order to detect infrequent, lower-intensity targets. Attention-related enhancement of longer-latency components, including the visual P105 and the auditory N1/Nd waves and T-complex, showed that subjects were able to adopt a selective sensory set toward either modality. Neither the auditory evoked brainstem potentials nor the early visual components (electroretinogram, occipito-temporal N40, P50, N70 waves) were significantly affected by attention. Measures of retinal B-waves were significantly reduced in amplitude when attention was directed to the flashes, but concurrent recordings of eyelid electromyographic activity and the electro-oculogram indicated that this effect may have resulted from contamination of the retinal recordings by blink microreflex activity. A trend toward greater positivity in the 15-50 ms latency range for auditory evoked potentials to attended tones was observed. These results provide further evidence that the earliest levels of sensory transmission are unaffected by cross-modal selective attention, but that longer latency exogenous and endogenous potentials are enhanced to stimuli in the attended modality.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号