Similar literature
20 similar articles found.
1.
Crossmodal spatial integration between auditory and visual stimuli is a common phenomenon in space perception. The principles underlying such integration have been outlined by neurophysiological and behavioral studies in animals; this study investigated whether the integrative effects observed in animals also apply to humans. In this experiment we systematically varied the spatial disparity (0°, 16°, and 32°) and the temporal interval (0, 100, 200, 300, 400, and 500 ms) between the visual and the auditory stimuli. Normal subjects were required to detect visual stimuli presented below threshold either in unimodal visual conditions or in crossmodal audiovisual conditions. Signal detection measures were used. An enhancement of perceptual sensitivity (d′) for luminance detection was found when the audiovisual stimuli followed a simple spatial and temporal rule governing multisensory integration at the neuronal level.
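As a brief illustration of the signal detection analysis mentioned above, the sketch below computes perceptual sensitivity (d′) from hit and false-alarm counts. The counts and the log-linear correction are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch: perceptual sensitivity (d') in a yes/no detection task.
# Counts are hypothetical; the log-linear correction is one common choice.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate)."""
    # Add 0.5 to each cell so rates of exactly 0 or 1 stay finite under z.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical comparison: unimodal visual vs. crossmodal audiovisual blocks.
print(d_prime(hits=55, misses=45, false_alarms=20, correct_rejections=80))  # unimodal
print(d_prime(hits=70, misses=30, false_alarms=20, correct_rejections=80))  # audiovisual
```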

2.
It has been shown that stimuli of a task-irrelevant modality receive enhanced processing when they are presented at an attended location in space (crossmodal attention). The present study investigated the effects of visual deprivation on the interaction of the intact sensory systems. Random streams of tactile and auditory stimuli were presented at the left or right index finger of congenitally blind participants, who had to attend to one modality (auditory or tactile) on one side (left or right) and respond to deviant stimuli of the attended modality and side. In a group of sighted participants, early event-related potentials (ERPs) were negatively displaced for stimuli presented at the attended position compared with the unattended position, for both the task-relevant and the task-irrelevant modality, starting as early as 80 ms after stimulus onset (unimodal and crossmodal spatial attention effects, respectively); no corresponding crossmodal effects were detected in the blind. In the sighted, spatial attention effects after 200 ms were significant only for the task-relevant modality, whereas in the blind a crossmodal effect was observed in this late time window. This positive rather than negative effect possibly indicates active suppression of task-irrelevant stimuli at an attended location in space. The present data suggest that developmental visual input is essential for using space to integrate input from the non-visual modalities, possibly because of its high spatial resolution. Alternatively, enhanced perceptual skills of the blind within the intact modalities may result in reduced multisensory interactions ("inverse effectiveness of multisensory integration").

3.
We investigated the extent to which intramodal visual perceptual grouping influences the multisensory integration (or grouping) of auditory and visual motion information. Participants discriminated the direction of motion of two sequentially presented sounds (moving leftward or rightward), while simultaneously trying to ignore a task-irrelevant visual apparent motion stream. The principles of perceptual grouping were used to vary the direction and extent of apparent motion within the irrelevant modality (vision). The results demonstrate that the multisensory integration of motion information can be modulated by the perceptual grouping taking place unimodally within vision, suggesting that unimodal perceptual grouping processes precede multisensory integration. The present study therefore illustrates how intramodal and crossmodal perceptual grouping processes interact to determine how the information in complex multisensory environments is parsed.

4.
Temporally synchronous auditory cues can facilitate participants' performance on dynamic visual search tasks. Making auditory cues spatially informative with regard to the target location can reduce search latencies still further. In the present study, we investigated how multisensory integration and temporal and spatial attention might conjointly influence participants' performance on an elevation discrimination task for a masked visual target presented in a rapidly changing sequence of masked visual distractors. Participants were presented with spatially uninformative (centrally presented), spatially valid (on the target side), or spatially invalid tones that were synchronous with the presentation of the visual target. Participants responded significantly more accurately following spatially valid cues than following uninformative or invalid auditory cues. Participants endogenously shifted their attention to the likely target location indicated by the valid spatial auditory cue (reflecting top-down, cognitive processing mechanisms), which facilitated their processing of the visual target over and above any bottom-up benefit associated solely with the synchronous presentation of the auditory and visual stimuli. The results of the present study therefore suggest that crossmodal attention (both spatial and temporal) and multisensory integration can work in parallel to help people respond most efficiently to multisensory information.

5.
Perceptual grouping impairs temporal resolution
Performance on multisensory temporal order judgment (TOJ) tasks is enhanced when the sensory stimuli are presented at different locations rather than the same location. In our first experiment, we replicated this result for spatially separated stimuli within the visual modality. In Experiment 2, we investigated the effect of perceptual grouping on this spatial effect. Observers performed a visual TOJ task in which two stimuli were presented in a configuration that either encouraged perceptual grouping or did not (one- and two-object conditions, respectively). Despite a constant spatial disparity between targets across the two conditions, a smaller just noticeable difference (i.e., better temporal resolution) was found when the two targets formed two objects than when they formed one. This effect of perceptual grouping persisted in Experiment 3, in which we controlled for apparent motion by systematically varying the spatial distance between the targets. Thus, in contrast to the putative same-object advantage observed in spatial discrimination tasks, these findings indicate that perceptual grouping impairs visual temporal resolution.
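For readers unfamiliar with how a just noticeable difference (JND) is extracted from TOJ data, here is a minimal sketch: fit a cumulative Gaussian psychometric function to the proportion of "second stimulus first" responses across stimulus onset asynchronies (SOAs). All SOAs and proportions below are hypothetical illustration values, not data from the study.

```python
# Minimal sketch: JND and PSS from a temporal order judgment (TOJ) task.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soa = np.array([-90, -60, -30, 0, 30, 60, 90])                    # ms, onset of target 2 minus target 1
p_second_first = np.array([.05, .12, .30, .52, .71, .88, .96])    # hypothetical P("second target first")

def cum_gauss(x, pss, sigma):
    # Psychometric function: cumulative Gaussian centered on the PSS.
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_second_first, p0=[0, 40])
jnd = sigma * norm.ppf(0.75)   # SOA separating the 50% and 75% points
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```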

6.
The brain integrates information from multiple sensory modalities and, through this process, generates a coherent and apparently seamless percept of the external world. Although multisensory integration typically binds information that is derived from the same event, when multisensory cues are somewhat discordant they can result in illusory percepts such as the ventriloquism effect. These biases in stimulus localization are generally accompanied by the perceptual unification of the two stimuli. In the current study, we sought to further elucidate the relationship between localization biases, perceptual unification, and measures of a participant's uncertainty in target localization (i.e., variability). Participants performed an auditory localization task in which they were also asked to report whether they perceived the auditory and visual stimuli to be perceptually unified. The auditory and visual stimuli were delivered at a variety of spatial (0°, 5°, 10°, 15°) and temporal (200, 500, 800 ms) disparities. Localization bias and reports of perceptual unity occurred even with substantial spatial (i.e., 15°) and temporal (i.e., 800 ms) disparities. Trial-by-trial comparison of these measures revealed a striking correlation: regardless of their disparity, whenever the auditory and visual stimuli were perceived as unified, they were localized at or very near the light. In contrast, when the stimuli were perceived as not unified, auditory localization was often biased away from the visual stimulus. Furthermore, localization variability was significantly lower when the stimuli were perceived as unified. Intriguingly, on non-unity trials such variability increased with decreasing disparity. Together, these results suggest strong and potentially mechanistic links between the multiple facets of multisensory integration that contribute to our perceptual Gestalt.
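A conventional way to express the localization bias discussed above is the shift of the auditory response toward the visual stimulus as a percentage of the audiovisual disparity. The sketch below uses hypothetical angles, not values from the study.

```python
# Minimal sketch: ventriloquism bias as percentage visual capture.
# All positions are hypothetical, in degrees of azimuth.
def ventriloquism_bias(reported, auditory, visual):
    """100% = response fully captured by the light, 0% = no bias."""
    return 100.0 * (reported - auditory) / (visual - auditory)

# Sound at 0 deg, light at 15 deg, mean localization response at 12 deg.
print(f"{ventriloquism_bias(reported=12, auditory=0, visual=15):.0f}% bias")  # 80%
```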

7.
Active tool use in human and non-human primates has been claimed to alter the neural representations of multisensory peripersonal space. To date, most studies suggest that a short period of tool use leads to an expansion or elongation of these spatial representations, which lasts several minutes after the last tool use action. However, the possibility that multisensory interactions also change on a much shorter time scale following or preceding individual tool use movements has not yet been investigated. We measured crossmodal (visual-tactile) congruency effects as an index of multisensory integration during two tool use tasks. In the regular tool use task, the participants used one of two tools in a spatiotemporally predictable sequence after every fourth crossmodal congruency trial. In the random tool use task, the required timing and spatial location of the tool use task varied unpredictably. Multisensory integration effects increased as a function of the number of trials since tool use in the regular tool use group, but remained relatively constant in the random tool use group. The spatial distribution of these multisensory effects, however, was unaffected by tool use predictability, with significant spatial interactions found only near the hands and at the tips of the tools. These data suggest that endogenously preparing to use a tool enhances visual-tactile interactions near the tools. Such enhancements are likely due to the increased behavioural relevance of visual stimuli as each tool use action is prepared before execution.
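As background for the measure used above, the crossmodal congruency effect (CCE) is typically computed as the reaction-time (or accuracy) cost on spatially incongruent visual-tactile trials relative to congruent ones. The reaction times and grouping below are hypothetical illustrations, not the study's data.

```python
# Minimal sketch: crossmodal congruency effect (CCE) from hypothetical RTs (ms),
# grouped by how many congruency trials have elapsed since the last tool use.
import numpy as np

rts = {  # (trials since tool use, congruency) -> reaction times in ms
    (1, "congruent"): [512, 530, 505], (1, "incongruent"): [548, 561, 540],
    (4, "congruent"): [508, 522, 515], (4, "incongruent"): [590, 604, 585],
}
for n in (1, 4):
    cce = np.mean(rts[(n, "incongruent")]) - np.mean(rts[(n, "congruent")])
    print(f"{n} trial(s) after tool use: CCE = {cce:.0f} ms")
```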

8.
Synesthetic congruency modulates the temporal ventriloquism effect
People sometimes find it easier to judge the temporal order in which two visual stimuli have been presented if one tone is presented before the first visual stimulus and a second tone is presented after the second visual stimulus. This enhancement of people's visual temporal sensitivity has been attributed to the temporal ventriloquism of the visual stimuli toward the temporally proximate sounds, resulting in an expansion of the perceived interval between the two visual events. In the present study, we demonstrate that the synesthetic congruency between the auditory and visual stimuli (in particular, between the relative pitch of the sounds and the relative size of the visual stimuli) can modulate the magnitude of this multisensory integration effect: The auditory capture of vision is larger for pairs of auditory and visual stimuli that are synesthetically congruent than for pairs of stimuli that are synesthetically incongruent, as reflected by participants' increased sensitivity in discriminating the temporal order of the visual stimuli. These results provide the first evidence that multisensory temporal integration can be affected by the synesthetic congruency between the auditory and visual stimuli that happen to be presented.

9.
The importance of multisensory integration for human behavior and perception is well documented, as is the impact of temporal synchrony in driving such integration: the more temporally coincident two sensory inputs from different modalities are, the more likely they are to be perceptually bound. This temporal integration process is captured by the construct of the temporal binding window, the range of temporal offsets within which an individual is able to perceptually bind inputs across sensory modalities. Recent work has shown that this window is malleable and can be narrowed via a multisensory perceptual feedback training process. In the current study, we sought to extend this work by examining the malleability of the multisensory temporal binding window through changes in unisensory experience. Specifically, we measured the ability of visual perceptual feedback training to induce changes in the multisensory temporal binding window. Visual perceptual training with feedback successfully improved temporal visual processing and, more importantly, increased temporal precision across modalities, which manifested as a narrowing of the multisensory temporal binding window. These results are the first to establish the ability of unisensory temporal training to modulate multisensory temporal processes, findings that can provide mechanistic insights into multisensory integration and that may have a host of practical applications.
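One common way to quantify the temporal binding window described above is to fit a Gaussian to the proportion of "simultaneous" responses across audiovisual offsets and take a width measure from the fit. The data and the one-standard-deviation window definition below are illustrative assumptions, not the study's method or results.

```python
# Minimal sketch: estimating a multisensory temporal binding window by fitting
# a Gaussian to "simultaneous" response rates across audiovisual offsets.
import numpy as np
from scipy.optimize import curve_fit

soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])   # ms, audio minus visual onset
p_simult = np.array([.10, .25, .55, .85, .95, .90, .70, .40, .15])  # hypothetical proportions

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-(x - mu)**2 / (2 * sigma**2))

(amp, mu, sigma), _ = curve_fit(gaussian, soa, p_simult, p0=[1, 0, 150])
window = 2 * sigma   # one possible definition: +/- one SD around the peak
print(f"peak at {mu:.0f} ms, binding window ~ {window:.0f} ms")
```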

10.
Assessing the intentions, direction, and velocity of others is necessary for most daily tasks, and such information is often made available by both visual and auditory motion cues. It is therefore not surprising that we are highly skilled at perceiving human motion. Here, we explore the multisensory integration of cues to biological-motion walking speed. After testing for audiovisual asynchronies (visual signals led auditory ones by 30 ms within simultaneity temporal windows of 76.4 ms), in the main experiment visual, auditory, and bimodal stimuli were compared to a standard audiovisual walker in a velocity discrimination task. The variance reduction in the results conformed to optimal integration of congruent bimodal stimuli across all subjects. Interestingly, the perceptual judgements were still close to optimal for stimuli at the smallest level of incongruence. Comparison of slopes allows us to estimate an integration window of about 60 ms, which is smaller than that reported for audiovisual speech.
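The "optimal integration" benchmark invoked above is the standard maximum-likelihood prediction that combining two cues reduces the variance of the bimodal estimate below either unimodal variance. The sketch below shows that formula with hypothetical unimodal noise levels; it is not the study's analysis code.

```python
# Minimal sketch: maximum-likelihood (optimal) cue integration predictions.
def optimal_integration(sigma_a, sigma_v):
    """Return the MLE-predicted bimodal sigma and the weight given to vision."""
    var_a, var_v = sigma_a**2, sigma_v**2
    var_av = var_a * var_v / (var_a + var_v)   # combined variance is always reduced
    w_v = var_a / (var_a + var_v)              # more weight to the more reliable cue
    return var_av**0.5, w_v

# Hypothetical unimodal discrimination noise (arbitrary velocity units).
sigma_av, w_v = optimal_integration(sigma_a=0.8, sigma_v=0.5)
print(f"predicted bimodal sigma = {sigma_av:.3f}, visual weight = {w_v:.2f}")
```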

11.
12.
This study reports an experiment investigating the relative effects of intramodal, crossmodal and bimodal cues on visual and auditory temporal order judgements. Pairs of visual or auditory targets, separated by varying stimulus onset asynchronies, were presented to either side of a central fixation (±45°), and participants were asked to identify the target that had occurred first. In some of the trials, one of the targets was preceded by a short, non-predictive visual, auditory or audiovisual cue stimulus. The cue and target stimuli were presented at the exact same locations in space. The point of subjective simultaneity revealed a consistent spatiotemporal bias towards targets at the cued location. For the visual targets, the intramodal cue elicited the largest, and the crossmodal cue the smallest, bias. The bias elicited by the bimodal cue fell between the intramodal and crossmodal cue biases, with significant differences between all cue types. The pattern for the auditory targets was similar apart from a scaling factor and greater variance, so the differences between the cue conditions did not reach significance. These results provide evidence for multisensory integration in exogenous attentional cueing. The magnitude of the bimodal cueing effect was equivalent to the average of the facilitation elicited by the intramodal and crossmodal cues. Under the assumption that the visual and auditory cues were equally informative, this is consistent with the notion that exogenous attention, like perception, integrates multimodal information in an optimal way.
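To make the averaging claim above concrete: under the stated equal-informativeness assumption, optimal combination predicts a bimodal cueing effect equal to the mean of the two unimodal effects. The effect sizes below are hypothetical.

```python
# Minimal sketch: equal-reliability averaging prediction for cueing effects (ms).
intramodal, crossmodal = 24.0, 12.0                  # hypothetical unimodal biases
bimodal_predicted = (intramodal + crossmodal) / 2    # optimal average under equal weights
print(f"predicted bimodal cueing effect: {bimodal_predicted:.0f} ms")
```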

13.
People treated for bilateral congenital cataracts offer a model for studying the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating integrated processing of modality-specific information. This finding contrasts with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on the availability of visual and/or crossmodal input from birth.

14.
We adapted the crossmodal dynamic capture task to investigate the modulation of visuotactile crossmodal integration by unimodal visual perceptual grouping. The influence of finger posture on this interaction was also explored. Participants were required to judge the direction of a tactile apparent motion stream (moving either to the left or to the right) presented to their crossed or uncrossed index fingers. The participants were instructed to ignore a distracting visual apparent motion stream, comprising either 2 or 6 lights presented concurrently with the tactile stimuli. More crossmodal dynamic capture of the direction of the tactile apparent motion stream by the visual apparent motion stream was observed in the 2-lights condition than in the 6-lights condition. This interaction was not modulated by finger posture. These results suggest that visual intramodal perceptual grouping constrains the crossmodal binding of visual and tactile apparent motion information, irrespective of finger posture.

15.
Hu H, Li J, Zhang Z, Yu M. Neuroscience Letters, 2011, 500(1): 10-15
Visuo-tactile integration occurs in a privileged way in peripersonal space, namely when visual and tactile stimuli are in spatial proximity. Here, we investigated whether crossmodal spatial effects (i.e., stronger crossmodal interactions for spatially congruent compared to incongruent visual and tactile stimuli) are also present when visual stimuli presented near the body are indirectly viewed in a mirror, thus appearing in far space. Participants had to attend to one of their hands throughout a block of stimuli in order to detect infrequent tactile target stimuli at that hand, while ignoring tactile targets at the unattended hand, all tactile non-target stimuli, and any visual stimuli. Visual stimuli were presented simultaneously with tactile stimuli, in the same (congruent) or opposite (incongruent) hemispace with respect to the tactile stimuli. In one group of participants the visual stimuli were delivered near the participants' hands and were observed as indirect mirror reflections ('mirror' condition), while in the other group they were presented at a distance from the hands ('far' condition). The main finding was that crossmodal spatial modulations of ERPs recorded over and close to somatosensory cortex were present in the 'mirror' condition but not the 'far' condition. That is, ERPs were enhanced in response to tactile stimuli coupled with spatially congruent versus incongruent visual stimuli when the latter were viewed through a mirror. These effects emerged around 190 ms after stimulus onset and were modulated by the focus of spatial attention. These results provide evidence that visual stimuli observed in far space via a mirror are coded as near-the-body stimuli according to their known rather than their perceived location. This suggests that crossmodal interactions between vision and touch may be modulated by previous knowledge of reflecting surfaces (i.e., top-down processing).

16.
Autism spectrum disorder is typically associated with social deficits and is often specifically linked to difficulty with processing faces and other socially relevant stimuli. Emerging research has suggested that children with autism might also have deficits in basic perceptual abilities, including multisensory processing (e.g., simultaneously processing visual and auditory inputs). The current study examined the relationship between multisensory temporal processing (assessed via a simultaneity judgment task in which participants reported whether a visual stimulus and an auditory stimulus occurred at the same time or at different times) and self-reported symptoms of autism (assessed via the Autism Spectrum Quotient questionnaire). Data from over 100 healthy adults revealed a relationship between these two factors: specifically, a stronger bias to perceive auditory stimuli occurring before visual stimuli as simultaneous was associated with greater levels of autistic symptoms. Additional data and analyses confirmed that this relationship is specific to multisensory processing and symptoms of autism. These results provide insight into the nature of multisensory processing while revealing a continuum over which perceptual abilities correlate with symptoms of autism, a continuum that is not specific to clinical populations but is present within the general population.

17.
Previous research has revealed the existence of perceptual mechanisms that compensate for slight temporal asynchronies between auditory and visual signals. We investigated whether temporal recalibration would also occur between auditory and tactile stimuli. Participants were exposed to streams of brief auditory and tactile stimuli presented in synchrony, or else with the auditory stimulus leading by 75 ms. After the exposure phase, the participants made temporal order judgments regarding pairs of auditory and tactile events occurring at varying stimulus onset asynchronies. The results showed that the minimal interval necessary to correctly resolve audiotactile temporal order was larger after exposure to the desynchronized streams than after exposure to the synchronous streams. This suggests the existence of a mechanism to compensate for audiotactile asynchronies that results in a widening of the temporal window for multisensory integration.
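A sketch of how such a widening of the temporal window can be quantified: fit a cumulative Gaussian to the post-exposure TOJ data from each exposure condition and compare the JNDs (and points of subjective simultaneity, PSS). All proportions below are hypothetical, not data from the study.

```python
# Minimal sketch: comparing audiotactile JNDs across exposure conditions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soa = np.array([-200, -120, -60, 0, 60, 120, 200])   # ms, audio minus touch onset
p_audio_first = {                                     # hypothetical response proportions
    "synchronous exposure":  np.array([.03, .10, .28, .50, .74, .90, .97]),
    "asynchronous exposure": np.array([.10, .20, .35, .50, .66, .80, .90]),
}
for condition, p in p_audio_first.items():
    (pss, sigma), _ = curve_fit(lambda x, m, s: norm.cdf(x, m, s), soa, p, p0=[0, 80])
    # A larger sigma (hence larger JND) indicates a wider temporal window.
    print(f"{condition}: PSS = {pss:.0f} ms, JND = {sigma * norm.ppf(0.75):.0f} ms")
```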

18.
The spatial register of the different receptive fields of multisensory neurons in the superior colliculus (SC) plays a significant role in determining the responses of these neurons to cross-modal stimulus combinations. Spatially coincident visual-auditory stimuli fall within these overlapping receptive fields and generally produce response enhancements that exceed the individual modality-specific responses and can exceed their sum. Yet, in this context, it has not been clear how "spatial coincidence" is operationally defined. Given the large size of SC receptive fields, visual and auditory stimuli could be within their respective receptive fields even when there are substantial spatial disparities between them. Indeed, previous observations have raised the possibility that there may be a second level of determinism in how SC neurons deal with the relative spatial locations of within-field cross-modal stimuli; specifically, that multisensory response enhancements become progressively weaker as the within-field visual and auditory stimuli become increasingly disparate. While the present experiments demonstrated that SC multisensory neurons have heterogeneous receptive fields, and that the greatest number of impulses was evoked by stimuli that fell within the area of cross-modal receptive field overlap, they also indicate that there is no systematic relationship between cross-modal stimulus disparity and the magnitude of multisensory response enhancement. Thus, two within-field cross-modal stimuli produced the same proportionate change (i.e., multisensory response enhancement) when they were widely disparate as when they overlapped one another in space. These observations indicate that cross-modal spatial coincidence can be defined operationally by the borders of an SC neuron's receptive fields, regardless of the size of those receptive fields and/or the absolute spatial disparity between within-field cross-modal stimuli.

19.
The present study used steady-state visual evoked potentials (SSVEPs), recorded in parallel with a task-relevant auditory/visual stimulation, to study the effects of intermodal and crossmodal spatial attention on visual processing. SSVEPs were elicited by task-irrelevant 10 Hz and 15 Hz pattern-reversing checkerboards. The participants were asked to respond to deviant transient stimuli on the attended side in the attended modality only. A phase-locking index (PLI) method was employed to characterize the SSVEPs. Both unimodal and crossmodal spatial attention resulted in an increase of PLI values over contralateral occipital brain regions. Intermodal attention effects were observed as an increase of the PLI over the same brain areas when the auditory rather than the visual modality was attended. These findings support recent hypotheses that phase resetting of brain activity in early sensory cortices is an essential mechanism of multisensory interaction.
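To illustrate the phase-locking index used above: the PLI is the length of the mean resultant vector of single-trial phases at the stimulation frequency, ranging from 0 (no locking) to 1 (perfect locking). The simulated signal, sampling rate, and trial count below are assumptions for illustration, not parameters from the study.

```python
# Minimal sketch: phase-locking index (PLI) at an SSVEP stimulation frequency.
import numpy as np

rng = np.random.default_rng(0)
fs, f_stim, n_trials, n_samples = 500, 15, 40, 1000   # Hz, Hz, trials, samples/trial

t = np.arange(n_samples) / fs
# Simulated trials: a 15 Hz response with a roughly constant phase plus noise.
trials = np.sin(2 * np.pi * f_stim * t + 0.3) + rng.normal(0, 1.0, (n_trials, n_samples))

# Extract the phase at the stimulation frequency, one value per trial (via FFT).
spectrum = np.fft.rfft(trials, axis=1)
freqs = np.fft.rfftfreq(n_samples, 1 / fs)
phases = np.angle(spectrum[:, np.argmin(np.abs(freqs - f_stim))])

pli = np.abs(np.mean(np.exp(1j * phases)))   # 1 = perfect phase locking, 0 = none
print(f"PLI at {f_stim} Hz: {pli:.2f}")
```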

20.
Four experiments investigated the effects of cross-modal attention between vision and touch in temporal order judgment tasks combined with a spatial cueing paradigm. In Experiment 1, two vibrotactile stimuli with simultaneous or successive onsets were presented bimanually to the left and right index fingers, and participants were asked to judge the temporal order of the two stimuli. The tactile stimuli were preceded by a spatially uninformative visual cue. Results indicated that the shift of spatial attention produced by the visual cue modulated the accuracy of the subsequent tactile temporal order judgment. However, this cueing effect disappeared when participants judged the simultaneity of the two stimuli instead of their temporal order (Experiment 2), or when the cue lead time between the visual cue and the stimuli was relatively long (Experiment 3). Experiment 4 replicated an effect of crossmodal attention on the direction of visual illusory line motion induced by a somatosensory cue (Shimojo, Miyauchi, & Hikosaka, 1997). These results demonstrate that substantial crossmodal links exist between vision and touch for covert exogenous orienting of attention.
