Similar Articles (20 results)
1.
We adapted the crossmodal dynamic capture task to investigate the modulation of visuotactile crossmodal integration by unimodal visual perceptual grouping. The influence of finger posture on this interaction was also explored. Participants were required to judge the direction of a tactile apparent motion stream (moving either to the left or to the right) presented to their crossed or uncrossed index fingers. The participants were instructed to ignore a distracting visual apparent motion stream, comprised of either 2 or 6 lights presented concurrently with the tactile stimuli. More crossmodal dynamic capture of the direction of the tactile apparent motion stream by the visual apparent motion stream was observed in the 2-lights condition than in the 6-lights condition. This interaction was not modulated by finger posture. These results suggest that visual intramodal perceptual grouping constrains the crossmodal binding of visual and tactile apparent motion information, irrespective of finger posture.

2.
Perceptual grouping impairs temporal resolution
Performance on multisensory temporal order judgment (TOJ) tasks is enhanced when the sensory stimuli are presented at different locations rather than the same location. In our first experiment, we replicated this result for spatially separated stimuli within the visual modality. In Experiment 2, we investigated the effect of perceptual grouping on this spatial effect. Observers performed a visual TOJ task in which two stimuli were presented in a configuration that encouraged perceptual grouping or not (i.e., one- and two-object conditions respectively). Despite a constant spatial disparity between targets across the two conditions, a smaller just noticeable difference (i.e., better temporal resolution) was found when the two targets formed two objects than when they formed one. This effect of perceptual grouping persisted in Experiment 3 when we controlled for apparent motion by systematically varying the spatial distance between the targets. Thus, in contrast to the putative same-object advantage observed in spatial discrimination tasks, these findings indicate that perceptual grouping impairs visual temporal resolution.
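For readers unfamiliar with how the just noticeable difference is obtained in a TOJ task, the sketch below (illustrative Python, not the authors' analysis) fits a cumulative-Gaussian psychometric function to the proportion of "second target first" responses across stimulus onset asynchronies and reads off the JND as the SOA offset between the 50% and 75% points; all data values are hypothetical.

```python
# Hypothetical sketch: estimating PSS and JND from TOJ responses
# with a cumulative-Gaussian psychometric function. Data are illustrative.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, sigma):
    """P('target 2 first') as a cumulative Gaussian of SOA."""
    return norm.cdf(soa, loc=pss, scale=sigma)

soas = np.array([-120, -80, -40, 0, 40, 80, 120])              # ms
p_second_first = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.93, 0.97])

(pss, sigma), _ = curve_fit(psychometric, soas, p_second_first, p0=[0.0, 50.0])

# JND: SOA range between the 50% and 75% points (= sigma * z(0.75)).
jnd = sigma * norm.ppf(0.75)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```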

3.
Multisensory interactions between haptics and vision remain poorly understood. Previous studies have shown that shapes, such as letters of the alphabet, when drawn on the skin, are perceived differently depending on which body part is stimulated and on how the stimulated body part, such as the hand, is positioned. Another line of research within this area has investigated multisensory interactions. Tactile perceptions, for example, have the potential to disambiguate visually perceived information. While the former studies focused on explicit reports about tactile perception, the latter studies relied on fully aligned multisensory stimulus dimensions. In this study, we investigated to what extent rotating tactile stimulations on the hand affect directional visual motion judgments implicitly and without any spatial stimulus alignment. We show that directional tactile cues and ambiguous visual motion cues are integrated, thus biasing the judgment of visually perceived motion. We further show that the direction of the tactile influence depends on the position and orientation of the stimulated part of the hand relative to a head-centered frame of reference. Finally, we also show that the time course of the cue integration is very versatile. Overall, the results imply immediate directional cue integration within a head-centered frame of reference.

4.
We investigated perceptual learning in self-motion perception. Blindfolded participants were displaced leftward or rightward by means of a motion platform and asked to indicate the direction of motion. A total of eleven participants underwent 3,360 practice trials, distributed over 12 days (Experiment 1) or 6 days (Experiment 2). We found no improvement in motion discrimination in either experiment. These results are surprising since perceptual learning has been demonstrated for visual, auditory, and somatosensory discrimination. Improvements in the same task were found when visual input was provided (Experiment 3). The multisensory nature of vestibular information is discussed as a possible explanation of the absence of perceptual learning in darkness.

5.
The importance of multisensory integration for human behavior and perception is well documented, as is the impact that temporal synchrony has on driving such integration. Thus, the more temporally coincident two sensory inputs from different modalities are, the more likely they will be perceptually bound. This temporal integration process is captured by the construct of the temporal binding window—the range of temporal offsets within which an individual is able to perceptually bind inputs across sensory modalities. Recent work has shown that this window is malleable and can be narrowed via a multisensory perceptual feedback training process. In the current study, we seek to extend this by examining the malleability of the multisensory temporal binding window through changes in unisensory experience. Specifically, we measured the ability of visual perceptual feedback training to induce changes in the multisensory temporal binding window. Visual perceptual training with feedback successfully improved temporal visual processing, and more importantly, this visual training increased the temporal precision across modalities, which manifested as a narrowing of the multisensory temporal binding window. These results are the first to establish the ability of unisensory temporal training to modulate multisensory temporal processes, findings that can provide mechanistic insights into multisensory integration and which may have a host of practical applications.

6.
Assessing the intentions, direction, and velocity of others is necessary for most daily tasks, and such information is often made available by both visual and auditory motion cues. It is therefore not surprising that we are highly adept at perceiving human motion. Here, we explore the multisensory integration of cues to the walking speed of biological motion. After testing for audiovisual asynchronies (visual signals led auditory ones by 30 ms in simultaneity temporal windows of 76.4 ms), in the main experiment, visual, auditory, and bimodal stimuli were compared to a standard audiovisual walker in a velocity discrimination task. The reduction in variance conformed to optimal integration of congruent bimodal stimuli across all subjects. Interestingly, the perceptual judgements were still close to optimal for stimuli at the smallest level of incongruence. Comparison of slopes allows us to estimate an integration window of about 60 ms, which is smaller than that reported for audiovisual speech.
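For context, the "optimal integration" benchmark invoked here is the standard maximum-likelihood (minimum-variance) cue-combination rule; the formulation below is the textbook version, not an equation quoted from the paper.

```latex
\hat{s}_{AV} = w_V \hat{s}_V + w_A \hat{s}_A, \qquad
w_V = \frac{1/\sigma_V^{2}}{1/\sigma_V^{2} + 1/\sigma_A^{2}}, \quad w_A = 1 - w_V, \qquad
\sigma_{AV}^{2} = \frac{\sigma_V^{2}\,\sigma_A^{2}}{\sigma_V^{2} + \sigma_A^{2}} \le \min\!\left(\sigma_V^{2}, \sigma_A^{2}\right)
```

The predicted bimodal variance is never larger than the better unimodal variance, which is the variance-reduction signature the study tested for.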

7.
This study reports an experiment investigating the relative effects of intramodal, crossmodal and bimodal cues on visual and auditory temporal order judgements. Pairs of visual or auditory targets, separated by varying stimulus onset asynchronies, were presented to either side of a central fixation (±45°), and participants were asked to identify the target that had occurred first. In some of the trials, one of the targets was preceded by a short, non-predictive visual, auditory or audiovisual cue stimulus. The cue and target stimuli were presented at the exact same locations in space. The point of subjective simultaneity revealed a consistent spatiotemporal bias towards targets at the cued location. For the visual targets, the intramodal cue elicited the largest, and the crossmodal cue the smallest, bias. The bias elicited by the bimodal cue fell between the intramodal and crossmodal cue biases, with significant differences between all cue types. The pattern for the auditory targets was similar apart from a scaling factor and greater variance, so the differences between the cue conditions did not reach significance. These results provide evidence for multisensory integration in exogenous attentional cueing. The magnitude of the bimodal cueing effect was equivalent to the average of the facilitation elicited by the intramodal and crossmodal cues. Under the assumption that the visual and auditory cues were equally informative, this is consistent with the notion that exogenous attention, like perception, integrates multimodal information in an optimal way.
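The closing claim follows directly from the same maximum-likelihood weighting sketched above: if the two cues are equally informative (equal sigma), the optimal weights are both one half, so the bimodal cueing effect should equal the mean of the two unimodal effects. This is a standard consequence of the model, spelled out here for clarity rather than taken from the paper.

```latex
\sigma_V = \sigma_A \;\Rightarrow\; w_V = w_A = \tfrac{1}{2} \;\Rightarrow\;
\mathrm{bias}_{\mathrm{bimodal}} \approx \tfrac{1}{2}\left(\mathrm{bias}_{\mathrm{intramodal}} + \mathrm{bias}_{\mathrm{crossmodal}}\right)
```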

8.
Information from the different senses is seamlessly integrated by the brain in order to modify our behaviors and enrich our perceptions. It is only through the appropriate binding and integration of information from the different senses that a meaningful and accurate perceptual gestalt can be generated. Although a great deal is known about how such cross-modal interactions influence behavior and perception in the adult, little is known about the impact of aging on these multisensory processes. In the current study, we examined the speed of discrimination responses of aged and young individuals to the presentation of visual, auditory or combined visual-auditory stimuli. Although the presentation of multisensory stimuli speeded response times in both groups, the performance gain was significantly greater in the aged. Most strikingly, multisensory stimuli restored response times in the aged to the level that young subjects showed for the faster of the two unisensory stimuli (i.e., visual). The current results suggest that despite the decline in sensory processing that accompanies aging, the use of multiple sensory channels may represent an effective compensatory strategy to overcome these unisensory deficits.

9.
Research on multisensory interactions has shown that the perceived timing of a visual event can be captured by a temporally proximal sound. This effect has been termed the 'temporal ventriloquism effect.' Using the Ternus display, we systematically investigated how auditory configurations modulate visual apparent-motion percepts. The Ternus display involves a multielement stimulus that can induce either of two different percepts of apparent motion: 'element motion' or 'group motion'. We found that two sounds presented in temporal proximity to, or synchronously with, the two visual frames, respectively, can shift the transitional threshold for visual apparent motion (Experiments 1 and 3). However, such effects were not evident with single-sound configurations (Experiment 2). A further experiment (Experiment 4) provided evidence that time-interval information is an important factor in the crossmodal interaction underlying the audiovisual Ternus effect. The auditory interval was perceived as longer than the same physical visual interval in the sub-second range. Furthermore, the perceived audiovisual interval could be predicted by optimal integration of the visual and auditory intervals.

10.
Neurophysiological studies in animals have shown that a sudden sound enhances the perceptual processing of subsequent visual stimuli. In the present study, we explored the possibility that such enhancement also exists in humans and can be explained through crossmodal integration effects, whereby the interaction occurs at the level of bimodal neurons. Subjects were required to detect visual stimuli in a unimodal visual condition or in crossmodal audio-visual conditions. The spatial and temporal proximity of the multisensory stimuli were systematically varied. An enhancement of perceptual sensitivity (d') for luminance detection was found when the audiovisual stimuli followed the fairly clear spatial and temporal rules governing multisensory integration at the neuronal level.
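The sensitivity measure d' mentioned above comes from signal detection theory. As a brief illustration (hypothetical response counts, standard Python, not the study's data or code), d' is the difference between the z-transformed hit and false-alarm rates:

```python
# Hypothetical sketch: computing d' (sensitivity) and criterion from
# hit and false-alarm rates. All counts are illustrative.
from scipy.stats import norm

hits, misses = 45, 15               # signal-present trials
false_alarms, correct_rej = 8, 52   # signal-absent trials

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rej)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
```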

11.
We process information from the world through multiple senses, and the brain must decide what information belongs together and what information should be segregated. One challenge in studying such multisensory integration is how to quantify the multisensory interactions, a challenge that is amplified by the host of methods now used to measure neural, behavioral, and perceptual responses. Many of the measures that have been developed to quantify multisensory integration (and which have been derived from single-unit analyses) have been applied to these different measures without much consideration for the nature of the process being studied. Here, we provide a review focused on the means with which experimenters quantify multisensory processes and integration across a range of commonly used experimental methodologies. We emphasize the most commonly employed measures, including single- and multiunit responses, local field potentials, functional magnetic resonance imaging, and electroencephalography, along with behavioral measures of detection, accuracy, and response times. In each section, we discuss the different metrics commonly used to quantify multisensory interactions, including the rationale for their use, their advantages, and the drawbacks and caveats associated with them. Also discussed are possible alternatives to the most commonly used metrics.
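Two of the single-unit metrics this literature most often refers to are the multisensory enhancement index (percentage gain of the multisensory response over the best unisensory response) and an additivity ratio (combined response relative to the sum of the unisensory responses). A minimal illustrative sketch follows; the spike counts are hypothetical and the definitions are the commonly used ones, not formulas quoted from this review.

```python
# Hypothetical sketch: common single-unit metrics for multisensory integration.
# Values are illustrative only.
resp_v = 4.0    # mean response, visual alone (e.g., spikes per trial)
resp_a = 3.0    # mean response, auditory alone
resp_av = 9.5   # mean response, combined audiovisual

# Multisensory enhancement: % change relative to the best unisensory response.
best_uni = max(resp_v, resp_a)
enhancement = 100.0 * (resp_av - best_uni) / best_uni

# Additivity ratio: > 1 superadditive, == 1 additive, < 1 subadditive.
additivity = resp_av / (resp_v + resp_a)

print(f"enhancement = {enhancement:.0f}%, additivity ratio = {additivity:.2f}")
```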

12.
The coding of body part location may depend upon both visual and proprioceptive information, and allows targets to be localized with respect to the body. The present study investigates the interaction between visual and proprioceptive localization systems under conditions of multisensory conflict induced by optokinetic stimulation (OKS). Healthy subjects were asked to estimate the apparent motion speed of a visual target (LED) that could be located either in the extrapersonal space (visual encoding only, V), or at the same distance, but stuck on the subject’s right index finger-tip (visual and proprioceptive encoding, V–P). Additionally, the multisensory condition was performed with the index finger kept in position both passively (V–P passive) and actively (V–P active). Results showed that the visual stimulus was always perceived to move, irrespective of its out- or on-the-body location. Moreover, this apparent motion speed varied consistently with the speed of the moving OKS background in all conditions. Surprisingly, no differences were found between V–P active and V–P passive conditions in the speed of apparent motion. The persistence of the visual illusion during the active posture maintenance reveals a novel condition in which vision totally dominates over proprioceptive information, suggesting that the hand-held visual stimulus was perceived as a purely visual, external object despite its contact with the hand.

13.
Recent research suggests that multisensory integration may occur at an early phase in sensory processing and within cortical regions traditionally thought to be exclusively unisensory. Evidence from perceptual and electrophysiological studies indicates that the crossmodal temporal correspondence of multisensory stimuli plays a fundamental role in the cortical integration of information across separate sensory modalities. Further, oscillatory neural activity in sensory cortices may provide the principal mechanism whereby sensory information from separate modalities is integrated.

14.
In real life, the human brain typically receives information through both visual and auditory channels and must process this multisensory information, yet studies on the integrative processing of dynamic visual and auditory information are relatively few. In this paper, we designed an experiment in which real-world videos of common scenarios, with matched and mismatched actions (images) and sounds, served as stimuli; using event-related potential (ERP) methods, we studied how the human brain integrates synchronized visual and auditory information in videos of real-world events. Experimental results showed that videos with mismatched actions (images) and sounds elicited a larger P400 than videos with matched actions (images) and sounds. We believe that the P400 waveform may be related to the cognitive integration of mismatched multisensory information in the human brain. The results also indicated that synchronized multisensory streams interfere with each other, which influences the outcome of the cognitive integration process.

15.
Ketamine is a selective NMDA glutamate receptor antagonist that disrupts cognitive and behavioral function. Evidence exists that NMDA receptors play a role in lateral cortical connections, suggesting involvement in integrating information across the cortex. To investigate NMDA receptors' role in cortical integration at a perceptual level, psychophysical measures were made of perceptual grouping, which requires global analysis of neural representations of stimulus elements. Rats were trained to discriminate solid lines as well as patterns of dots that could be perceptually grouped into vertical or horizontal stripes. Psychophysical measures determined thresholds of perceptual grouping capacities. Rats receiving maximum subanesthetic doses of Ketamine discriminated solid patterns normally, but were impaired on dot pattern discrimination when greater demands were placed on perceptual grouping. These results demonstrate a selective disruption by Ketamine of visual discrimination that requires perceptual grouping of stimulus patterns. These results also provide evidence associating NMDA receptor-dependent neural mechanisms with context-dependent perceptual function.

16.
People treated for bilateral congenital cataracts offer a model for studying the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating integrated processing of modality-specific information. This finding contrasts with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple, short stimuli do not depend on the availability of visual and/or crossmodal input from birth.
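Redundancy gains are usually evaluated against the race-model inequality: if the cumulative RT distribution for redundant targets exceeds the summed unisensory distributions at any latency, probability summation alone cannot explain the gain and coactivation is inferred. Below is a minimal sketch of that test in Python; the reaction times are made up and this is not the authors' analysis pipeline.

```python
# Hypothetical sketch of a race-model-inequality test (Miller-style):
# coactivation is suggested where G_av(t) > G_a(t) + G_v(t). RTs are illustrative.
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times on a common time grid."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

rt_a  = [310, 325, 340, 360, 390, 410]   # auditory-only trials (ms)
rt_v  = [295, 315, 330, 350, 370, 400]   # visual-only trials (ms)
rt_av = [255, 270, 285, 300, 320, 345]   # redundant audiovisual trials (ms)

t = np.arange(250, 421, 10)
bound = np.minimum(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
violation = ecdf(rt_av, t) - bound

print("max race-model violation:", violation.max())   # > 0 suggests coactivation
```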

17.
This study aimed to identify neural mechanisms that underlie perceptual learning in a visual-discrimination task. We trained two monkeys (Macaca mulatta) to determine the direction of visual motion while we recorded from their middle temporal area (MT), which in trained monkeys represents motion information that is used to solve the task, and lateral intraparietal area (LIP), which represents the transformation of motion information into a saccadic choice. During training, improved behavioral sensitivity to weak motion signals was accompanied by changes in motion-driven responses of neurons in LIP, but not in MT. The time course and magnitude of the changes in LIP correlated with the changes in behavioral sensitivity throughout training. Thus, for this task, perceptual learning does not appear to involve improvements in how sensory information is represented in the brain, but rather how the sensory representation is interpreted to form the decision that guides behavior.

18.
Previous studies using auditory sequences with rapid repetition of tones revealed that spatiotemporal cues and spectral cues are important for fusing or segregating sound streams. However, the perceptual grouping was partially driven by cognitive processing of the periodicity cues of the long sequence. Here, we investigate whether such perceptual groupings (spatiotemporal grouping vs. frequency grouping) also apply to short auditory sequences, where auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To answer this question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B and C), with each speaker consecutively emitting one sound, so that the display consisted of two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors. We implemented three 'within-frame intervals' (WFIs, or intervals between A and B, and between B and C), seven 'inter-frame intervals' (IFIs, or intervals between AB and BC) and two different speaker layouts (inter-distance of speakers: near or far). Experiment 2 manipulated the difference in frequency between the two auditory frames, in addition to the spatiotemporal cues of Experiment 1. Listeners were required to make two-alternative forced choices (2AFC) to report the perception of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (materialized by the perceptual decisions on the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events. Spatial layout played a lesser role in perceptual organization. These results can be accounted for by the 'peripheral channeling' theory.

19.
Repetition blindness (RB) is a visual deficit, wherein observers fail to perceive the second occurrence of a repeated item in a rapid serial visual presentation stream. Chen and Yeh (Psychon Bull Rev 15:404–408, 2008) recently observed a reduction of the RB effect when the repeated items were accompanied by two sounds. The current study further manipulated the pitch of the two sounds (same versus different) in order to examine whether this cross-modal facilitation effect is caused by the multisensory enhancement of the visual event by sound, or multisensory Gestalt (perceptual grouping) of a new representation formed by combining the visual and auditory inputs. The results showed robust facilitatory effects of sound on RB regardless of the pitch of the sounds (Experiment 1), despite an effort to further increase the difference in pitch (Experiment 2). Experiment 3 revealed a close link between participants’ awareness of pitch and the effect of pitch on the RB effect. We conclude that the facilitatory effect of sound on RB results from multisensory enhancement of the perception of visual events by auditory signals.

20.
Many perceptual cue combination studies have shown that humans can integrate sensory information across modalities, as well as within a modality, in a manner that is close to optimal. While the limits of sensory cue integration have been extensively studied in the context of perceptual decision tasks, the evidence obtained in the context of motor decisions provides a less consistent picture. Here, we studied the combination of visual and haptic information in the context of human arm movement control. We implemented a pointing task in which human subjects pointed at an invisible target whose vertical position varied randomly across trials. In each trial, we presented a haptic and a visual cue that provided noisy information about the target position halfway through the reach. We measured pointing accuracy as a function of haptic and visual cue onset and compared pointing performance to the predictions of a multisensory decision model. Our model accounts for pointing performance by computing the maximum a posteriori estimate, assuming minimum-variance combination of uncertain sensory cues. Synchronicity of cue onset has previously been demonstrated to facilitate the integration of sensory information. We tested this in trials in which visual and haptic information was presented with a temporal disparity. We found that, for our sensorimotor task, temporal disparity between the visual and haptic cues had no effect. Sensorimotor learning appears to use all available information and to apply the same near-optimal rules for cue combination that are used by perception.
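The kind of decision model described here combines Gaussian cues (and optionally a Gaussian prior over target height) by precision weighting; the maximum a posteriori estimate is then the minimum-variance combination. The Python sketch below illustrates that computation with made-up parameter values; it is not the authors' model code.

```python
# Hypothetical sketch: MAP / minimum-variance combination of a visual and a
# haptic cue about a target's vertical position. All numbers are illustrative.
def map_estimate(x_v, var_v, x_h, var_h, mu_prior=0.0, var_prior=float("inf")):
    """Combine Gaussian cues (and an optional Gaussian prior) by precision weighting."""
    precisions = [1.0 / var_v, 1.0 / var_h]
    values = [x_v, x_h]
    if var_prior != float("inf"):
        precisions.append(1.0 / var_prior)
        values.append(mu_prior)
    total_precision = sum(precisions)
    estimate = sum(p * x for p, x in zip(precisions, values)) / total_precision
    return estimate, 1.0 / total_precision   # posterior mean and variance

est, post_var = map_estimate(x_v=2.0, var_v=1.0, x_h=3.0, var_h=4.0)
print(est, post_var)   # 2.2, 0.8 -- lower variance than either cue alone
```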

