Similar literature
20 similar documents retrieved; search time: 46 ms
1.
The retinal image of an object does not contain information about its actual size. Size must instead be inferred from extraretinal cues, to which distance information makes an essential contribution. Asynchronies in the arrival time of the visual and auditory components of an audiovisual event can reliably cue its distance, although this cue has been largely neglected in vision research. Here we demonstrate that audiovisual asynchronies can produce a shift in the apparent size of an object, and we attribute this shift to a change in perceived distance. In the present study, participants were asked to match the perceived size of a test circle paired with an asynchronous sound to a variable-size probe circle paired with a simultaneous sound. The perceived size of the circle increased when the sound followed its onset with delays of up to around 100 ms. For longer sound delays and for sound leads, no effect was seen. We attribute this selective modulation of perceived visual size to audiovisual timing influences on the intrinsic relationship between size and distance. This previously unsuspected cue to distance reveals a surprisingly interactive system that uses multisensory information for size/distance perception.
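The physics behind this cue is simple: light arrives effectively instantaneously, while sound travels at roughly 343 m/s in air, so the audio lag of a natural audiovisual event grows linearly with its distance. A minimal sketch of that relation (the constant and function name are illustrative, not taken from the study):

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 °C

def implied_distance_m(audio_delay_s: float) -> float:
    """Distance at which a single audiovisual event would naturally
    produce the given audio lag (light travel time is negligible)."""
    return audio_delay_s * SPEED_OF_SOUND_M_S

# The ~100 ms delay at which the size effect peaked corresponds to an
# event roughly 34 m away.
```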

2.
A factor that is often not considered in multisensory research is the distance from which information is presented. Interestingly, various studies have shown that the distance at which information is presented can modulate the strength of multisensory interactions. In addition, our everyday multisensory experience in near and far space is rather asymmetrical in terms of retinal image size and stimulus intensity. This asymmetry results from the relation between the stimulus-observer distance and the stimulus's retinal image size and intensity: an object that is further away is generally smaller on the retina than the same object presented nearer. Similarly, auditory intensity decreases as the distance from the observer increases. We investigated how each of these factors alone, and their combination, affected audiovisual integration. Unimodal and bimodal stimuli were presented in near and far space, with and without controlling for distance-dependent changes in retinal image size and intensity. Audiovisual integration was enhanced for stimuli presented in far space compared to near space, but only when the stimuli were not corrected for visual angle and intensity. The same decrease in intensity and retinal size in near space did not enhance audiovisual integration, indicating that these results cannot be explained by changes in stimulus efficacy or by an increase in distance alone, but rather by an interaction between these factors. The results are discussed in the context of multisensory experience and spatial uncertainty, and they underline the importance of studying multisensory integration in depth space.
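The distance-size-intensity asymmetry described above follows two standard relations: the visual angle subtended by an object shrinks roughly inversely with viewing distance (exactly, via an arctangent), and free-field sound intensity falls off with the inverse square of distance. A hedged sketch of both (function names are ours, not the authors'):

```python
import math

def visual_angle_deg(object_size_m: float, distance_m: float) -> float:
    """Visual angle subtended by an object of a given physical size."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

def relative_intensity(distance_m: float, reference_m: float = 1.0) -> float:
    """Inverse-square falloff of sound intensity relative to a reference distance."""
    return (reference_m / distance_m) ** 2

# Doubling the distance roughly halves the visual angle (for small angles)
# and quarters the intensity -- the asymmetry the study had to control for.
```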

3.
Two objects that project the same visual angle on the retina can appear to occupy very different proportions of the visual field if they are perceived to be at different distances. What happens to the retinotopic map in primary visual cortex (V1) during the perception of these size illusions? Here we show, using functional magnetic resonance imaging (fMRI), that the retinotopic representation of an object changes in accordance with its perceived angular size. A distant object that appears to occupy a larger portion of the visual field activates a larger area in V1 than an object of equal angular size that is perceived to be closer and smaller. These results demonstrate that the retinal size of an object and the depth information in a scene are combined early in the human visual system.

4.
We examined the effect of temporal context on discrimination of intervals marked by auditory, visual and tactile stimuli. Subjects were asked to compare the duration of an interval immediately preceded by an irrelevant "distractor" stimulus with an interval with no distractor. For short interval durations, the presence of the distractor greatly affected the apparent duration of the test stimulus: short distractors caused the test interval to appear shorter, and vice versa. For very short reference durations (≤100 ms), the contextual effects were large, changing perceived duration by up to a factor of two. The effect of distractors diminished steadily for longer reference durations, reaching zero for durations greater than 500 ms. We found similar results for intervals defined by visual flashes, auditory tones and brief finger vibrations, all falling to zero effect at 500 ms. Under appropriate conditions, there were strong cross-modal interactions, particularly from audition to vision. We also measured Weber fractions for duration discrimination and showed that, under the conditions of this experiment, Weber fractions decreased steadily with duration, following a square-root law, similarly for all three modalities. The magnitude of the effect of the distractors on apparent duration correlated well with the Weber fraction, showing that when duration discrimination was relatively more precise, the context dependency was smaller. The results were well fit by a simple Bayesian model combining noisy estimates of duration with the action of a resonance-like mechanism that tended to regularize the sound sequence intervals.
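The pattern the authors report is what precision-weighted (Bayesian) cue combination predicts: noisy estimates of short intervals are pulled strongly toward the distractor-defined context, while more precise estimates of long intervals resist it. A minimal sketch of that logic, with an arbitrary noise constant `k` standing in for the square-root law (our assumption, not the fitted model):

```python
def combine(mu_test, sigma_test, mu_context, sigma_context):
    """Precision-weighted fusion of two noisy duration estimates:
    the test interval and the context set by the distractor."""
    w = sigma_context**2 / (sigma_test**2 + sigma_context**2)
    mu = w * mu_test + (1 - w) * mu_context
    sigma2 = (sigma_test**2 * sigma_context**2) / (sigma_test**2 + sigma_context**2)
    return mu, sigma2**0.5

def weber_sigma(duration_ms, k=3.0):
    """Square-root law: discrimination noise grows as sqrt(duration),
    so the Weber fraction sigma/duration falls as 1/sqrt(duration)."""
    return k * duration_ms**0.5

# Equal noise on test and context splits the difference; as the test
# estimate becomes more precise, the context's pull shrinks.
```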

5.
The temporal integration of stimuli in different sensory modalities plays a crucial role in multisensory processing. Previous studies using temporal-order judgments to determine the point of subjective simultaneity (PSS) with multisensory stimulation yielded conflicting results on modality-specific delays. While it is known that the relative intensities of stimuli from different sensory modalities affect their perceived temporal order, we hypothesized that some of these discrepancies might be explained by a previously overlooked confounding factor, namely the duration of the stimulus. We therefore studied the influence of both factors on the PSS in a spatial-audiovisual temporal-order task. In addition to confirming previous results on the role of stimulus intensity, we report that varying the temporal duration of an audiovisual stimulus pair also affects the perceived temporal order of the auditory and visual stimulus components. Although individual PSS values varied from negative to positive values across participants, we found a systematic shift of PSS values in all participants toward a common attractor value with increasing stimulus duration. This resulted in a stabilization of PSS values with increasing stimulus duration, indicative of a mechanism that compensates for individual imbalances between sensory modalities, which might arise from attentional biases toward one modality at short stimulus durations.

6.
Perceptual objects often comprise a visual and an auditory signature that arrive simultaneously through distinct sensory channels, and cross-modal features are linked by virtue of being attributed to a specific object. Continued exposure to cross-modal events sets up expectations about what a given object most likely "sounds" like, and vice versa, thereby facilitating object detection and recognition. The binding of familiar auditory and visual signatures is referred to as semantic multisensory integration. Whereas integration of semantically related cross-modal features is behaviorally advantageous, situations of sensory dominance of one modality at the expense of another impair performance. In the present study, magnetoencephalography recordings of semantically related cross-modal and unimodal stimuli captured the spatiotemporal patterns underlying multisensory processing at multiple stages. At early stages, 100 ms after stimulus onset, posterior parietal brain regions responded preferentially to cross-modal stimuli irrespective of task instructions or the degree of semantic relatedness between the auditory and visual components. As participants were required to classify cross-modal stimuli into semantic categories, activity in superior temporal and posterior cingulate cortices increased between 200 and 400 ms. As task instructions changed to incorporate cross-modal conflict, a process whereby auditory and visual components of cross-modal stimuli were compared to estimate their degree of congruence, multisensory processes were captured in parahippocampal, dorsomedial, and orbitofrontal cortices between 100 and 400 ms after stimulus onset. Our results suggest that multisensory facilitation is associated with posterior parietal activity as early as 100 ms after stimulus onset. However, as participants are required to evaluate cross-modal stimuli based on their semantic category or their degree of congruence, multisensory processes extend into cingulate, temporal, and prefrontal cortices.

7.
To perceive the relative positions of objects in the visual field, the visual system must assign a location to each stimulus. This assignment is determined by the object's retinal position, the direction of gaze, eye movements, and the motion of the object itself. Here we show that perceived location is also influenced by motion signals that originate in distant regions of the visual field. When a pair of stationary lines is flashed, straddling but not overlapping a rotating radial grating, the lines appear displaced in a direction consistent with that of the grating's motion, even when the lines are a substantial distance from the grating. The results indicate that motion's influence on position is not restricted to the moving object itself, and that even the positions of stationary objects are coded by mechanisms that receive input from motion-sensitive neurons.

8.
9.
10.
Passing objects from one hand to the other occurs frequently in our daily life. What kind of information about the weight of the object is transferred between the holding and the lifting hand? To examine this, we asked people to hold (and heft) an object in one hand and then pick it up with the other. The objects were presented in the context of a size–weight illusion: that is, two objects of different sizes but the same weight were used. One group of participants held one of the objects in their left hand and then picked it up with their right. Another group of participants simply picked up the objects from a table. Thus, the former group had on-line information about the weight of the object, whereas the latter did not. Both groups showed a strong and equivalent size–weight illusion throughout the experiment. At the same time, the group that lifted the objects from the hefting hand applied equal grip force to the small and large objects right from the start; in contrast, the group lifting the objects from the table initially applied more grip force to the large than to the small object before eventually applying the same force to both. In two additional groups, a delay period was imposed between the lifts of the first and second hands. The force parameters employed by these last two groups were virtually identical to those used by the group that lifted the object directly from the other hand. These results suggest that the initial calibration of grip force uses veridical information about the weight of the object provided by the other hand. This veridical information about weight is available on-line and is retained in memory for later access. The perceived weight of the object is essentially ignored in forming grasping forces.

11.
The present study tested the theory that inferotemporal cortex integrates 1) distance information transmitted via superior colliculus-pulvinar afferents, with 2) form information transmitted via striate-prestriate cortex afferents (Gross, 1973a, 1973b). Monkeys were trained to choose the larger of two objects, independent of distance, to obtain a reward. Based on the integration theory, the following predictions concerning this size constancy discrimination were made: 1) monkeys with pulvinar lesions, unable to code distance, should be impaired and adopt strategies based on retinal image size; and 2) monkeys with prestriate lesions, unable to code retinal image size, should be impaired and adopt strategies based on distance. Contrary to these predictions, pulvinar lesions produced no deficit; and although prestriate lesions did produce an impairment, it was due to a failure to code distance in assessing the true size of the object. Thus, monkeys with prestriate lesions consistently responded to retinal image size instead of object size. Replicating an earlier report (Humphrey and Weiskrantz, 1969), inferotemporal lesions also produced an impairment; however, errors made by monkeys with inferotemporal lesions were random and could not be attributed to any consistent strategy. All monkeys reacquired the discrimination postoperatively, indicating that there are multiple mechanisms available to the brain-damaged animal for the perception of size constancy.

12.
Normally we experience the visual world as stable. Ambiguous figures provide a fascinating exception: On prolonged inspection, the "Necker cube" undergoes a sudden, unavoidable reversal of its perceived front-back orientation. What happens in the brain when spontaneously switching between these equally likely interpretations? Does neural processing differ between an endogenously perceived reversal of a physically unchanged ambiguous stimulus and an exogenously caused reversal of an unambiguous stimulus? A refined EEG paradigm to measure such endogenous events uncovered an early electrophysiological correlate of this spontaneous reversal, a negativity beginning at 160 ms. Comparing across nine electrode locations suggests that this component originates in early visual areas. An EEG component of similar shape and scalp distribution, but 50 ms earlier, was evoked by an external reversal of unambiguous figures. Perceptual disambiguation seems to be accomplished by the same structures that represent objects per se, and to occur early in the visual stream. This suggests that low-level mechanisms play a crucial role in resolving perceptual ambiguity.

13.
In a focused attention paradigm, saccadic reaction time (SRT) to a visual target tends to be shorter when an auditory accessory stimulus is presented in close temporal and spatial proximity. Observed SRT reductions typically diminish as spatial disparity between the stimuli increases. Here a visual target LED (500 ms duration) was presented above or below the fixation point, and a simultaneously presented auditory accessory (2 ms duration) could appear at the same or the opposite vertical position. SRT enhancement was about 35 ms in the coincident and 10 ms in the disparate condition. In order to further probe the audiovisual integration mechanism, in addition to the auditory non-target an auditory masker (200 ms duration) was presented before, simultaneous to, or after the accessory stimulus. In all interstimulus interval (ISI) conditions, SRT enhancement decreased in both the coincident and the disparate configurations, but this decrement was fairly stable across the ISI values. If multisensory integration relied solely on a feed-forward process, one would expect a monotonic decrease of the masker effect with increasing ISI in the backward masking condition. It is therefore conceivable that the relatively high-energy masker causes a broad excitatory response of SC neurons. During this state, the spatial audiovisual information from multisensory association areas is fed back and merged with the spatially unspecific excitation pattern induced by the masker. Assuming that a certain threshold of activation has to be reached in order to generate a saccade in the correct direction, the blurred joint output of noise and spatial audiovisual information needs more time to reach this threshold, prolonging SRT to an audiovisual object.

14.
The integration of visual and auditory inputs in the human brain occurs only if the components are perceived in temporal proximity, that is, when the intermodal time difference falls within the so-called subjective synchrony range. We used the midpoint of this range to estimate the point of subjective simultaneity (PSS). We measured the PSS for audio-visual (AV) stimuli in a synchrony judgment task, in which subjects had to judge a given AV stimulus using three response categories (audio first, synchronous, video first). The relevant stimulus manipulation was the duration of the auditory and visual components. Results for unimodal auditory and visual stimuli have shown that the perceived onset shifts to relatively later positions with increasing stimulus duration. These unimodal shifts should be reflected in changing PSS values when AV stimuli with different durations of the auditory and visual components are used. The results for 17 subjects indeed showed a significant shift of the PSS for different duration combinations of the stimulus components. Because the shifts were approximately equal for duration changes in either of the components, no net shift of the PSS was observed as long as the durations of the two components were equal. This result indicates the need to appropriately account for unimodal timing effects when quantifying intermodal synchrony perception.
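With three response categories, the PSS can be read off as the midpoint of the synchrony range: the two stimulus-onset asynchronies (SOAs) at which the proportion of "synchronous" responses crosses a criterion. A linear-interpolation sketch of that estimate (the criterion value and example data are illustrative, not from the study):

```python
def pss_from_synchrony(soas, p_sync, criterion=0.5):
    """PSS as the midpoint of the synchrony range: find the SOAs where
    p('synchronous') crosses the criterion by linear interpolation
    between adjacent measured points, then average the two crossings."""
    crossings = []
    points = list(zip(soas, p_sync))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if (y0 - criterion) * (y1 - criterion) < 0:
            crossings.append(x0 + (criterion - y0) * (x1 - x0) / (y1 - y0))
    lo, hi = crossings[0], crossings[-1]
    return (lo + hi) / 2

# SOA convention: audio-lead negative, audio-lag positive (ms).
soas = [-200, -100, 0, 100, 200]
p_sync = [0.1, 0.6, 0.9, 0.7, 0.2]
```

With these example data the criterion crossings fall near -120 ms and +140 ms, giving a PSS of about +10 ms (a slight audio-lag bias).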

15.
Perceptual judgement, grasp point selection and object symmetry
Object symmetry is a visual attribute that may contribute to both perceptual judgement and action. We evaluated the effects of varying the physical symmetry of planar objects (presence versus absence) on both aspects. In Experiment 1, subjects estimated the magnitude of visually perceived symmetry of the objects. The results confirmed the influence of physical symmetry on perceived symmetry, and supported our binary categorisation of stimulus objects in terms of presence versus absence of physical symmetry. In Experiment 2, participants used a precision grip to grasp and stably lift the same planar objects varying in degree of symmetry. Choice of grasp points was unrestricted. Participants selected a grasp axis (between thumb and middle finger) that limited the perpendicular distance from the object's centre of mass (CM; the grasp-axis error) to just a few millimetres. Moreover, they took advantage of visual cues to object symmetry to better determine the CM, thus reducing their grasp-axis error for symmetric (vs. asymmetric) objects by 31%. We interpret these findings in terms of user and object-geometry constraints on grasp-point selection.
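The grasp-axis error measured in Experiment 2 is simply the perpendicular distance from the centre of mass to the line through the two grasp points, computable from 2-D digit coordinates. A small sketch (function name and coordinates are hypothetical):

```python
import math

def grasp_axis_error(thumb, finger, cm):
    """Perpendicular distance (same units as the input coordinates) from
    the centre of mass `cm` to the grasp axis through `thumb` and `finger`,
    via the point-to-line distance (cross-product) formula."""
    (x1, y1), (x2, y2), (cx, cy) = thumb, finger, cm
    num = abs((x2 - x1) * (y1 - cy) - (x1 - cx) * (y2 - y1))
    den = math.hypot(x2 - x1, y2 - y1)
    return num / den

# Grasp axis along the x-axis, CM 3 mm above it -> 3 mm grasp-axis error.
```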

16.
An afterimage looks larger when one fixates on a distant than on a closer surface. We show that the retinotopic activity in the primary visual cortex (V1) associated with viewing an afterimage is modulated by perceived size, even when the size of the retinal image remains constant. This suggests that V1 has an important role in size constancy when the viewing distance of the stimulus changes.

17.
Simultaneity is important in cross-modal information processing. However, it is still unclear how simultaneity is perceived between different sensory modalities. Various factors such as spatial location or attention are known to affect simultaneity judgments. In the present study, we focused on simultaneity judgments of dynamic events, and investigated what kinds of dynamic properties affect these judgments. We presented the deformation of a virtual object in vision and haptics with various stimulus onset asynchronies. Participants judged whether the deformation occurred simultaneously. We measured the effects of duration, velocity, and the amount of deformation on the visual-haptic simultaneity judgments. The results showed that the point of subjective simultaneity changed depending on the duration of deformation. For a shorter duration (400 ms), the visual deformation needed to precede the haptic one to be judged as simultaneous, while for longer durations (800 ms, 1200 ms) the asymmetry was diminished, suggesting that information relevant to the duration of the event was used for the visual-haptic simultaneity judgments of dynamic events.

18.
Converging evidence indicates that the medial temporal lobe participates not only in memory but also in visual object processing. We investigated hippocampal contributions to visual object identification by recording event-related potentials directly from within the hippocampus during a visual object identification task with spatially filtered pictures of real objects presented at different levels of filtering. Hippocampal responses differentiated between identified and unidentified visual objects within a time window of 200-900 ms after stimulus presentation: identified objects elicited a small negative component peaking around 300 ms (hippocampal-N300) and a large positive component around 650 ms (hippocampal-P600), while the N300 was increased and the P600 reduced in amplitude in response to unidentified objects. These findings demonstrate that the hippocampus proper contributes to the identification of visual objects, discriminating from very early on between identified and unidentified meaningful visual objects.

19.
Numerous studies have suggested that the CNV (contingent negative variation), a negative slow wave developing between a warning and an imperative stimulus, reflects, among other things, temporal processing of the interval between these two stimuli. One aim of the present work was to specify the relationship between CNV activity and perceived duration. A second aim was to establish whether this relationship is the same over the left and right hemispheres. Event-related potentials (ERPs) were recorded for 12 subjects performing a matching-to-sample task in which they had to determine whether the duration of a tone (490 ms, 595 ms, 700 ms, 805 ms, or 910 ms) matched that of a previously presented standard (700 ms). CNV activity measured at the FCZ electrode was shown to increase until the standard duration had elapsed. By contrast, right frontal activity increased until the end of the current test duration, even when the standard duration had elapsed. Moreover, for long test durations (805 ms and 910 ms), correlations were observed between CNV peak latency and the subjective standard over left and medial frontal sites. We propose that left and medial frontal activity reflects an accumulation of temporal information that stops once the memorized standard duration is over, while right frontal activity subserves anticipatory attention near the end of the stimulus.

20.
The acoustic startle reflex (ASR) is a transient motor response to an unexpected, intense stimulus. The response is determined by stimulus parameters such as intensity, rise time and duration. The dependence of the ASR on stimulus duration is more complex than would be assumed from the physical properties of the acoustic pulse. This effect has attracted the attention of only a few researchers. Some authors reported noticeable changes in ASR amplitude only for very short (less than 4-6 ms) acoustic pulses. Systematic studies of the effect, however, have not been performed so far. The purpose of this study was to determine to what extent ASR parameters are affected by the duration of a short stimulus. The amplitude of the acoustic startle reflex was assessed for a fixed tonal frequency (6.9 kHz) and for a variety of stimulus durations ranging between 2 and 10 ms. ASRs were studied in 11 adult hooded rats exposed to a sequence of tone pulses (110 dB SPL) of different durations, presented in random order, with or without 70 dB white noise as a background. Statistical analysis revealed significant differences between ASR amplitudes for different durations. The startle amplitude increased with acoustic pulse duration, and distinguishable differences were seen for stimulus durations between 2 and 8 ms. Further increases of pulse duration had no effect on ASR amplitude. The same pattern of changes was observed when the acoustic stimulus was presented with the white noise. In the tested range of stimulus durations, no significant differences in ASR latency were found. The observed differences may be attributed to changes in stimulus acoustic energy and to physiological characteristics of the auditory system in the rat.


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd.  京ICP备09084417号