Similar Literature

1.
Psychophysical evidence in humans indicates that localization is different for stationary flashed and coherently moving objects. To address how the primary visual cortex represents object position we used a population approach that pools spiking activity of many neurones in cat area 17. In response to flashed stationary squares (0.4 deg) we obtained localized activity distributions in visual field coordinates, which we referred to as profiles across a 'population receptive field' (PRF). Here we show how motion trajectories can be derived from activity across the PRF and how the representation of moving and flashed stimuli differs in position. We found that motion was represented by peaks of population activity that followed the stimulus with a speed-dependent lag. However, time-to-peak latencies were shorter by ~16 ms compared to the population responses to stationary flashes. In addition, motion representation showed a directional bias, as latencies were more reduced for peripheral-to-central motion compared to the opposite direction. We suggest that a moving stimulus provides 'preactivation' that allows more rapid processing than for a single flash event.
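One standard way to read a position estimate out of a pooled population profile like the PRF is a spike-count-weighted centroid over receptive-field centers. The sketch below only illustrates that readout: the neuron count, tuning widths, rates, and function names are invented, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative population: 100 neurons whose RF centers tile 10 deg of visual field.
rf_centers = np.linspace(0.0, 10.0, 100)   # deg
rf_width = 1.5                             # deg, Gaussian tuning s.d. (invented)

def population_response(stim_pos_deg):
    """Poisson spike counts to a stimulus, with Gaussian position tuning."""
    rates = 20.0 * np.exp(-0.5 * ((rf_centers - stim_pos_deg) / rf_width) ** 2)
    return rng.poisson(rates)

def decode_position(counts):
    """Spike-count-weighted centroid over RF centers (center-of-gravity readout)."""
    return np.sum(counts * rf_centers) / np.sum(counts)

counts = population_response(4.2)
print(f"decoded position: {decode_position(counts):.2f} deg")   # close to 4.2
```

Comparing such a decoded peak to the true stimulus position frame by frame is the kind of analysis that yields the speed-dependent lags and latency differences reported above.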

2.
A basic function of the visual system is to estimate the location of objects. Among other sensory inputs, the coding of an object's position involves the integration of visual motion, such as that produced by other moving patterns in the scene. Psychophysical evidence has shown that motion signals can shift, in the direction of motion, both the perceived position and the directed action to a stationary object. The neural mechanisms that sustain this effect are generally assumed to be mediated by feedback circuits from the middle temporal area to the primary visual cortex. However, evidence from neural responses is lacking. We used measures of ERPs and Granger causality analysis (a tool to predict the causal connectivity of two brain responses) to unravel the circuit by which motion influences position coding. We found that the motion-induced hand shift is tightly related to a neural delay: participants with larger shifts of the pointing location presented slower sensory processing, in terms of longer peak latencies of the primary visual evoked potentials. We further identified early neural activity in the vicinity of the extrastriate cortex as the cause of this delay, which likely reflects the early processing of motion signals in position coding. These results suggest the rapid transfer of visual motion through feedforward circuits as a putative neural substrate in charge of the motion-induced shift in reaching.
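Granger causality asks whether the past of one signal improves prediction of another beyond that signal's own past. Below is a minimal sketch of such a test on synthetic data, using the grangercausalitytests function from statsmodels; it illustrates the general tool named above, not the study's actual ERP analysis.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)

# Synthetic example: y lags x by 2 samples, so x should Granger-cause y.
n = 500
x = rng.standard_normal(n)
y = 0.8 * np.roll(x, 2) + 0.2 * rng.standard_normal(n)

# The test asks whether the SECOND column helps predict the FIRST.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=4)
f_stat, p_value = res[2][0]["ssr_ftest"][:2]   # F-test at lag 2
print(f"p(x Granger-causes y, lag 2) = {p_value:.3g}")
```

In the study, the same logic is applied to scalp ERP responses to ask whether early extrastriate activity predicts the delayed striate response.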

3.
It is now emerging that vision is usually limited by object spacing rather than size. The visual system recognizes an object by detecting and then combining its features. 'Crowding' occurs when objects are too close together and features from several objects are combined into a jumbled percept. Here, we review the explosion of studies on crowding (in grating discrimination, letter and face recognition, visual search, selective attention, and reading) and find a universal principle, the Bouma law. The critical spacing required to prevent crowding is equal for all objects, although the effect is weaker between dissimilar objects. Furthermore, critical spacing at the cortex is independent of object position, and critical spacing at the visual field is proportional to object distance from fixation. The region where object spacing exceeds critical spacing is the 'uncrowded window'. Observers cannot recognize objects outside of this window, and its size limits the speed of reading and search.
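The proportionality stated above (critical spacing grows with distance from fixation) is compact enough to compute directly. The constant below uses the commonly cited Bouma value of roughly 0.5, which the abstract itself does not fix; the function names are illustrative.

```python
def critical_spacing(eccentricity_deg, b=0.5):
    """Bouma's law: center-to-center spacing needed to escape crowding,
    proportional to distance from fixation (b ~ 0.5 is the commonly
    cited constant, not a value given in the abstract)."""
    return b * eccentricity_deg

def uncrowded_radius(spacing_deg, b=0.5):
    """Eccentricity out to which objects at a given spacing stay uncrowded,
    i.e. the radius of the 'uncrowded window'."""
    return spacing_deg / b

print(critical_spacing(8.0))   # a letter 8 deg out needs ~4 deg of clear space
print(uncrowded_radius(2.0))   # 2-deg letter spacing stays readable out to ~4 deg
```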

4.
When two stationary visual objects appear in alternating sequence, they evoke the perception of a single object moving back and forth between them. This is known as stroboscopic or apparent motion and forms the basis of perceived continuity in, for example, motion pictures. When the spatiotemporal separation between the inducing objects is optimal, the subjective appearance of apparent motion is nearly indistinguishable from that of real motion. Here we report that the detection and identification of a simple visual form in the path of apparent motion is impaired by the illusory perception of an object moving through the empty space between the locations at which the inducing objects are presented. This observation may be a manifestation of perceptual completion or 'filling in' during apparent motion perception. We propose that feedback from higher to lower visual cortical areas activates an explicit neural representation of a moving object, which can then disrupt the representation of visual stimuli in the path of the movement.

5.
The responses of visual movement-sensitive neurons in the anterior superior temporal polysensory area (STPa) of monkeys were studied during object-motion, ego-motion and during both together. The majority of the cells responded only to the image of a moving object against a stationary background and failed to respond to the retinal movement of the same object (against the same background) caused by the monkey's ego-motion. All the tested cells continued responding to the object-motion during ego-motion in the opposite direction. By contrast, most cells failed to respond to the motion of an object when the observer and object moved at the same speed and direction (eliminating observer-relative motion cues). The results indicate that STPa cells compute motion relative to the observer and suggest an influence of reference signals (vestibular, somatosensory or retinal) in the discrimination of ego- and object-motion. The results extend observations indicating that STPa cells are selective for visual motion originating from the movements of external objects and unresponsive to retinal changes correlated with the observer's own movements.

6.
The position of a drifting sine-wave grating enveloped by a stationary Gaussian is misperceived in the direction of motion. Previous research indicated that the illusion was larger when observers pointed to the center of the stimulus than when they indicated the stimulus position on a ruler. This conclusion was reexamined. Observers pointed to the position of a small Gabor patch on the screen or compared its position to moving patches, stationary lines, or flashed lines. With moving patches, the illusion was larger with probe than with motor judgments; with stationary lines, the illusion was about the same size; and with flashed lines, the illusion was smaller with probe than with motor judgments. Thus, the comparison between perceptual and motor measures depended strongly on the methods used. Further, the target was mislocalized toward the fovea with motor judgments, whereas the target was displaced away from the fovea relative to line probes.

7.
When both stationary and moving objects are present in the visual field, localizing objects in space may become difficult, as shown by illusory phenomena such as the Fröhlich effect and the flash-lag effect. Despite the efforts to decipher how motion and position information are combined to form a coherent visual representation, a unitary picture is still lacking. In the flash-lag effect, a flash presented in alignment with a moving stimulus is perceived to lag behind it. We investigated whether this relative spatial localization (i.e., judging the position of the flash relative to that of the moving stimulus) is the result of a linear combination of two absolute localization mechanisms, that is, the coding of the flash position in space and the coding of the position of the moving stimulus in space. In three experiments we showed that (a) the flash is perceived to be shifted in the direction of motion; (b) the moving stimulus is perceived to be ahead of its physical position, the forward shift being larger than that of the flash; (c) the linear combination of these two shifts is quantitatively equivalent to the flash-lag effect, which was measured independently. The results are discussed in relation to perceptual and motor localization mechanisms.
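Finding (c) reduces to arithmetic on the two absolute mislocalizations: the relative flash-lag is the forward shift of the moving stimulus minus the forward shift of the flash. A toy check with invented magnitudes (the paper reports its own values):

```python
# Invented example values, in degrees of visual angle:
flash_shift = 0.3    # (a) flash mislocalized in the motion direction
moving_shift = 1.1   # (b) moving stimulus mislocalized ahead of its position

# (c) the relative judgment is the difference of the two absolute shifts:
flash_lag = moving_shift - flash_shift
print(f"predicted flash-lag: {flash_lag:.1f} deg")   # 0.8 deg
```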

8.
In stationary flight Drosophila melanogaster produces yaw torque in response to visual movement stimuli. The residual optomotor yaw torque response of the mutant optomotor-blind^H31 (omb), which lacks the horizontal (HS) and vertical (VS) giant fibers in the lobula plate, differs from that of wild-type in several aspects: it is restricted to the frontal visual field, it is only elicited by front-to-back motion and appears to be mediated by a different set of elementary movement detectors (EMDs). Using a single black stripe as motion stimulus the torque response is, even in wild-type flies, dominated by the frontal visual field and by front-to-back motion. We thus propose that Drosophila's optomotor yaw control is organized as two partially parallel subunits. The component still displayed by omb is called "object response"; the component missing in the mutant (which is presumably mediated by the giant HS-cells in the wild-type) is called "large field response". Several properties of the object response are described.
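The elementary movement detectors (EMDs) invoked here are conventionally modeled as Hassenstein-Reichardt correlators: each half-detector multiplies one photoreceptor's delayed signal with its neighbour's undelayed signal, and the two mirror-symmetric halves are subtracted. A minimal sketch of that textbook form (not the paper's model; names and the delay value are illustrative):

```python
import numpy as np

def reichardt_emd(left, right, delay=1):
    """Hassenstein-Reichardt correlator. Positive output signals
    left-to-right (front-to-back) motion across the two inputs."""
    d_left = np.roll(left, delay)    # delayed copy of the left input
    d_right = np.roll(right, delay)  # delayed copy of the right input
    return d_left * right - d_right * left

# An edge reaching the left photoreceptor one time step before the right:
t = np.arange(20)
left = (t >= 5).astype(float)
right = (t >= 6).astype(float)
print(reichardt_emd(left, right).sum())   # > 0: front-to-back motion detected
```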

9.
It is traditional to believe that neurons in primary visual cortex are sensitive only or principally to stimulation within a spatially restricted receptive field (classical receptive field). It follows from this that they should only be capable of encoding the direction of stimulus movement orthogonal to the local contour, since this is the only information available in their classical receptive field "aperture." This direction is not necessarily the same as the motion of the entire object, as the direction cue within an aperture is ambiguous with respect to the global direction of motion, which can only be derived by integrating with unambiguous components of the object. Recent results, however, show that primary visual cortex neurons can integrate spatially and temporally distributed cues outside the classical receptive field, and so we reexamined whether primary visual cortex neurons suffer from the "aperture problem." Using an optimally oriented bar drifting across the classical receptive field in different global directions, we show here that a subpopulation of primary visual cortex neurons (25/81) recorded from anesthetized and paralyzed marmosets is capable of integrating informative unambiguous direction cues presented by the bar ends, well outside their classical receptive fields, to encode global motion direction. Although the stimuli within the classical receptive field were identical, their directional responses were significantly modulated according to the global direction of stimulus movement. Hence, some primary visual cortex neurons are not local motion energy filters, but may encode signals that contribute directly to global motion processing.
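The aperture problem named above has a compact geometric statement: within a small aperture, only the velocity component normal to the contour is measurable, v_perp = (v . n)n. A sketch of that standard formula (textbook geometry, not code from the study):

```python
import numpy as np

def aperture_velocity(v, contour_orientation_deg):
    """Velocity component recoverable through a small aperture:
    the projection of the true velocity v onto the contour normal n."""
    theta = np.deg2rad(contour_orientation_deg)
    n = np.array([-np.sin(theta), np.cos(theta)])   # unit normal to the contour
    return np.dot(v, n) * n

# A bar oriented at 45 deg translating rightward at 1 deg/s:
v_true = np.array([1.0, 0.0])
print(aperture_velocity(v_true, 45.0))   # [0.5, -0.5]: differs from true motion
```

Unambiguous 2-D features such as the bar ends do not suffer this projection, which is why integrating them recovers the global motion direction.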

10.
It is well known that the detection thresholds for stationary auditory and visual signals are lower if the signals are presented bimodally rather than unimodally, provided the signals coincide in time and space. Recent work on auditory–visual motion detection suggests that the facilitation seen for stationary signals is not seen for motion signals. We investigate the conditions under which motion perception also benefits from the integration of auditory and visual signals. We show that the integration of cross-modal local motion signals that are matched in position and speed is consistent with thresholds predicted by a neural summation model. If the signals are presented in different hemi-fields, move in different directions, or both, then behavioural thresholds are predicted by a probability-summation model. We conclude that cross-modal signals have to be co-localised and co-incident for effective motion integration. We also argue that facilitation is only seen if the signals contain all localisation cues that would be produced by physical objects.
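The two benchmark predictions contrasted here can be written in a few lines. These are the textbook forms, probability summation over independent detectors versus pooling of sensitivities before the decision; the paper's fitted models may differ in detail.

```python
import numpy as np

def probability_summation(p_a, p_v):
    """Independent detectors: respond if either modality alone detects."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_v)

def neural_summation(d_a, d_v):
    """Signals pooled before the decision: sensitivities (d') combine.
    Linear addition shown; quadratic pooling np.hypot(d_a, d_v) is
    another common choice."""
    return d_a + d_v

print(probability_summation(0.5, 0.5))   # 0.75: modest statistical gain
print(neural_summation(1.0, 1.0))        # 2.0: larger gain from true integration
```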

11.
Reaching toward a visual target involves the transformation of visual information into appropriate motor commands. Complex movements often occur either while we are moving or when objects in the world move around us, thus changing the spatial relationship between our hand and the space in which we plan to reach. This study investigated whether rotation of a wide field-of-view immersive scene produced by a virtual environment affected online visuomotor control during a double-step reaching task. A total of 20 seated healthy subjects reached for a visual target that remained stationary in space or unpredictably shifted to a second position (either to the right or left of its initial position) with different inter-stimulus intervals. Eleven subjects completed two experiments which were similar except for the duration of the target's appearance: the final target was either visible throughout the entire trial or only for a period of 200 ms. Movements were performed under two visual field conditions: the virtual scene was either matched to the subject's head motion or rolled about the line of sight counterclockwise at 130°/s. Nine additional subjects completed a third experiment in which the direction of the rolling scene was manipulated (i.e., clockwise and counterclockwise). Our results showed that while all subjects were able to modify their hand trajectory in response to the target shift with both visual scenes, some of the double-step movements contained a pause prior to modifying trajectory direction. Furthermore, both the timing and kinematic adjustments of the reach, in planning as well as execution, were affected by roll motion of the scene. Changes in the proportion of trajectory types and significantly longer pauses during the reach in the presence of roll motion suggest that background roll motion mainly interfered with the ability to update the visuomotor response to the target displacement. The reach was also affected differentially by the direction of roll motion: subjects showed a stronger effect of visual motion on movements taking place in the direction of visual roll (e.g., leftward movements during counterclockwise roll). Further investigation of the hand path revealed significant changes during roll motion in both the area and shape of the 95% tolerance ellipses constructed from the hand position following termination of the main movement. These changes corresponded with a hand drift, suggesting that subjects relied more on proprioceptive information to estimate arm position in space during roll motion of the visual field. We conclude that both the spatial and temporal kinematics of the reach were affected by the motion of the visual field, suggesting interference with the ability to simultaneously process two consecutive stimuli.
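A common recipe for the 95% ellipses mentioned above builds them from the sample covariance of the 2-D hand endpoints, with axes along the covariance eigenvectors scaled by a chi-square quantile. The sketch below shows that construction; it is one standard method, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy.stats import chi2

def ellipse_95(points):
    """95% ellipse of 2-D positions: semi-axes, orientation (rad), area."""
    cov = np.cov(points, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)            # eigenvalues ascending
    k = chi2.ppf(0.95, df=2)                      # ~5.991 for 2 dof
    semi_axes = np.sqrt(k * evals)
    angle = np.arctan2(evecs[1, 1], evecs[0, 1])  # major-axis orientation
    area = np.pi * semi_axes[0] * semi_axes[1]
    return semi_axes, angle, area

rng = np.random.default_rng(2)
endpoints = rng.multivariate_normal([0, 0], [[4, 1], [1, 1]], size=500)
print(ellipse_95(endpoints)[2])   # area, in squared data units
```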

12.
Adaptation to visual motion can induce marked distortions of the perceived spatial location of subsequently viewed stationary objects. These positional shifts are direction specific and exhibit tuning for the speed of the adapting stimulus. In this study, we sought to establish whether comparable motion-induced distortions of space can be induced in the auditory domain. Using individually measured head-related transfer functions (HRTFs) we created auditory stimuli that moved either leftward or rightward in the horizontal plane. Participants adapted to unidirectional auditory motion presented at a range of speeds and then judged the spatial location of a brief stationary test stimulus. All participants displayed direction-dependent and speed-tuned shifts in perceived auditory position relative to a 'no adaptation' baseline measure. To permit direct comparison between effects in different sensory domains, measurements of visual motion-induced distortions of perceived position were also made using stimuli equated in positional sensitivity for each participant. Both the overall magnitude of the observed positional shifts and the nature of their tuning with respect to adaptor speed were similar in each case. A third experiment was carried out in which participants adapted to visual motion prior to making auditory position judgements. As in the previous experiments, shifts in the direction opposite to that of the adapting motion were observed. These results add to a growing body of evidence suggesting that the neural mechanisms that encode visual and auditory motion are more similar than previously thought.

13.
Segregating a moving object from its visual context is particularly relevant for the control of smooth-pursuit eye movements. We examined the interaction between a moving object and a stationary or moving visual context to determine the role of the context motion signal in driving pursuit. Eye movements were recorded from human observers to a medium-contrast Gaussian dot that moved horizontally at constant velocity. A peripheral context consisted of two vertically oriented sinusoidal gratings, one above and one below the stimulus trajectory, that were either stationary or drifted in the same or opposite direction as the target at different velocities. We found that a stationary context impaired pursuit acceleration and velocity and prolonged pursuit latency. A drifting context enhanced pursuit performance, irrespective of its motion direction. This effect was modulated by context contrast and orientation. When a context was briefly perturbed to move faster or slower, eye velocity changed accordingly, but only when the context was drifting along with the target. Perturbing a context in the direction orthogonal to target motion evoked a deviation of the eye opposite to the perturbation direction. We therefore provide evidence for the use of absolute and relative motion cues, or motion assimilation and motion contrast, for the control of smooth-pursuit eye movements.
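One way to caricature the concluding claim, collinear context motion assimilating into the pursuit command while orthogonal context motion repels it, is a weighted split of the context velocity into components. All weights below are invented, so this is a toy of the idea rather than the authors' model.

```python
import numpy as np

def pursuit_drive(v_target, v_context, w_t=1.0, w_assim=0.3, w_contrast=0.3):
    """Toy eye-velocity command: context motion along the target direction
    is added (motion assimilation), context motion orthogonal to it is
    subtracted (motion contrast). Weights are invented."""
    t_hat = v_target / np.linalg.norm(v_target)
    v_par = np.dot(v_context, t_hat) * t_hat    # collinear context component
    v_orth = v_context - v_par                  # orthogonal context component
    return w_t * v_target + w_assim * v_par - w_contrast * v_orth

v_t = np.array([10.0, 0.0])                      # deg/s, horizontal target
print(pursuit_drive(v_t, np.array([2.0, 0.0])))  # faster collinear context: faster eye
print(pursuit_drive(v_t, np.array([0.0, 2.0])))  # orthogonal context: opposite deviation
```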

14.
What is the relationship between retinotopy and object selectivity in human lateral occipital (LO) cortex? We used functional magnetic resonance imaging (fMRI) to examine sensitivity to retinal position and category in LO, an object-selective region positioned posterior to MT along the lateral cortical surface. Six subjects participated in phase-encoded retinotopic mapping experiments as well as block-design experiments in which objects from six different categories were presented at six distinct positions in the visual field. We found substantial position modulation in LO using standard nonobject retinotopic mapping stimuli; this modulation extended beyond the boundaries of visual field maps LO-1 and LO-2. Further, LO showed a pronounced lower visual field bias: more LO voxels represented the lower contralateral visual field, and the mean LO response was higher to objects presented below fixation than above fixation. However, eccentricity effects produced by retinotopic mapping stimuli and objects differed. Whereas LO voxels preferred a range of eccentricities lying mostly outside the fovea in the retinotopic mapping experiment, LO responses were strongest to foveally presented objects. Finally, we found a stronger effect of position than category on both the mean LO response and the distributed response across voxels. Overall these results demonstrate that retinal position exerts strong effects on neural response in LO and indicate that these position effects may be explained by retinotopic organization.

15.
Eye movements are thought to account for a number of visual motion illusions involving stationary objects presented against a featureless background or apparent motion of the whole visual field. We tested two different versions of the eye movement account: (a) the retinal slip explanation and (b) the nystagmus-suppression explanation, in particular their ability to account for visual motion experienced during vibration of the neck muscles and for the visual motion aftereffect following vibration. We vibrated the neck (ventral sternocleidomastoid muscles, bilaterally, or right dorsal muscles) and measured eye movements in conjunction with perceived illusory displacement of an LED presented in complete darkness (N=10). To test the retinal-slip explanation, we compared the direction of slow eye movements to the direction of illusory motion of the visual target. To test the suppression explanation, we estimated the direction of suppressed slow-phase eye movements and compared it to the direction of illusory motion. Two main findings show that neither actual nor suppressed eye movements cause the illusory motion and motion aftereffect. First, eye movements do not reverse direction when the illusory motion reverses after vibration stops. Second, there are large individual differences with regard to the direction of eye movements in observers who all experience a similar visual illusion. We conclude that, rather than eye movements, a more global spatial constancy mechanism that takes head movement into account is responsible for the illusion. The results also argue against the notion of a single central signal that determines both perceptual experience and oculomotor behaviour.

16.
Visual object recognition is computationally difficult because changes in an object's position, distance, pose, or setting may cause it to produce a different retinal image on each encounter. To robustly recognize objects, the primate brain must have mechanisms to compensate for these variations. Although these mechanisms are poorly understood, it is thought that they elaborate neuronal representations in the inferotemporal cortex that are sensitive to object form but substantially invariant to other image variations. This study examines this hypothesis for image variation resulting from changes in object position. We studied the effect of small differences (±1.5°) in the retinal position of small (0.6° wide) visual forms on both the behavior of monkeys trained to identify those forms and the responses of 146 anterior IT (AIT) neurons collected during that behavior. Behavioral accuracy and speed were largely unaffected by these small changes in position. Consistent with previous studies, many AIT responses were highly selective for the forms. However, AIT responses showed far greater sensitivity to retinal position than predicted from their reported receptive field (RF) sizes. The median AIT neuron showed an approximately 60% response decrease between positions within ±1.5° of the center of gaze, and 52% of neurons were unresponsive to one or more of these positions. Consistent with previous studies, each neuron's rank order of target preferences was largely unaffected across position changes. Although we have not yet determined the conditions necessary to observe this marked position sensitivity in AIT responses, we rule out effects of spatial-frequency content, eye movements, and failures to include the RF center. To reconcile this observation with previous studies, we hypothesize either that AIT position sensitivity strongly depends on object size or that position sensitivity is sharpened by extensive visual experience at fixed retinal positions or by the presence of flanking distractors.
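The preserved rank order of target preferences reported above is the kind of claim usually quantified with a Spearman rank correlation between a neuron's responses at two positions. An illustrative sketch with invented responses (not the authors' statistic or data):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

# One neuron's responses to 8 forms at two positions: a large gain drop
# (~60%, as in the median neuron above) but a preserved preference order.
resp_center = np.array([30.0, 22.0, 15.0, 11.0, 8.0, 5.0, 3.0, 1.0])
resp_offset = 0.4 * resp_center + rng.normal(0.0, 0.5, size=8)

rho, p = spearmanr(resp_center, resp_offset)
print(f"rank-order correlation across positions: rho = {rho:.2f}")   # near 1
```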

17.
In this study, we explored whether the impact of visual information on postural reactions is due to the same perceptual mechanisms that produce vection. Pitch motion of the visual field was presented at varying velocities to eight healthy subjects (29.9 ± 2.8 years) standing quietly on a stationary base of support or receiving a 3° toes-up tilt of the base of support. An infrared motion system tracked markers placed on body segments to record angular displacement of the head and ankle and to calculate whole-body center of mass. Onset of the visual field motion and base of support movement were synchronized in all trials. We found that in the first 2 s following onset of visual field motion, both the direction and amplitude of the linear displacement of the whole-body center of mass and the angular displacement of the head, hip, and ankle were modulated by the velocity of visual scene motion. When the visual scene rotated in upward pitch, subjects overshot their initial vertical position with amplitudes that increased as the velocity of the visual field increased. This behavior was even more evident when the base of support was tilted. These responses were much shorter than those observed in studies of vection. The dependence of the postural response amplitudes on the velocity of the visual field suggests, however, that there might be shared control pathways for visual influences on postural reactions and postural sway elicited by an illusion of self-motion.

18.
Primates can generate accurate, smooth eye-movement responses to moving target objects of arbitrary shape and size, even in the presence of complex backgrounds and/or the extraneous motion of non-target objects. Most previous studies of pursuit have simply used a spot moving over a featureless background as the target and have thus neglected critical issues associated with the general problem of recovering object motion. Visual psychophysicists and theoreticians have shown that, for arbitrary objects with multiple features at multiple orientations, object-motion estimation for perception is a complex, multi-staged, time-consuming process. To examine the temporal evolution of the motion signal driving pursuit, we recorded the tracking eye movements of human observers to moving line-figure diamonds. We found that pursuit is initially biased in the direction of the vector average of the motions of the diamond's line segments and gradually converges to the true object-motion direction with a time constant of approximately 90 ms. Furthermore, transient blanking of the target during steady-state pursuit induces a decrease in tracking speed, which, unlike pursuit initiation, is subsequently corrected without an initial direction bias. These results are inconsistent with current models in which pursuit is driven by retinal-slip error correction. They demonstrate that pursuit models must be revised to include a more complete visual afferent pathway, which computes, and to some extent latches on to, an accurate estimate of object direction over the first hundred milliseconds or so of motion.
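The reported dynamics can be summarized as an exponential relaxation from the vector-average direction toward the true object direction with a time constant near 90 ms. The sketch below is a descriptive fit of that statement (linear interpolation of angles, invented initial bias), not the authors' model code.

```python
import numpy as np

def pursuit_direction(t_ms, dir_vec_avg_deg, dir_object_deg, tau_ms=90.0):
    """Pursuit direction relaxing exponentially from the vector average
    of the segment motions toward the true object direction."""
    w = np.exp(-t_ms / tau_ms)
    return w * dir_vec_avg_deg + (1.0 - w) * dir_object_deg

# Suppose the segments' vector average is 30 deg off the object direction:
for t in (0, 90, 270):
    print(t, round(pursuit_direction(t, 30.0, 0.0), 1))
# 0 ms -> 30.0 deg bias, 90 ms -> 11.0 deg, 270 ms -> 1.5 deg
```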

19.
The responsiveness of neurons in V1 is modulated by stimuli placed outside their classical receptive fields. This nonclassical surround provides input from a larger portion of the visual scene than originally thought, permitting integration of information at early levels in the visual processing stream. Signals from the surround have been reported variously to be suppressive and facilitatory, selective and unselective. We tested the specificity of influences from the surround by studying the interactions between drifting sinusoidal gratings carefully confined to conservatively defined center and surround regions. We found that the surround influence was always suppressive when the surround grating was at the neuron's preferred orientation. Suppression tended to be stronger when the surround grating also moved in the neuron's preferred direction, rather than its opposite. When the orientation in the surround was 90 degrees from the preferred orientation (orthogonal), suppression was weaker, and facilitation was sometimes evident. The tuning of surround signals therefore tended to match the tuning of the center, though the tuning of the surround was somewhat broader. The tuning of suppression also depended on the contrast of the center grating: when the center grating was reduced in contrast, orthogonal surround stimuli became relatively more suppressive. We also found evidence that the tuning of the surround depends to some degree on the stimulus used in the center: suppression was often stronger for a given center stimulus when the parameters of the surround grating matched the parameters of the center grating, even when the center grating was not itself of the optimal direction or orientation. We also explored the spatial distribution of surround influence and found an orderly relationship between the orientation of grating patches presented to regions of the surround and the position of greatest suppression. When surround gratings were oriented parallel to the preferred orientation of the receptive field, suppression was strongest at the receptive field ends. When surround gratings were orthogonal, suppression was strongest on the flanks. We conclude that the surround has complex effects on responses from the classical receptive field. We suggest that the underlying mechanism of this complexity may involve interactions between relatively simple center and surround mechanisms.
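The pattern reported above, strongest suppression from iso-oriented surrounds, weak or facilitatory effects from orthogonal ones, and contrast-dependent weighting, is the signature behavior of divisive-normalization surround models. A minimal sketch of such a model with invented parameters (not the study's fitted model):

```python
import numpy as np

def v1_response(c_center, theta_surr_deg, c_surr=0.5,
                r_max=50.0, c50=0.2, w_max=1.0, sigma_deg=30.0):
    """Center drive (contrast response function) divided by a surround
    signal whose weight is tuned to the preferred orientation (0 deg here).
    All parameter values are invented."""
    w = w_max * np.exp(-0.5 * (theta_surr_deg / sigma_deg) ** 2)
    drive = r_max * c_center**2 / (c_center**2 + c50**2)
    return drive / (1.0 + w * c_surr)

print(v1_response(0.5, theta_surr_deg=0.0))    # iso-oriented surround: suppressed
print(v1_response(0.5, theta_surr_deg=90.0))   # orthogonal surround: nearly unsuppressed
```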
