Similar Literature
20 similar documents found (search time: 31 ms).
1.
Accurate saccadic and vergence eye movements towards selected visual targets are fundamental to perceiving the 3-D environment. Despite this importance, gaze shifts are not always perfect: they are frequently followed by small corrective eye movements. The oculomotor system receives distinct information from various visual cues that may cause incongruity in the planning of a gaze shift. To test this idea, we analyzed eye movements in humans performing a saccade task in a 3-D setting. We show that saccades and vergence movements towards peripheral targets are guided by monocular (perceptual) cues. Approximately 200 ms after the start of fixation at the perceived target, a fixational saccade corrected the eye positions to the physical target location. Our findings suggest that shifts in eye gaze occur in two phases: a large eye movement toward the perceived target location, followed by a corrective saccade that directs the eyes to the physical target location.

2.
Where the eyes fixate during search is not random; rather, gaze reflects the combination of information about the target and the visual input. It is not clear, however, what information about a target is used to bias the underlying neuronal responses. Here we engage subjects in a variety of simple conjunction search tasks while tracking their eye movements. We derive a generative model that reproduces these eye movements and calculate the conditional probabilities that, given the target, observers fixate on or near an item in the display sharing a specific feature with it. We use these probabilities to infer which features were biased by top-down attention: color seems to be the dominant stimulus dimension for guiding search, followed by object size, and lastly orientation. We use the number of fixations it took to find the target as a measure of task difficulty. We find that only a model that biases multiple feature dimensions in a hierarchical manner can account for the data. Contrary to common assumptions, memory plays almost no role in search performance. Our model can be fit to average data of multiple subjects or to individual subjects. Small variations of a few key parameters account well for the intersubject differences. The model is compatible with neurophysiological findings on V4 and frontal eye field (FEF) neurons and predicts the gain modulation of these cells.
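The conditional-probability analysis this abstract describes can be illustrated with a minimal Python sketch. Everything below is a hypothetical stand-in (toy fixation records and a `feature_bias` helper), not the authors' generative model:

```python
# Toy illustration: estimate how strongly each feature dimension guided
# search, as the fraction of fixated items sharing that feature with the
# target. Data and field names are invented for illustration.

def feature_bias(fixated_items, target, feature):
    """Fraction of fixated items that share `feature` with the target."""
    shared = [item for item in fixated_items if item[feature] == target[feature]]
    return len(shared) / len(fixated_items)

target = {"color": "red", "size": "large", "orientation": "vertical"}
fixated = [
    {"color": "red",   "size": "small", "orientation": "horizontal"},
    {"color": "red",   "size": "large", "orientation": "vertical"},
    {"color": "green", "size": "large", "orientation": "horizontal"},
    {"color": "red",   "size": "small", "orientation": "vertical"},
]

for feature in ("color", "size", "orientation"):
    print(feature, feature_bias(fixated, target, feature))
# color 0.75, size 0.5, orientation 0.5: color dominates, as reported above
```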

3.
Experimental data on the accuracy and frequency of saccades are incorporated into a model of the visual world and eye movements to determine the spatial distribution of visual objects on the retina. Visual scenes are represented as sequences of discrete small objects whose positions are initially uniformly distributed and then moved toward the center of the retina by eye movements. We then use this model to investigate whether the distribution of cones in the retina maximizes the information transferred about object position. Assuming for simplicity that a single cone is activated by the object, the rate of information transfer is maximized at the receptor stage if the probability that a target lies at a position on the retina is proportional to the local cone density. Although qualitatively it is easy to understand why the cone density is higher at the fovea, by linking the cone density with eye movements through information sampling theory, we provide an explanation for its quantitative variation across the retina. The human cone distribution and the object distribution in our model visual world are shown to have the same general form and are in close agreement between 5- and 30-deg eccentricity.
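The information argument can be checked numerically. Below is a minimal sketch under the abstract's simplifying assumption (one cone activated per target), with a made-up cone-density profile: if a cone's catchment area shrinks as local density d grows, its activation probability is proportional to p/d, and the entropy of "which cone fired" peaks when p is proportional to d:

```python
import numpy as np

# Numerical check: the entropy of "which cone fired" (an upper bound on
# position information per activation) is maximal when the target-position
# density p matches the cone density d. The density profile is invented.

def entropy(q):
    q = q / q.sum()
    return -(q * np.log2(q)).sum()

ecc = np.linspace(0.5, 30.0, 200)          # eccentricity grid (deg), toy
d = 1.0 / (1.0 + ecc) ** 2                 # hypothetical foveated cone density

p_matched = d / d.sum()                    # target density proportional to d
p_uniform = np.full_like(d, 1.0 / len(d))  # targets uniform across the retina

for name, p in [("matched", p_matched), ("uniform", p_uniform)]:
    q = p / d                              # per-cone activation probability
    print(name, round(float(entropy(q)), 3))
# matched gives the maximum, log2(200) ≈ 7.64 bits; uniform gives less
```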

4.
Hwang AD, Wang HC, Pomplun M. Vision Research, 2011, 51(10): 1192-1205.
The perception of objects in our visual world is influenced not only by low-level visual features such as shape and color, but also by high-level features such as meaning and the semantic relations among objects. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study the guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects’ gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects’ eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control.
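The core computation behind such semantic saliency maps is a similarity score between label vectors. The sketch below uses random stand-in vectors in place of a real LSA fit to the LabelMe label corpus; the labels and dimensionality are invented:

```python
import numpy as np

# Illustrative semantic-saliency computation: each object's saliency is its
# cosine similarity to the currently fixated object (or the search target).
# The 50-dimensional "LSA" vectors here are random placeholders.

rng = np.random.default_rng(0)
labels = ["car", "road", "tree", "building", "sign"]
lsa = {w: rng.normal(size=50) for w in labels}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_saliency(fixated, candidates):
    """Similarity of every other object to the fixated one."""
    return {w: cosine(lsa[fixated], lsa[w]) for w in candidates if w != fixated}

print(semantic_saliency("car", labels))  # higher score = more likely next fixation
```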

5.
Previous work on transsaccadic memory and change blindness suggests that only a small part of the information in the visual scene is retained following a change in eye position. However, some visual representation across different fixation positions seems necessary to guide body movements. To understand what information is retained across gaze positions, it seems necessary to consider the functional demands of vision in ordinary behavior. We therefore examined eye and hand movements in a naturalistic task, where subjects copied a toy model in a virtual environment. Saccadic targeting performance was examined to see if subjects took advantage of regularities in the environment. During the first trials the spatial arrangement of the pieces used to copy the model was kept stable. In subsequent trials this arrangement was changed randomly every time the subject looked away. Results showed that about 20% of saccades went either directly to the location of the next component to be copied or to its old location before the change. There was also a significant increase in the total number of fixations required to locate a piece after a change, which could be accounted for by the corrective movements required after fixating the (incorrect) old location. These results support the idea that a detailed representation of the spatial structure of the environment is typically retained across fixations and used to guide eye movements.

6.
In normal vision, shifts of attention and gaze are tightly coupled. Here we ask whether this coupling also affects performance when central vision is not available. To this end, we trained normal-sighted participants to perform a visual search task while vision was restricted to a gaze-contingent viewing window ("forced field location") in the left, right, upper, or lower visual field. Gaze direction was manipulated within a continuous visual search task that required leftward, rightward, upward, or downward eye movements. We found no general performance advantage for a particular part of the visual field or for a specific gaze direction. Rather, performance depended on the coordination of visual attention and eye movements, with impaired performance when sustained attention and gaze had to be moved in opposite directions. Our results suggest that during early stages of central visual field loss, the optimal location for the substitution of foveal vision does not depend on the particular retinal location alone, as has previously been thought, but also on the gaze direction required by the task the patient wishes to perform.

7.
The image information guiding visual behavior is acquired and maintained in an interplay of gaze shifts and visual short-term memory (VSTM). If the storage capacity of VSTM is exhausted, gaze shifts can be used to regain information not currently represented in memory. By varying the separation between relevant image regions, S. Inamdar and M. Pomplun (2003) demonstrated a trade-off between VSTM storage and gaze shifts, which were performed as pure eye movements, that is, without a head movement component. Here we extend this paradigm to larger gaze shifts involving both eye and head movements. We use a comparative visual search paradigm with two relevant image regions and region separation as the independent variable. Image regions were defined by two cupboards displaying colored geometrical objects in roughly equal arrangements. Subjects were asked to find differences in the arrangement of the objects in the two cupboards. Cupboard separation was varied between 30 and 120 degrees. Images were presented with two projectors on a 150 × 70 degree curved screen. Head and eye movements were simultaneously recorded with an ART head tracker and an ASL mobile eye tracker, respectively. In the large-separation conditions, the number of gaze shifts between the two cupboards was reduced, while fixation duration increased. Furthermore, the head movement proportions correlated negatively with the number of gaze shifts and positively with fixation duration. We conclude that the visual system uses increased VSTM involvement to avoid gaze movements and in particular movements of the head. Scan path analysis revealed two subject-specific strategies (encode left, compare right, and vice versa), which were consistently used in all separation conditions.

8.
Itti L, Koch C. Vision Research, 2000, 40(10-12): 1489-1506.
Most models of visual search, whether involving overt eye movements or covert shifts of attention, are based on the concept of a saliency map, that is, an explicit two-dimensional map that encodes the saliency or conspicuity of objects in the visual environment. Competition among neurons in this map gives rise to a single winning location that corresponds to the next attended target. Inhibiting this location automatically allows the system to attend to the next most salient location. We describe a detailed computer implementation of such a scheme, focusing on the problem of combining information across modalities, here orientation, intensity and color information, in a purely stimulus-driven manner. The model is applied to common psychophysical stimuli as well as to a very demanding visual search task. Its successful performance is used to address the extent to which the primate visual system carries out visual search via one or more such saliency maps and how this can be tested.
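The attend-inhibit-attend cycle the abstract describes is easy to sketch. The toy winner-take-all loop below operates on a random stand-in map; the published model builds the saliency map from multiscale color, intensity, and orientation feature maps, which is omitted here:

```python
import numpy as np

# Toy scan-path generator: repeatedly pick the most salient location
# (winner-take-all), then zero out its neighbourhood (inhibition of return)
# so attention moves on to the next most salient location.

rng = np.random.default_rng(1)
saliency = rng.random((32, 32))          # stand-in for the combined map
ys, xs = np.mgrid[:32, :32]

def scan_path(smap, n_fixations=5, ior_radius=4):
    smap = smap.copy()
    path = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(smap), smap.shape)   # winner
        path.append((int(y), int(x)))
        smap[(ys - y) ** 2 + (xs - x) ** 2 <= ior_radius ** 2] = 0.0
    return path

print(scan_path(saliency))               # successive attended locations
```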

9.
The world we see is stable, in spite of retinal image shifts induced by head and eye movements. This is because of complex sensorimotor integration between the oculomotor system and the sensory systems: visual, vestibular, and proprioceptive. Eye movements are the response of the motor system to sensory input. They are of two types: those induced by a visual input, and compensatory eye rotations responding to a vestibular signal. Visual input alerts the brain to a feature of interest in the environment, and the motor system acts accordingly to bring the image of the object onto the region of sharpest acuity, or to maintain it there if the eyes are following a moving object. To localize the target correctly, the motor system needs additional information about eye position. It appears that this information is utilized during both saccadic and smooth pursuit eye movements. The source of this input is not known. Primates have demonstrated accurate localization of visually presented targets after ablation of the striate cortex. The superior colliculus may be involved in localizing a target before the eyes move to fixate it. This structure may also be responsible for distinguishing between real and self-induced stimulus movements. The oculomotor system interacts with the vestibular system to produce compensatory eye movements. The vestibulo-ocular reflex (VOR) serves to initiate rotation of the eyes in the event of a head or body movement, to maintain an unchanged subjective straight-ahead direction. The operation of this reflex is not fully understood. An impairment of any sensory structure involved in ocular motility results in an imbalance in sensorimotor integration and consequent abnormal eye movements, sometimes accompanied by erroneous perceptions. Adequate coordination between the sensory and motor systems is important for the development of reading and writing skills.

10.
While aiming and shooting, we make tiny eye movements called microsaccades that shift gaze between task-relevant objects within a small region of the visual field. However, in the brief period before pressing the trigger, microsaccades are suppressed. This might be due to the lack of any requirement to shift gaze as the retinal images of the two objects begin to overlap on the fovea. Alternatively, we might actively suppress microsaccades to prevent the disturbances in visual perception that they cause around the time of their occurrence, and the consequent effect on shooting performance. In this study we measured microsaccade rates while participants performed a simulated shooting task under two conditions: a normal condition in which they moved their eyes freely, and an eccentric condition in which they maintained gaze on a fixed target while performing the shooting task at 5° eccentricity. As expected, the microsaccade rate dropped near the end of the task in the normal viewing condition. However, we found the same decrease in the eccentric condition, in which microsaccades did not shift gaze between the task objects. Microsaccades are also produced in response to shifts in covert attention. To test whether disengagement of covert attention from the eccentric shooting location caused the drop in microsaccade rate, we monitored the location of participants’ spatial attention using a simultaneous Rapid Serial Visual Presentation (RSVP) task at a location opposite the shooting task. Target letter detection at the RSVP location did not improve during the drop in microsaccade rate, suggesting that covert attention was maintained at the shooting task location. We conclude that in addition to their usual gaze-shifting function, microsaccades during fine-acuity tasks might be modulated by cognitive processes other than spatial attention.

11.
To gain insight into how vision guides eye movements, monkeys were trained to make a single saccade to a specified target stimulus during feature and conjunction search with stimuli discriminated by color and shape. Monkeys performed both tasks at levels well above chance. The latencies of saccades to the target in conjunction search exhibited shallow positive slopes as a function of set size, comparable to the slopes of human reaction times during target present/absent judgments, but significantly different from the slopes in feature search. Properties of the selection process were revealed by the occasional saccades to distractors. During feature search, errant saccades were directed more often to a distractor near the target than to a distractor at any other location. In contrast, during conjunction search, saccades to distractors were guided more by similarity than by proximity to the target; monkeys were significantly more likely to shift gaze to a distractor that had one of the target features than to a distractor that had none. Overall, color and shape information were used to similar degrees in the search for the conjunction target. However, in single sessions we observed an increased tendency of saccades to a distractor that had been the target in the previous experimental session. The establishment of this tendency across sessions at least a day apart, and its persistence throughout a session, distinguish this phenomenon from the short-term (<10 trials) perceptual priming observed in this and earlier studies using feature visual search. Our findings support the hypothesis that the target in at least some conjunction visual searches can be detected efficiently based on visual similarity, most likely through parallel processing of the individual features that define the stimuli. These observations guide the interpretation of neurophysiological data and constrain the development of computational models.

12.
PURPOSE: To analyze the slow eye movements that shift the direction of gaze in patients with ataxia-telangiectasia (A-T). METHODS: Eye and head movements were recorded with search coils in three patients with A-T during attempted gaze shifts, both with the head immobilized and with it free to move. RESULTS: Gaze shifts frequently included both saccadic and slow components. The slow movements were recorded after 42% of saccades and had an average peak velocity of 6.1 deg/sec and a mean amplitude of 2.0 deg. They occurred with the head stationary and moving, could be directed centripetally or centrifugally, had velocity waveforms that were relatively linear or exponential, and always moved the eyes toward the visual target. CONCLUSIONS: The slow movements appear to differ from pursuit and vestibular eye movements and are not fully explained by the various types of abnormal eye movements that can follow saccades, such as gaze-evoked nystagmus or postsaccadic drift. Their origin is uncertain, but they could represent very slow saccades due to aberrant inhibition of burst cell activity during the saccade.

13.
During the course of previous recordings of visually triggered gaze shifts in the head-unrestrained cat, we occasionally observed small head movements which preceded the initiation of the saccadic eye/head gaze shift toward a visual target. These early head movements (EHMs) were directed toward the target and occurred with a probability varying between animals from 0.4% to 16.4% (mean=5.2%, n=11 animals). The amplitude of EHMs ranged from 0.4 to 8.3 degrees (mean=1.9 degrees), their latency from 66 to 270 ms (median=133 ms), and the delay from EHM onset to gaze shift onset averaged 183 ± 108 ms (n=240). Their occurrence did not depend on visual target eccentricity in the studied range (7-35 degrees), but influenced the metrics and dynamics of the ensuing gaze shifts (gain and velocity reduced). We also found in the two tested cats that low-intensity microstimulation of the deeper layers of the superior colliculus elicited a head movement preceding the gaze shift. Altogether, these results suggest that the presentation of a visual target can elicit a head movement without triggering a saccadic eye/head gaze shift. The visuomotor pathways triggering these early head movements may involve the deep superior colliculus.

14.
Reach and grasp movements are a fundamental part of our daily interactions with the environment. This spatially guided behavior is often directed to memorized objects because of intervening eye movements that caused them to disappear from sight. How does the brain store and maintain the spatial representations of objects for future reach and grasp movements? We had subjects (n = 8) make reach and two-digit grasp movements to memorized objects, briefly presented before an intervening saccade. Grasp errors, characterizing the spatial representation of object orientation, depended on current gaze position, with and without an intervening saccade. This suggests that the orientation information of the object is coded and updated relative to gaze during intervening saccades, and that the grasp errors arose after the updating stage, during the later transformations involved in grasping. The pattern of reach errors also revealed a gaze-centered updating of object location, consistent with previous literature on updating of single-point targets. Furthermore, grasp and reach errors correlated strongly, but their relationship had a non-unity slope, which may suggest that the gaze-centered spatial updates were made in separate channels. Finally, the errors of the two digits were strongly correlated, supporting the notion that these were not controlled independently to form the grip in these experimental conditions. Taken together, our results suggest that the visuomotor system dynamically represents the short-term memory of location and orientation information for reach-and-grasp movements.

15.
Visual search can simply be defined as the task of looking for objects of interest in cluttered visual environments. Typically, the human visual system succeeds at this by making a series of rapid eye movements called saccades, separated by discrete fixations. However, very little is known about how the brain programs saccades and selects fixation loci in such naturalistic tasks. In the current study, we use a technique developed in our laboratory based on reverse correlation, with stimuli that emulate the natural visual environment, to examine observers’ strategies when seeking low-contrast targets of various spatial frequency and orientation characteristics. We present four major findings. First, we provide strong evidence of visual guidance in saccadic targeting, characterized by saccadic selectivity for spatial frequencies and orientations close to those of the search target. Second, we show that observers exhibit inaccuracies and biases in their estimates of target features. Third, a complementarity effect is generally observed: the absence of certain frequency components in distracters affects whether they are fixated or mistakenly selected as the target. Finally, an unusual phenomenon is observed whereby distracters containing close-to-vertical structures are fixated in searches for nonvertically oriented targets. Our results provide evidence for the involvement of band-pass mechanisms along feature dimensions (spatial frequency and orientation) during visual search.

16.
Previous results suggest that the brain predominantly relies on a constantly updated gaze-centered target representation to guide reach movements when no other visual information is available. In the present study, we investigated whether the addition of reliable visual landmarks influences the use of spatial reference frames for immediate and delayed reaching. Subjects reached immediately or after a delay of 8 or 12 s to remembered target locations, either with or without landmarks. After target presentation and before reaching, they shifted gaze to one of five different fixation points and held their gaze at this location until the end of the reach. With landmarks present, gaze-dependent reaching errors were smaller and reaches were more precise than when reaching without landmarks. Delay influenced neither reaching errors nor variability. These findings suggest that when landmarks are available, the brain still seems to use gaze-dependent representations, but combines them with gaze-independent allocentric information to guide immediate or delayed reach movements to visual targets.

17.
Abadi RV, Scallan CJ. Vision Research, 2001, 41(22): 2895-2907.
Many normal individuals show ocular oscillations on eccentric gaze. This study was designed to investigate the effect of visual disengagement and visual feedback on the nature of these end-point oscillations. Three test conditions were examined: target present, target absent, and a condition in which the target position was determined by the subject's eye position via a variable feedback control system. Feedback gains (i.e., target velocity/eye velocity) ranged from 0, where the target position was decoupled from the subject's eye movements (i.e., the target was stationary on the screen), to +1.0, where the retinal image was stabilised (i.e., the target was driven by the subject's eye movements). Only subjects who exhibited sustained end-point oscillations with no latency were included in the study (n=6). Seven different oscillations, including square-wave jerks, were recorded in the abducting eye during eccentric gaze on a stationary target. The three most common oscillations were the jerk oscillations, with decelerating, linear or pendular slow phases. A number of additional, previously unreported waveforms were also recorded. On removal of the target, the mean drift velocity of the slow phase was greatly reduced. The response to the introduction of a change in visual feedback was specific to each subject, although in all cases the end-point oscillations were generally of lower velocity, and gaze shifted by up to 8 deg in the direction of the slow phase within the first two seconds. The important role of slow eye movement control in maintaining gaze holding is discussed.
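The feedback manipulation can be made concrete with a toy discrete-time loop, assuming the system simply sets target velocity to gain × eye velocity; the drifting eye trajectory below is invented for illustration:

```python
# With gain 0 the target is stationary, so retinal error grows with eye
# drift; with gain +1.0 the target follows the eye exactly and the retinal
# image is stabilised (constant error).

def retinal_errors(eye_positions, gain, target_start=0.0):
    target, prev_eye = target_start, eye_positions[0]
    errors = []
    for eye in eye_positions:
        target += gain * (eye - prev_eye)   # target velocity = gain * eye velocity
        prev_eye = eye
        errors.append(round(target - eye, 3))
    return errors

drift = [0.1 * t for t in range(6)]         # slow-phase drift of the eye (deg)
print(retinal_errors(drift, gain=0.0))      # [0.0, -0.1, ..., -0.5]: error grows
print(retinal_errors(drift, gain=1.0))      # all 0.0: stabilised retinal image
```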

18.
The choice of where to look in a visual scene depends on visual processing of information from potential target locations. We examined to what extent the sampling window, or filter, underlying saccadic eye movements is under flexible control and adjusted to behavioural task demands. Observers performed a contrast discrimination task with systematic variations in the spatial scale and location of the visual signals: small (sigma=0.175 degrees) or large (sigma=0.8 degrees) Gaussian signals were presented 4.5 degrees, 6 degrees, or 9 degrees away from central fixation. In experiment 1, we measured the accuracy of the first saccade as a function of target contrast. The efficiency of saccadic targeting decreased with increases in both scale and eccentricity. In experiment 2, the filter underlying saccadic targeting was estimated with the classification image method. We found that the filter (1) had a center-surround organisation, even though the signal was Gaussian; (2) was much too small for the large-scale items; and (3) remained constant up to the largest measured eccentricity of 9 degrees. The filter underlying the decision of where to look is not fixed, and can be adjusted to task demands. However, there are clear limits to this flexibility. These limits reflect the coding of visual information by early mechanisms, and the extent to which the neural circuitry involved in programming saccadic eye movements is able to appropriately weight and combine the outputs from these mechanisms.
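The classification image method itself is straightforward to simulate. Below, a synthetic linear observer with a known Gaussian template classifies noise fields, and the template is recovered by subtracting the trial-averaged noise of one response class from that of the other; the template size, sigma, and trial count are invented:

```python
import numpy as np

# Bare-bones classification image: for a linear observer, the difference
# between noise fields averaged by response recovers (a scaled copy of)
# the observer's internal template.

rng = np.random.default_rng(2)
size = 21
yy, xx = np.mgrid[:size, :size] - size // 2
template = np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))   # known "filter"

class_a, class_b = [], []
for _ in range(5000):
    noise = rng.normal(size=(size, size))
    response = (template * noise).sum()                     # linear decision
    (class_a if response > 0 else class_b).append(noise)

classification_image = np.mean(class_a, axis=0) - np.mean(class_b, axis=0)
r = np.corrcoef(classification_image.ravel(), template.ravel())[0, 1]
print(round(float(r), 2))   # approaches 1 as the number of trials grows
```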

19.
Visual suppression of low-spatial-frequency information during eye movements is believed to contribute to a stable perception of our visual environment. While visual perception has been studied extensively during saccades, vergence has been somewhat neglected. Here, we show that convergence eye movements reduce contrast sensitivity to low-spatial-frequency information around the onset of the eye movements, but do not affect sensitivity to higher spatial frequencies. This suggests that visual suppression elicited by convergence eye movements may have the same temporal and spatial characteristics as saccadic suppression.

20.
People can be shown to use memorized location information to move their hand to a target location if no visual information is available. However, for several reasons, memorized information may be imprecise and inaccurate. Here, we study whether and to what extent humans use the remembered location of an object to plan reaching movements when the target is visible. Subjects sequentially picked up and moved two different virtual, "magnetic" target objects from a target region into a virtual trash bin with their index fingers. In one third of the trials, we perturbed the position of the second target by 1 cm while the finger was transporting the first target to the trash. Subjects never noticed this. Although the second target was visible in the periphery, subjects' movements were biased toward its initial (remembered) position. The first part of subjects' movements was predictable from a weighted sum of the visible and remembered target positions. For high-contrast targets, subjects initially weighted visual and remembered information about target position in an average ratio of 0.67 to 0.33. Over the course of the movement, the weight given to memory decreased. Diminishing the contrast of the targets substantially increased the weight that subjects gave to the remembered location. Thus, even when peripheral visual information is available, humans use the remembered location of an object to plan goal-directed movements. In contrast to previous suggestions in the literature, our results indicate that absolute location is remembered quite well.
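The reported weighting amounts to a one-line cue-combination model. The function below is a hypothetical illustration using the average weights quoted in the abstract, not the authors' fitting procedure:

```python
# Planned position as a weighted sum of visible and remembered target
# positions; 0.67/0.33 is the average initial weighting reported for
# high-contrast targets. Positions are in cm along the perturbation axis.

def planned_position(visual, remembered, w_visual=0.67):
    return w_visual * visual + (1.0 - w_visual) * remembered

# Target visibly perturbed by 1 cm while memory still holds the old location:
print(planned_position(visual=1.0, remembered=0.0))                # 0.67
print(planned_position(visual=1.0, remembered=0.0, w_visual=0.4))  # lower contrast:
                                                                   # memory weighs more
```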
