Similar Documents
 20 similar documents were found.
1.
Our knowledge about objects in our environment reflects an integration of current visual input with information from preceding gaze fixations. Such a mechanism may reduce uncertainty but requires the visual system to determine which information obtained in different fixations should be combined or kept separate. To investigate the basis of this decision, we conducted three experiments. Participants viewed a stimulus in their peripheral vision and then made a saccade that shifted the object into the opposite hemifield. During the saccade, the object underwent changes of varying magnitude in two feature dimensions (Experiment 1, color and location; Experiments 2 and 3, color and orientation). Participants reported whether they detected any change and estimated one of the postsaccadic features. Integration of presaccadic with postsaccadic input was observed as a bias in estimates toward the presaccadic feature value. In all experiments, presaccadic bias weakened as the magnitude of the transsaccadic change in the estimated feature increased. Changes in the other feature, despite having a similar probability of detection, had no effect on integration. Results were quantitatively captured by an observer model where the decision whether to integrate information from sequential fixations is made independently for each feature and coupled to awareness of a feature change.
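A minimal sketch of the kind of feature-specific integration rule this abstract describes can make the logic concrete. The model below is not the authors' fitted observer model: the reliability-weighted averaging, the hard detection threshold, and all parameter values are illustrative assumptions.

```python
# Sketch (not the authors' model): pre- and postsaccadic estimates are combined by
# reliability weighting, but only when the transsaccadic change in that feature is
# small enough to go unnoticed. All numbers below are illustrative assumptions.
import numpy as np

def integrate_feature(pre_value, post_value, sigma_pre, sigma_post, detection_threshold):
    """Return the reported estimate for one feature dimension."""
    change = abs(post_value - pre_value)
    if change > detection_threshold:
        # Change is detected: keep the fixations separate and report the postsaccadic input.
        return post_value
    # Change goes unnoticed: integrate, weighting each sample by its reliability (1 / variance).
    w_pre = 1 / sigma_pre**2
    w_post = 1 / sigma_post**2
    return (w_pre * pre_value + w_post * post_value) / (w_pre + w_post)

# A 10-unit color shift is integrated (bias toward the presaccadic value);
# a 40-unit shift is detected, so the report follows the postsaccadic value.
print(integrate_feature(100, 110, sigma_pre=8, sigma_post=4, detection_threshold=20))
print(integrate_feature(100, 140, sigma_pre=8, sigma_post=4, detection_threshold=20))
```

Coupling integration to unawareness of a change, separately per feature, is what produces a presaccadic bias that weakens with the change in the estimated feature while changes in the other feature remain irrelevant.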

2.
Numerous studies have demonstrated that visuospatial attention is a requirement for successful working memory encoding. It is unknown, however, whether this established relationship manifests in consistent gaze dynamics as people orient their visuospatial attention toward an encoding target when searching for information in naturalistic environments. To test this hypothesis, participants' eye movements were recorded while they searched for and encoded objects in a virtual apartment (Experiment 1). We decomposed gaze into 61 features that capture gaze dynamics and trained a sliding-window logistic regression model, which has potential for use in real-time systems, to predict when participants found target objects for working memory encoding. A model trained on group data successfully predicted when people oriented to a target for encoding for the trained task (Experiment 1) and for a novel task (Experiment 2), where a new set of participants found objects and encoded an associated nonword in a cluttered virtual kitchen. Six of these features were predictive of target orienting for encoding, even during the novel task, including decreased distances between subsequent fixation/saccade events, increased fixation probabilities, and slower saccade decelerations before encoding. This suggests that as people orient toward a target to encode new information at the end of search, they decrease task-irrelevant, exploratory sampling behaviors. This behavior was common across the two studies. Together, this research demonstrates how gaze dynamics can be used to capture target orienting for working memory encoding and has implications for real-world use in technology and special populations.
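To illustrate the analysis pipeline named here, a minimal sliding-window logistic-regression sketch follows. The gaze features, window length, and labels are placeholders (the simulated data are random), so it shows only the structure of the approach, not the study's actual 61 features or its performance.

```python
# Sketch, under assumed data shapes: per-sample gaze-dynamics features are aggregated
# within a short sliding window and used to classify whether the window ends at a
# target-orienting-for-encoding event. Feature count and window length are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_features, window = 2000, 5, 10      # e.g., fixation distance, duration, saccade velocity...
X_time = rng.normal(size=(n_samples, n_features))        # per-sample gaze features (simulated)
y_time = (rng.random(n_samples) < 0.2).astype(int)       # 1 = orienting to a target for encoding (simulated)

# Build overlapping windows: each training example is the mean feature vector within the window.
X_win = np.array([X_time[t:t + window].mean(axis=0) for t in range(n_samples - window)])
y_win = y_time[window:]                                   # label at the end of each window

clf = LogisticRegression(max_iter=1000).fit(X_win, y_win)
print("windowed training accuracy:", clf.score(X_win, y_win))
```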

3.
Trans-saccadic memory consists of keeping track of objects’ locations and features across saccades; pre-saccadic information is remembered and compared with post-saccadic information. It has been shown to have limited resources and to involve attention with respect to the selection of objects and features. In support, a previous study showed that recognition of distinct post-saccadic objects in the visual scene is impaired when pre-saccadic objects are relevant and thus already encoded in memory (Poth, Herwig, & Schneider, 2015). Here, we investigated the inverse (i.e., how the memory of pre-saccadic objects is affected by abrupt but irrelevant changes in the post-saccadic visual scene). We also modulated the amount of attention to the relevant pre-saccadic object by having participants make a saccade either to it or elsewhere, and observed that pre-saccadic attentional facilitation affected how much post-saccadic changes disrupted trans-saccadic memory of pre-saccadic objects. Participants identified a flashed symbol (d, b, p, or q, among distracters) at one of six placeholders (figure “8”s) arranged in a circle around fixation while planning a saccade to one of them. They reported the identity of the symbol after the saccade. In Experiment 1, we changed the post-saccadic scene by removing the entire scene, only the placeholder where the pre-saccadic symbol was presented, or all other placeholders except this one. We observed reduced identification performance when only the saccade-target placeholder disappeared after the saccade. In Experiment 2, we changed one placeholder location (an inward/outward shift or a rotation relative to the saccade vector) after the saccade and observed that identification performance decreased with increasing shift/rotation of the saccade-target placeholder. We conclude that pre-saccadic memory is disrupted by abrupt, attention-capturing post-saccadic changes to the visual scene, particularly when these changes involve the object prioritized by being the goal of a saccade. These findings support the notion that limited trans-saccadic memory resources are disrupted when object correspondence at the saccade goal is broken through removal or location change.

4.
The cumulative probability of target discovery during search has been related experimentally to the relevant “conspicuity area”, the visual field in which the target can be discovered after a single eye fixation. During search, “non-targets” were found to be fixated spontaneously in proportion to their conspicuity area. Furthermore, small spontaneous eye fluctuations are described that occurred, during determination of the conspicuity areas, in the direction of the discovered target. Their occurrence and delay depended on the target eccentricity and the size of the conspicuity area. The results emphasize the relevance of the conspicuity area to research on visual selection.

5.
Both monetary and notional rewards are important to motivate individuals to prioritize specific items in visual working memory (VWM). However, it remains unclear whether the reward method and task difficulty are key factors that modulate these reward boosts in VWM. In this study, we designed two experiments to explore this question. Experiment 1 examined whether the reward method modulates reward boosts in VWM by manipulating the item type (high reward, low reward, equal reward) and reward method (monetary and notional). Experiment 2 examined whether task difficulty modulates reward boosts in VWM by manipulating the number of high-reward items (1, 2, 3), reward method, and item type. The results indicated reward boosts for high-reward items compared to low- and equal-reward items. Moreover, VWM performance was higher in the monetary reward condition than in the notional reward condition; however, there was no interaction between the reward method and item type. Additionally, a significant interaction was found between the reward number and item type: Reward boosts on VWM performance occurred only when one or two high-reward items were present. In conclusion, reward boosts in VWM tasks are modulated by task difficulty but not the reward method.

6.
Studying the sources of errors in memory recall has proven invaluable for understanding the mechanisms of working memory (WM). While one-dimensional memory features (e.g., color, orientation) can be analyzed using existing mixture modeling toolboxes to separate the influence of imprecision, guessing, and misbinding (the tendency to confuse features that belong to different memoranda), such toolboxes are not currently available for two-dimensional spatial WM tasks. Here we present a method to isolate sources of spatial error in tasks where participants have to report the spatial location of an item in memory, using two-dimensional mixture models. The method recovers simulated parameters well and is robust to the influence of response distributions and biases, as well as the number of nontargets and trials. To demonstrate the model, we fit data from a complex spatial WM task and show that the recovered parameters correspond well with previous spatial WM findings and with recovered parameters on a one-dimensional analogue of this task, suggesting convergent validity for this two-dimensional modeling approach. Because the extra dimension allows greater separation of memoranda and responses, spatial tasks turn out to be much better for separating misbinding from imprecision and guessing than one-dimensional tasks. Code for these models is freely available in the MemToolbox2D package and integrates with the commonly used MATLAB package MemToolbox.
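The two-dimensional mixture idea can be sketched as a likelihood with three components: a 2-D Gaussian centered on the target (imprecision), a uniform distribution over the response arena (guessing), and Gaussians centered on the nontargets (misbinding). The snippet below is a simplified illustration under those assumptions, not the MemToolbox2D implementation; the arena size, starting values, and simulated data are arbitrary.

```python
# Sketch of a 2-D spatial mixture model: target + uniform guessing + nontarget swaps.
# Shapes assumed: responses (n, 2), targets (n, 2), nontargets (n, n_nontargets, 2).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def neg_log_likelihood(params, responses, targets, nontargets, arena_area):
    sigma, p_guess, p_swap = params
    p_target = 1 - p_guess - p_swap
    if sigma <= 0 or min(p_target, p_guess, p_swap) < 0:
        return np.inf                              # invalid parameter region
    cov = sigma**2 * np.eye(2)
    like_target = multivariate_normal.pdf(responses - targets, mean=[0, 0], cov=cov)
    # Average the misbinding likelihood over the nontarget items on each trial.
    like_swap = np.mean(
        [multivariate_normal.pdf(responses - nt, mean=[0, 0], cov=cov)
         for nt in nontargets.transpose(1, 0, 2)], axis=0)
    like = p_target * like_target + p_guess / arena_area + p_swap * like_swap
    return -np.sum(np.log(like + 1e-12))

# Illustrative fit on simulated data, assuming uniform guessing over a 100 x 100 arena.
rng = np.random.default_rng(1)
n = 500
targets = rng.uniform(0, 100, size=(n, 2))
nontargets = rng.uniform(0, 100, size=(n, 2, 2))
responses = targets + rng.normal(0, 5, size=(n, 2))
fit = minimize(neg_log_likelihood, x0=[10, 0.1, 0.1],
               args=(responses, targets, nontargets, 100 * 100), method="Nelder-Mead")
print(fit.x)   # recovered imprecision (sigma), guess rate, swap rate
```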

7.
Most objects show high degrees of spatial regularity (e.g. beach umbrellas appear above, not under, beach chairs). The spatial regularities of real-world objects benefit visual working memory (VWM), but the mechanisms behind this spatial regularity effect remain unclear. The “encoding specificity” hypothesis suggests that spatial regularity will enhance the visual encoding process but will not facilitate the integration of information online during VWM maintenance. The “perception-alike” hypothesis suggests that spatial regularity will function in both visual encoding and online integration during VWM maintenance. We investigated whether VWM integrates sequentially presented real-world objects by focusing on the existence of the spatial regularity effect. Throughout five experiments, we manipulated the presentation (simultaneous vs. sequential) and regularity (with vs. without regularity) of memory arrays among pairs of real-world objects. The spatial regularity of memory objects presented simultaneously, but not sequentially, improved VWM performance. We also examined whether memory load, verbal suppression and masking, and memory array duration hindered the spatial regularity effect in sequential presentation. We found a stable absence of the spatial regularity effect, suggesting that the participants were unable to integrate real-world objects based on spatial regularities online. Our results support the encoding specificity hypothesis, wherein the spatial regularity of real-world objects can enhance the efficiency of VWM encoding, but VWM cannot exploit spatial regularity to help organize sampled sequential information into meaningful integrations.

8.
What are the contents of working memory? In both behavioral and neural computational models, a working memory representation is typically described by a single number, namely, a point estimate of a stimulus. Here, we asked if people also maintain the uncertainty associated with a memory and if people use this uncertainty in subsequent decisions. We collected data in a two-condition orientation change detection task; while both conditions measured whether people used memory uncertainty, only one required maintaining it. For each condition, we compared an optimal Bayesian observer model, in which the observer uses an accurate representation of uncertainty in their decision, to one in which the observer does not. We find that this “Use Uncertainty” model fits better for all participants in both conditions. In the first condition, this result suggests that people use uncertainty optimally in a working memory task when that uncertainty information is available at the time of decision, confirming earlier results. Critically, the results of the second condition suggest that this uncertainty information was maintained in working memory. We test model variants and find that our conclusions do not depend on our assumptions about the observer's encoding process, inference process, or decision rule. Our results provide evidence that people have uncertainty that reflects their memory precision on an item-specific level, maintain this information over a working memory delay, and use it implicitly in a way consistent with an optimal observer. These results challenge existing computational models of working memory to update their frameworks to represent uncertainty.
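A toy version of the contrast between the two observer classes can be written down for a single change-detection trial, ignoring the circularity of orientation and assuming Gaussian noise, equal priors, and a Gaussian change distribution. These assumptions, and all numbers below, are illustrative rather than the paper's.

```python
# Sketch contrasting a "use uncertainty" observer with a point-estimate observer in
# orientation change detection. Wrap-around and the paper's exact priors are ignored.
import numpy as np

def optimal_criterion(sigma_mem, sigma_test, sigma_change):
    """Discrepancy |test - memory| at which 'change' and 'no change' are equally likely."""
    s0_sq = sigma_mem**2 + sigma_test**2               # measurement noise only
    s1_sq = s0_sq + sigma_change**2                    # plus the change distribution
    return np.sqrt(2 * s0_sq * s1_sq / (s1_sq - s0_sq) * np.log(np.sqrt(s1_sq / s0_sq)))

def respond_change(d, sigma_mem, sigma_test, sigma_change, use_uncertainty, fixed_criterion=15.0):
    """Return True ('change') given the observed discrepancy d between test and memory."""
    if use_uncertainty:
        return d > optimal_criterion(sigma_mem, sigma_test, sigma_change)
    return d > fixed_criterion                         # ignores how noisy this memory was

# A noisy memory (sigma_mem = 20 deg) warrants a higher criterion than a precise one
# (sigma_mem = 5 deg); the point-estimate observer treats both alike.
for s in (5.0, 20.0):
    print(s, optimal_criterion(s, sigma_test=3.0, sigma_change=30.0))
```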

9.
Eye movements produce shifts in the positions of objects in the retinal image, but observers are able to integrate these shifting retinal images into a coherent representation of visual space. This ability is thought to be mediated by attention-dependent saccade-related neural activity that is used by the visual system to anticipate the retinal consequences of impending eye movements. Previous investigations of the perceptual consequences of this predictive activity typically infer attentional allocation using indirect measures such as accuracy or reaction time. Here, we investigated the perceptual consequences of saccades using an objective measure of attentional allocation, reverse correlation. Human observers executed a saccade while monitoring a flickering target object flanked by flickering distractors and reported whether the average luminance of the target was lighter or darker than the background. Successful task performance required subjects to integrate visual information across the saccade. A reverse correlation analysis yielded a spatiotemporal “psychophysical kernel” characterizing how different parts of the stimulus contributed to the luminance decision throughout each trial. Just before the saccade, observers integrated luminance information from a distractor located at the post-saccadic retinal position of the target, indicating a predictive perceptual updating of the target. Observers did not integrate information from distractors placed in alternative locations, even when they were nearer to the target object. We also observed simultaneous predictive perceptual updating for two spatially distinct targets. These findings suggest both that shifting neural representations mediate the coherent representation of visual space, and that these shifts have significant consequences for transsaccadic perception.
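Reverse correlation of this kind reduces, in its simplest classification-image form, to averaging the external luminance noise separately by response and differencing the averages. The sketch below uses simulated data with invented dimensions and a fake observer, so it conveys only the shape of the analysis, not the study's stimuli or kernel.

```python
# Sketch of a classification-image style psychophysical kernel: average the per-frame
# luminance noise separately for "lighter" and "darker" responses, then difference.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_frames, n_locations = 1000, 20, 5                  # flicker frames x stimulus locations
noise = rng.normal(size=(n_trials, n_frames, n_locations))     # per-frame luminance noise
# Fake observer: decision driven by the target location (index 0) late in the trial.
decision_variable = noise[:, 10:, 0].sum(axis=1)
responses = decision_variable > 0                              # True = "lighter"

kernel = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
print(kernel.shape)    # (n_frames, n_locations): when and where luminance drove the choice
```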

10.
When storing multiple objects in visual working memory, observers sometimes misattribute perceived features to incorrect locations or objects. These misattributions are called binding errors (or swaps) and have been previously demonstrated mostly in simple objects whose features are easy to encode independently and arbitrarily chosen, like colors and orientations. Here, we tested whether similar swaps can occur with real-world objects, where the connection between features is meaningful rather than arbitrary. In Experiments 1 and 2, observers were simultaneously shown four items from two object categories. Within a category, the two exemplars could be presented in either the same or different states (e.g., open/closed; full/empty). After a delay, both exemplars from one of the categories were probed, and participants had to recognize which exemplar went with which state. We found good memory for state information and exemplar information on their own, but a significant memory decrement for exemplar–state combinations, suggesting that binding was difficult for observers and swap errors occurred even for meaningful real-world objects. In Experiment 3, we used the same task, but in one-half of the trials, the locations of the exemplars were swapped at test. We found that there are more errors in general when the locations of exemplars were swapped. We concluded that the internal features of real-world objects are not perfectly bound in working memory, and location updates impair object and feature representations. Overall, we provide evidence that even real-world objects are not stored in an entirely unitized format in working memory.

11.
The “irrelevant-change distracting effect” refers to the effect of changes in irrelevant features on performance for the target feature; it has frequently been used to study information processing in visual working memory (VWM). In the current study, we report a novel interference effect in VWM: the topological-change interference effect (TCIE). In a series of six experiments, we examined the influence of topological and nontopological changes as irrelevant features on VWM using a color change detection paradigm. The results revealed that only topological changes, although task irrelevant, could produce a significant interference effect. In contrast, nontopological changes did not produce any evident interference effect. Moreover, the TCIE was a stable and lasting effect, regardless of changes in location, reporting method, the particular stimulus figures, other salient feature dimensions, or delay interval. Therefore, our results support the notion that the topological invariance that defines perceptual objects plays an essential role in maintaining representations in VWM.

12.
Accurate saccadic and vergence eye movements towards selected visual targets are fundamental to perceiving the 3-D environment. Despite this importance, shifts in eye gaze are not always perfect, given that they are frequently followed by small corrective eye movements. The oculomotor system receives distinct information from various visual cues that may cause incongruity in the planning of a gaze shift. To test this idea, we analyzed eye movements in humans performing a saccade task in a 3-D setting. We show that saccades and vergence movements towards peripheral targets are guided by monocular (perceptual) cues. Approximately 200 ms after the start of fixation at the perceived target, a fixational saccade corrected the eye positions to the physical target location. Our findings suggest that shifts in eye gaze occur in two phases: a large eye movement toward the perceived target location, followed by a corrective saccade that directs the eyes to the physical target location.

13.
Background: Leg-before-wicket (LBW) dismissals in cricket require exacting visual judgements by umpires, a task complicated by the requirement to also adjudicate on where the bowler's front foot is relative to the popping crease line at about the time the ball leaves the bowler's hand. The umpire must then judge as soon as possible afterwards whether the direction of the flight of the ball is in line with the batsman's stumps. This study investigated whether the accuracy of cricket umpires' decision-making in leg-before-wicket dismissals is affected by varying the method by which umpires monitor bowlers' feet at the point of delivery of the ball. Methods: Four umpires officiating under simulated match conditions reported their judgements of whether each delivery they observed pitched in line with the stumps. They did so under one of three conditions: watching the bowler's front foot, watching the bowler's back foot prior to the delivery of the ball by the bowler, or not monitoring the bowler's feet (‘no foot’ condition). Video recording, assisted by the use of superimposed stump-to-stump lines, was used to assess the accuracy of the umpires' responses. Results: There was no statistically significant difference in umpires' performance when comparing the front foot condition to the back foot condition, but performance for the ‘no foot’ condition was significantly better than for the front foot condition. Conclusions: These results suggest that umpires' performance in judging LBW dismissals would be improved if they did not have to monitor bowlers' feet to adjudicate ‘no-ball’ deliveries, but that there would be no benefit from a reversion from the current ‘front foot’ no-ball law to the previously used back foot law.

14.
The ability to accurately retain the binding between the features of different objects is a critical element of visual working memory. The underlying mechanism can be elucidated by analyzing correlations of response errors in dual-report experiments, in which participants have to report two features of a single item from a previously viewed stimulus array. Results from separate previous studies using different cueing conditions have indicated that location takes a privileged role in mediating binding between other features, in that largely independent response errors have been observed when location was used as a cue, but errors were highly correlated when location was one of the reported features. Earlier results from change detection tasks likewise support such a special role of location, but they also suggest that this role is substantially reduced for longer retention intervals in favor of object-based representation. In the present study, we replicated the findings of previous dual-report tasks with different cueing conditions, using matched stimuli and procedures. Moreover, we show that the observed patterns of error correlations remain qualitatively unchanged with longer retention intervals. Fits with neural population models demonstrate that the behavioral results at long, as well as short, delays are best explained by memory representations in independent feature maps, in which an item's features are bound to each other only via their shared location.
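The core dual-report analysis (correlating the two report errors across trials) can be sketched with simulated data in which a fraction of trials are whole-object confusions. The swap rate, noise levels, and 0 to 360° feature spaces are arbitrary assumptions used only to show how correlated errors arise; this is not the authors' population-model fit.

```python
# Sketch of the error-correlation analysis for dual-report data: compute the circular
# report error on each of two features per trial, then correlate the absolute errors.
import numpy as np

def circ_error(report, truth):
    """Signed circular error in degrees, wrapped to (-180, 180]."""
    return (report - truth + 180) % 360 - 180

rng = np.random.default_rng(3)
n = 1000
truth_color, truth_orient = rng.uniform(0, 360, n), rng.uniform(0, 360, n)
# Simulate occasional whole-object confusions, which yield large errors on both features at once.
swap = rng.random(n) < 0.15
report_color = np.where(swap, rng.uniform(0, 360, n), truth_color + rng.normal(0, 10, n))
report_orient = np.where(swap, rng.uniform(0, 360, n), truth_orient + rng.normal(0, 10, n))

err_c = np.abs(circ_error(report_color, truth_color))
err_o = np.abs(circ_error(report_orient, truth_orient))
print("error correlation:", np.corrcoef(err_c, err_o)[0, 1])   # > 0 when errors co-occur
```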

15.
The effectiveness of different parts of the retina in mediating optokinetic tracking of a moving pattern (OKN) has been tested with relatively small random dot patterns (diameter 30°) that were moved in eight directions. The retinal location of the stimulus was fixed by immobilization of the eye or by servo-control of the stimulus position by the eye position. The area of maximal optokinetic sensitivity was coextensive with the visual streak. The total sensitive area extended between 50° superior, 10° inferior, 75° posterior, and 100° anterior in the visual field of one eye. Beyond this area no OKN could be elicited. OKN smooth-phase velocity was maximal for stimulation in the anterior direction, about 3 times smaller for vertical movement, and 20 times smaller for posterior movement. Only the cooperation of the two eyes can assure balanced responses in all directions. Such a design supports the function of the rabbit's OKN in global stabilization, but is unsuitable for the pursuit of small objects.

16.
Purpose: Under monocular viewing conditions, humans and monkeys with infantile strabismus exhibit asymmetric naso-temporal (N-T) responses to motion stimuli. The goal of this study was to compare and contrast these N-T asymmetries during 3 visually mediated eye tracking tasks—optokinetic nystagmus (OKN), smooth pursuit (SP) response, and ocular following responses (OFR). Methods: Two adult strabismic monkeys were tested under monocular viewing conditions during OKN, SP, or OFR stimulation. The OKN stimulus was unidirectional motion of a 30° × 30° random dot pattern at 20°, 40°, or 80°/s for 1 minute. The OFR stimulus was brief (200 ms) unidirectional motion of a 38° × 28° white-noise pattern at 20°, 40°, or 80°/s. The SP stimulus consisted of foveal step-ramp target motion at 10°, 20°, or 40°/s. Results: Mean nasalward steady-state gain (0.87±0.16) was larger than temporalward gain (0.67±0.19) during monocular OKN (P<0.001). In monocular OFR, the asymmetry was manifested as a difference in OFR velocity gain (nasalward: 0.33±0.19, temporalward: 0.22±0.12; P=0.007). During monocular SP, mean nasal gain (0.97±0.2) was larger than temporal gain (0.66±0.14; P<0.001), and mean nasalward acceleration during pursuit initiation (156±61°/s²) was larger than temporalward acceleration (118±77°/s²; P=0.04). Comparison of the N-T asymmetry ratio across the 3 conditions using ANOVA showed no significant difference. Conclusions: N-T asymmetries were identified in all 3 visual tracking paradigms in both monkeys with either eye viewing. Our data are consistent with the current hypothesis for the mechanism of N-T asymmetry, which invokes an imbalance in cortical drive to brainstem circuits.
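For readers unfamiliar with the gain measures reported above, gain is simply eye velocity divided by stimulus (or target) velocity. The snippet below works through hypothetical numbers; the (N - T)/(N + T) asymmetry index shown is one common convention and is an assumption here, since the abstract does not define its asymmetry ratio.

```python
# Sketch of the gain and naso-temporal asymmetry arithmetic; all values are hypothetical.
nasal_eye_velocity, temporal_eye_velocity = 34.8, 26.8   # deg/s, illustrative
stimulus_velocity = 40.0                                  # deg/s

nasal_gain = nasal_eye_velocity / stimulus_velocity        # 0.87
temporal_gain = temporal_eye_velocity / stimulus_velocity  # 0.67
asymmetry_index = (nasal_gain - temporal_gain) / (nasal_gain + temporal_gain)
print(nasal_gain, temporal_gain, round(asymmetry_index, 3))
```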

17.
Face processing is a fast and efficient process due to its evolutionary and social importance. A majority of people direct their first eye movement to a featureless point just below the eyes that maximizes accuracy in recognizing a person's identity and gender. Yet, the exact properties or features of the face that guide the first eye movements and reduce fixational variability are unknown. Here, we manipulated the presence of the facial features and the spatial configuration of features to investigate their effect on the location and variability of first and second fixations to peripherally presented faces. Our results showed that observers can utilize the face outline, individual facial features, and feature spatial configuration to guide the first eye movements to their preferred point of fixation. The eyes have a preferential role in guiding the first eye movements and reducing fixation variability. Eliminating the eyes or altering their position had the greatest influence on the location and variability of fixations and resulted in the largest detriment to face identification performance. The other internal features (nose and mouth) also contribute to reducing fixation variability. A subsequent experiment measuring detection of single features showed that the eyes have the highest detectability (relative to other features) in the visual periphery, providing a strong sensory signal to guide the oculomotor system. Together, the results suggest a flexible multiple-cue approach that might be a robust solution to cope with how the varying eccentricities in the real world influence the ability to resolve individual feature properties and the preferential role of the eyes.

18.
19.
Purpose: Reasons for the development and progression of myopia remain unclear. Some studies show a high prevalence of myopia in certain occupational groups. This might imply that certain head and eye movements lead to ocular elongation, perhaps as a result of forces from the extraocular muscles, lids or other structures. The present study aims to analyse head and eye movements in myopes and non-myopes for near-vision tasks. Methods: The study analysed head and eye movements in a cohort of 14 myopic and 16 non-myopic young adults. Eye and head movements were monitored by an eye tracker and a motion sensor while the subjects performed three near tasks, which included reading on a screen, reading a book and writing. Horizontal eye and head movements were measured in terms of angular amplitudes. Vertical eye and head movements were analysed in terms of the range of the whole movement during the recording. Values were also assessed as a ratio based on the width of the printed text, which varied between participants due to individual working distances. Results: Horizontal eye and head movements were significantly different among the three tasks (p = 0.03 and p = 0.014, for eye and head movements, respectively, repeated measures ANOVA). Horizontal and vertical eye and head movements did not differ significantly between myopes and non-myopes. As expected, eye movements preponderated over head movements for all tasks and in both meridians. A positive correlation was found between mean spherical equivalent and the working distance for reading a book (r = 0.41; p = 0.025). Conclusions: The results show a similar pattern of eye movements in all participating subjects, although the amplitude of these movements varied considerably between individuals. It is possible that some individuals, when exposed to certain occupational tasks, might show different eye and head movement patterns.

20.
Individuals with central visual field loss often use a preferred retinal locus (PRL) to compensate for their deficit. We present a case study examining the eye movements of a subject with Stargardt's disease causing bilateral central scotomas, while performing a set of natural tasks including: making a sandwich; building a model; reaching and grasping; and catching a ball. In general, the subject preferred to use PRLs in the lower left visual field. However, there was considerable variation in the location and extent of the PRLs used. Our results demonstrate that a well-defined PRL is not necessary to adequately perform this set of tasks and that many sites in the peripheral retina may be viable for PRLs, contingent on task and stimulus constraints.
