Similar Articles
20 similar articles found (search time: 31 ms)
1.
Searching for an object in a cluttered environment takes advantage of several kinds of cues: explicit attentional cues, such as arrows; visual cues, such as saliency; and memory. Behavioral studies manipulating the spatial relationships between context and target in visual search suggest that the memory of context-target associations can be retrieved quickly and act at an early perceptual stage. On the other hand, neural responses are usually influenced by memory at a later, postperceptual stage. At which level of neural processing does the memory of context-target associations influence scene analysis? In our experiment, human subjects learned arbitrary associations between given spatial layouts of distractors and target positions while performing a classical visual search task. Behaviorally, context-target associations speed visual search, although subjects remain fully unaware of these associations. Magneto-encephalographic responses to visual displays containing or not containing relevant contextual information differ before 100 ms, much earlier than any known effect of recent experience. This effect occurs bilaterally at occipital sensors only, suggesting that context affects activity in the underlying early sensory cortices. Importantly, subjects do not show any sign of explicit knowledge about context-target associations: the earliness of the influence of contextual knowledge may be a hallmark of unconscious memory.

2.
To understand the neural mechanisms underlying humans’ exquisite ability to process briefly flashed visual scenes, we present a computer model that predicts human performance in a Rapid Serial Visual Presentation (RSVP) task. The model processes streams of natural scene images presented at a rate of 20 Hz to human observers, and attempts to predict when subjects will correctly detect whether one of the presented images contains an animal (target). We find that metrics of Bayesian surprise, which models both spatial and temporal aspects of human attention, differ significantly between RSVP sequences on which subjects detect the target (easy) and those on which subjects miss the target (hard). Extending beyond previous studies, we here assess the contribution of individual image features, including color opponencies and Gabor edges. We also investigate the effects of the spatial location of surprise in the visual field, rather than using only a single aggregate measure. A physiologically plausible feed-forward system, which optimally combines spatial and temporal surprise metrics for all features, predicts performance correctly in 79.5% of human trials. This is significantly better than a baseline maximum likelihood Bayesian model (71.7%). Attention, as measured by surprise, thus accounts for a large proportion of observer performance in RSVP. The time course of surprise in different feature types (channels) provides additional quantitative insight into the rapid bottom-up processes of human visual attention and recognition, and illuminates the phenomena of attentional blink and lag-1 sparing. Surprise also reveals classical Type-B-like masking effects intrinsic to natural image RSVP sequences. We summarize these findings in a discussion of a multistage model of visual attention.
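The surprise metric at the heart of this model is the KL divergence between the prior and posterior beliefs an observer holds about each visual feature, updated as each frame arrives. Below is a minimal sketch of that idea using a simple Gaussian belief with known observation noise; this stand-in model, the function name, and all numeric values are illustrative assumptions, not details taken from the paper.

```python
import math

def surprise_update(mu0, var0, x, noise_var):
    """One Bayesian update of a Gaussian belief about a feature response.

    Returns (mu1, var1, surprise), where surprise is the KL divergence
    KL(posterior || prior) in nats.
    """
    var1 = 1.0 / (1.0 / var0 + 1.0 / noise_var)          # posterior variance
    mu1 = var1 * (mu0 / var0 + x / noise_var)            # posterior mean
    kl = (math.log(math.sqrt(var0 / var1))
          + (var1 + (mu1 - mu0) ** 2) / (2.0 * var0)
          - 0.5)
    return mu1, var1, kl

# A steady stream of feature values produces little surprise;
# a sudden change produces a spike.
surprises = []
mu, var = 0.0, 1.0
for x in [0.1, 0.0, -0.1, 3.0, 3.1]:
    mu, var, s = surprise_update(mu, var, x, noise_var=0.5)
    surprises.append(s)
```

The jump at the fourth frame yields a much larger surprise value than the preceding stable frames, the kind of event the full model aggregates across features and spatial locations to predict target detection.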

3.
Previous research on the attentional influence of new objects and new motion in the environment has focused on studying these two visual features in isolation. In the present study, we examined new objects and new motion when they co-occurred within one scene. In addition, we evaluated the extent to which low-level luminance changes can contribute to the attention-capturing properties of each of these dynamic events. Results suggest that new objects have a larger impact on the allocation of attention than new motion and, under certain circumstances, the appearance of new objects may suppress the attentional benefit typically afforded to new motion. Lastly, our findings indicate that low-level factors account for some, but not all, of the attentional effects observed for new objects and new motion.

4.
Liu K, Jiang Y. Journal of Vision, 2005, 5(7): 650-658
Previous studies have painted a conflicting picture on the amount of visual information humans can extract from viewing a natural scene briefly. Although some studies suggest that a single glimpse is sufficient to put about five visual objects in memory, others find that not much is retained in visual memory even after prolonged viewing. Here we tested subjects' visual working memory (VWM) for a briefly viewed scene image. A sample scene was presented for 250 ms and masked, followed 1000 ms later by a comparison display. We found that subjects remembered fewer than one sample object. Increasing the viewing duration to about 15 s significantly enhanced performance, with approximately five visual objects remembered. We suggest that adequate encoding of a scene into VWM requires a long duration, and that visual details can accumulate in memory provided that the viewing duration is sufficiently long.

5.
The relationship between part shape and location is not well elucidated in current theories of object recognition. Here we investigated the role of shape and location of object parts on recognition, using a classification priming paradigm with novel 3D objects. In Experiment 1, the relative displacement of two parts comprising the prime gradually reduced the priming effect. In Experiment 2, presenting single-part primes in locations progressively different from those in the composite target had no effect on priming. In Experiment 3, manipulating the relative position of composite prime and target strongly affected priming. Finally, in Experiment 4 the relative displacement of single-part primes and composite targets did influence response time. Together, these findings are best interpreted in terms of a hybrid theory, according to which conjunctions of shape and location are explicitly represented at some stage of visual object processing.

6.
Observers often fail to detect the appearance of an unexpected visual object ("inattentional blindness"). Experiment 1 studied the effects of fixation position and spatial attention on inattentional blindness. Eye movements were measured. We found strong inattentional blindness to the unexpected stimulus even when it was fixated and appeared in one of the expected positions. The results suggest that spatial attention is not sufficient for attentional capture and awareness. Experiment 2 showed that the stimulus was easier to detect consciously when it was colored, but the relation of its color to the color of the attended objects had no effect on detection. The unexpected stimulus was easiest to detect when it belonged to the same category as the attended objects.

7.
Hwang AD, Wang HC, Pomplun M. Vision Research, 2011, 51(10): 1192-1205
The perception of objects in our visual world is influenced not only by low-level visual features such as shape and color, but also by high-level features such as meaning and the semantic relations among objects. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects’ gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects’ eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control.
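The semantic saliency maps described here are driven by pairwise similarity between object labels in an LSA space. The sketch below shows the core step: cosine similarity between label vectors, normalized into a saliency distribution over candidate gaze targets. The three-dimensional vectors and object names are made-up stand-ins for real LSA vectors derived from a large corpus.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical LSA vectors for object labels (real ones would come from
# an LSA decomposition of label co-occurrences in a text corpus).
vectors = {
    "stove":  [0.9, 0.1, 0.0],
    "pot":    [0.8, 0.3, 0.1],
    "window": [0.1, 0.2, 0.9],
}

def semantic_saliency(fixated, objects, vectors):
    """Saliency of each candidate object: its similarity to the currently
    fixated object, normalized so the values sum to 1."""
    sims = {o: cosine(vectors[fixated], vectors[o])
            for o in objects if o != fixated}
    total = sum(sims.values())
    return {o: s / total for o, s in sims.items()}

sal = semantic_saliency("stove", list(vectors), vectors)
```

Under this toy vocabulary, "pot" outranks "window" as the predicted next gaze target from "stove", mirroring the paper's finding that gaze transitions favor semantically similar objects.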

8.
How do we see an object when it is partially obstructed from view? The neural mechanisms of this intriguing process are unclear, in part because studies of visual object perception heretofore have largely used stimuli of individual objects, such as faces or common inanimate objects, each presented alone. But in natural images, visual objects are typically occluded by other objects. Computational studies indicate that the perception of an occluded object requires processes that are substantially different from those for an unoccluded object in plain view. We studied the neural substrates of the perception of occluded objects using functional magnetic resonance imaging (fMRI) of human subjects viewing stimuli that were designed to elicit or not elicit the percept of an occluded object but were physically very similar. We hypothesized that regions selective for occluded objects, if they exist, would be differentially active during the two conditions. We found two regions, one in the ventral object processing pathway and another in the dorsal object processing pathway, that were significantly responsive to occluded objects. More importantly, both regions were significantly more responsive to occluded objects than to unoccluded objects, and this enhanced response was not attributable to low-level differences in the stimuli, amodal completion per se, or the behavioral task. Our results identify regions in the visual cortex that are preferentially responsive to occluded objects relative to the other stimuli tested and indicate that these regions are likely to play an important role in the perception of occluded objects.

9.
In human subjects, two mechanisms for improving the efficiency of saccades in visual search have recently been described: color priming and concurrent processing of two saccades. Since the monkey provides an important model for understanding the neural underpinnings of target selection in visual search, we sought to explore the degree to which the saccadic system of monkeys uses these same mechanisms. Therefore, we recorded the eye movements of rhesus monkeys performing a simple color-oddity pop-out search task, similar to that used previously with human subjects. The monkeys were rewarded for making a saccade to the odd-colored target, which was presented with an array of three distractors. The target and distractors were randomly chosen to be red or green in each trial. Similar to what was previously observed for humans, we found that monkeys show the influence of a cumulative, short-term priming mechanism which facilitates saccades when the color of the search target happens to repeat from trial to trial. Furthermore, we found that like humans, when monkeys make an erroneous initial saccade to a distractor, they are capable of executing a second saccade to the target after a very brief inter-saccadic interval, suggesting that the two saccades have been programmed concurrently (i.e. in parallel). These results demonstrate a close similarity between human and monkey performance. We also made a new observation: we found that when monkeys make such two-saccade responses, the trajectory of the initial saccade tends to curve toward the goal of the subsequent saccade. This provides evidence that the two saccade goals are simultaneously represented on a common motor map, supporting the idea that the movements are processed concurrently. It also indicates that concurrent processing is not limited to brain areas involved in higher-level planning; rather, such parallel programming apparently occurs at a low enough level in the saccadic system that it can affect saccade trajectory.

10.
How do spatial constraints and meaningful scene regions interact to control overt attention during visual search for objects in real-world scenes? To answer this question, we combined novel surface maps of the likely locations of target objects with maps of the spatial distribution of scene semantic content. The surface maps captured likely target surfaces as continuous probabilities. Meaning was represented by meaning maps highlighting the distribution of semantic content in local scene regions. Attention was indexed by eye movements during the search for target objects that varied in the likelihood they would appear on specific surfaces. The interaction between surface maps and meaning maps was analyzed to test whether fixations were directed to meaningful scene regions on target-related surfaces. Overall, meaningful scene regions were more likely to be fixated if they appeared on target-related surfaces than if they appeared on target-unrelated surfaces. These findings suggest that the visual system prioritizes meaningful scene regions on target-related surfaces during visual search in scenes.

11.
Attentional and oculomotor capture by onset, luminance and color singletons
Irwin DE, Colcombe AM, Kramer AF, Hahn S. Vision Research, 2000, 40(10-12): 1443-1458
In three experiments we investigated whether attentional and oculomotor capture occur only when object-defining abrupt onsets are used as distractors in a visual search task, or whether other salient stimuli also capture attention and the eyes even when they do not constitute new objects. The results showed that abrupt onsets (new objects) are especially effective in capturing attention and the eyes, but that luminance increments that do not accompany the appearance of new objects capture attention as well. Color singletons do not capture attention unless subjects have experienced the color singleton as a search target in a previous experimental session. Both abrupt onsets and luminance increments elicit reflexive, involuntary saccades whereas transient color changes do not. Implications for theories of attentional capture are discussed.

12.
The concept of global precedence, which suggests that the global aspect of a scene is processed more rapidly than local details, was examined using the attentional blink paradigm. Eighteen adult subjects observed multiple sequences of complex global-local letter figures to see whether the attentional blink duration would be affected by the visual angle size of the stimulus. Within each sequence, the subject was directed to identify either a global or local red target letter and to detect whether a global or local probe letter (X) was presented in the sequence following the target letter. Stimuli were presented at three different sizes. Results showed significantly higher probe detection rates for global probes than for local probes at small stimulus sizes. At large stimulus sizes, however, mean correct probe detection was significantly higher in conditions requiring local attention than in those requiring global attention. No significant difference in probe detection performance was observed between global and local conditions at medium stimulus sizes. The results suggest that the rate of visual information processing varies according to the visual angle of the particular information, and support the suggestion that precedence is an important factor in the temporal processing of global-local information.

13.
Researchers have investigated whether attentional capture during visual search is driven by top-down processes, i.e. experimental goals and directives, or by bottom-up processes, i.e. the properties of the items within a search display. Some research has demonstrated that subjects cannot avoid attending to a task-irrelevant salient item, such as a singleton distractor, even when the identity of the target item is known. Research has also shown that repeating the target feature across successive search displays will prime the visual pop-out effect for a unique target (priming of pop-out). However, other research has shown that subjects can strategically guide their attention and may locate a target based on its uniqueness (a singleton search mode) or based on knowing and searching for the target feature (a feature search mode). When using the feature search mode subjects are attuned to the specific target feature and are therefore less susceptible to singleton distractor interference than when using the singleton search mode. Recent research has compared singleton distractor interference for targets that are variable and uncertain to targets that are constant and certain across search displays. When the target is constant subjects can use a feature search mode and should theoretically demonstrate less singleton distractor interference than when targets are variable and they must use a singleton search mode. Indeed, variable targets have historically demonstrated greater singleton distractor interference than constant targets, even when the target feature has been repeated. However, the current experiments found that singleton distractor interference was no greater for variable targets than for constant targets when targets and nontargets did not share shapes across search displays.

14.
At the onset of bistable stimuli, the brain needs to choose which of the competing perceptual interpretations will first reach awareness. Stimulus manipulations and cognitive control both influence this choice process, but the underlying mechanisms and interactions remain poorly understood. Using intermittent presentation of bistable visual stimuli, we demonstrate that short interruptions cause perceptual reversals upon the next presentation, whereas longer interstimulus intervals stabilize the percept. Top-down voluntary control biases this process but does not override the timing dependencies. Extending a recently introduced low-level neural model, we demonstrate that percept-choice dynamics in bistable vision can be fully understood with interactions in early neural processing stages. Our model includes adaptive neural processing preceding a rivalry resolution stage with cross-inhibition, adaptation, and an interaction of the adaptation levels with a neural baseline. Most importantly, our findings suggest that top-down attentional control over bistable stimuli interacts with low-level mechanisms at early levels of sensory processing before perceptual conflicts are resolved and perceptual choices about bistable stimuli are made.

15.
Seeing, sensing, and scrutinizing
Rensink RA. Vision Research, 2000, 40(10-12): 1469-1487
Large changes in a scene often become difficult to notice if made during an eye movement, image flicker, movie cut, or other such disturbance. It is argued here that this change blindness can serve as a useful tool to explore various aspects of vision. This argument centers around the proposal that focused attention is needed for the explicit perception of change. Given this, the study of change perception can provide a useful way to determine the nature of visual attention, and to cast new light on the way that it is - and is not - involved in visual perception. To illustrate the power of this approach, this paper surveys its use in exploring three different aspects of vision. The first concerns the general nature of seeing. To explain why change blindness can be easily induced in experiments but apparently not in everyday life, it is proposed that perception involves a virtual representation, where object representations do not accumulate, but are formed as needed. An architecture containing both attentional and nonattentional streams is proposed as a way to implement this scheme. The second aspect concerns the ability of observers to detect change even when they have no visual experience of it. This sensing is found to take on at least two forms: detection without visual experience (but still with conscious awareness), and detection without any awareness at all. It is proposed that these are both due to the operation of a nonattentional visual stream. The final aspect considered is the nature of visual attention itself - the mechanisms involved when scrutinizing items. Experiments using controlled stimuli show the existence of various limits on visual search for change. It is shown that these limits provide a powerful means to map out the attentional mechanisms involved.

16.
The representation of shape mediating visual object priming was investigated. In two blocks of trials, subjects named images of common objects presented for 185 ms that were bandpass filtered, either at high (10 cpd) or at low (2 cpd) center frequency with a 1.5 octave bandwidth, and positioned either 5 degrees right or left of fixation. The second presentation of an image of a given object type could be filtered at the same or different band, be shown at the same or translated (and mirror reflected) position, and be the same exemplar as that in the first block or a same-name different-shaped exemplar (e.g. a different kind of chair). Second block reaction times (RTs) and error rates were markedly lower than they were on the first block, which, in the context of prior results, was indicative of strong priming. A change of exemplar in the second block resulted in a significant cost in RTs and error rates, indicating that a portion of the priming was visual and not just verbal or basic-level conceptual. However, a change in the spatial frequency (SF) content of the image had no effect on priming despite the dramatic difference it made in appearance of the objects. This invariance to SF changes was also preserved with centrally presented images in a second experiment. Priming was also invariant to a change in left-right position (and mirror orientation) of the image. The invariance over translation of such a large magnitude suggests that the locus of the representation mediating the priming is beyond an area that would be homologous to posterior TEO in the monkey. We conclude that this representation is insensitive to low level image variations (e.g. SF, precise position or orientation of features) that do not alter the basic part-structure of the object. Finally, recognition performance was unaffected by whether low or high bandpassed images were presented either in the left or right visual field, giving no support to the hypothesis of hemispheric differences in processing low and high spatial frequencies.

17.
In seven experiments, observers searched for a scrambled object among normal objects. The critical comparison was between repeated search in which the same set of stimuli remained present in fixed positions in the display for many (>100) trials and unrepeated conditions in which new stimuli were presented on each trial. In repeated search conditions, observers monitored an essentially stable display for the disruption of a clearly visible object. This is an extension of repeated search experiments in which subjects search a fixed set of items for different targets on each trial (Wolfe, Klempen, & Dahlen, 2000) and can be considered as a form of a "change blindness" task. The unrepeated search was very inefficient, showing that a scrambled object does not "pop-out" among intact objects (or vice versa). Interestingly, the repeated search condition was just as inefficient, as if participants had to search for the scrambled target even after extensive experience with the specific change in the specific scene. The results suggest that the attentional processes involved in searching for a target in a novel scene may be very similar to those used to confirm the presence of a target in a familiar scene.

18.
Eye movements produce shifts in the positions of objects in the retinal image, but observers are able to integrate these shifting retinal images into a coherent representation of visual space. This ability is thought to be mediated by attention-dependent saccade-related neural activity that is used by the visual system to anticipate the retinal consequences of impending eye movements. Previous investigations of the perceptual consequences of this predictive activity typically infer attentional allocation using indirect measures such as accuracy or reaction time. Here, we investigated the perceptual consequences of saccades using an objective measure of attentional allocation, reverse correlation. Human observers executed a saccade while monitoring a flickering target object flanked by flickering distractors and reported whether the average luminance of the target was lighter or darker than the background. Successful task performance required subjects to integrate visual information across the saccade. A reverse correlation analysis yielded a spatiotemporal “psychophysical kernel” characterizing how different parts of the stimulus contributed to the luminance decision throughout each trial. Just before the saccade, observers integrated luminance information from a distractor located at the post-saccadic retinal position of the target, indicating a predictive perceptual updating of the target. Observers did not integrate information from distractors placed in alternative locations, even when they were nearer to the target object. We also observed simultaneous predictive perceptual updating for two spatially distinct targets. These findings suggest both that shifting neural representations mediate the coherent representation of visual space, and that these shifts have significant consequences for transsaccadic perception.
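The reverse correlation analysis used here recovers a classification image by correlating trial-by-trial stimulus noise with the observer's binary luminance judgments. Below is a minimal one-dimensional sketch with a simulated observer who bases every judgment on a single stimulus location; the trial count, number of locations, and decision rule are illustrative assumptions, not the experiment's actual parameters.

```python
import random

random.seed(0)

N_TRIALS, N_LOC = 2000, 4
TARGET = 1  # the simulated observer weighs only location 1

trials, responses = [], []
for _ in range(N_TRIALS):
    # Independent Gaussian luminance noise at each stimulus location.
    noise = [random.gauss(0.0, 1.0) for _ in range(N_LOC)]
    trials.append(noise)
    # Observer reports "lighter" (+1) or "darker" (-1) from location TARGET.
    responses.append(1 if noise[TARGET] > 0 else -1)

# Classification image: response-weighted mean of the stimulus noise.
# Locations that drive the decision emerge with a nonzero kernel weight.
kernel = [sum(r * t[i] for r, t in zip(responses, trials)) / N_TRIALS
          for i in range(N_LOC)]
```

The kernel weight at the attended location stands well clear of zero while the unattended locations hover near zero, which is how, in the actual study, information use at the distractor occupying the target's post-saccadic position could be detected.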

19.
PURPOSE: Visual attention, normally focused on the center of the visual field, can be shifted to a location in the periphery. This process facilitates the recognition of objects in the attended region. The present experiment was designed to investigate the time course of sustained attention that is known to augment stimulus perception in normal subjects. METHODS: Cortical activity of the human brain related to shifts of the attentional focus was examined with magnetoencephalography. Subjects had to identify a stimulus presented on a screen at one of two locations in the periphery of their visual fields. Sustained attention was either deployed toward the target by a preceding cue or not. RESULTS: Results confirmed a reaction time advantage on recognizing objects in the part of the visual field where attention had been deployed. A stronger magnetic brain response was detected for noncued targets at a latency of 260 to 380 ms after target onset. Source localization revealed a neuronal generator of the attention-related component in the parietal cortex. CONCLUSIONS: Sustained attention facilitates target detection. The component that is localized in the parieto-occipital cortex in the noncued condition is thought to reflect a transient shift of attention toward the target location.

20.
Short latency ocular-following responses in man
The ocular-following responses elicited by brief unexpected movements of the visual scene were studied in human subjects. Response latencies varied with the type of stimulus and decreased systematically with increasing stimulus speed but, unlike those of monkeys, were not solely determined by the temporal frequency generated by sine-wave stimuli. Minimum latencies (70-75 ms) were considerably shorter than those reported for other visually driven eye movements. The magnitude of the responses to sine-wave stimuli changed markedly with stimulus speed and only slightly with spatial frequency over the ranges used. When normalized with respect to spatial frequency, all responses shared the same dependence on temporal frequency (band-pass characteristics with a peak at 16 Hz), indicating that temporal frequency, rather than speed per se, was the limiting factor over the entire range examined. This suggests that the underlying motion detectors respond to the local changes in luminance associated with the motion of the scene. Movements of the scene in the immediate wake of a saccadic eye movement were on average twice as effective as movements 600 ms later: post-saccadic enhancement. Less enhancement was seen in the wake of saccade-like shifts of the scene, which themselves elicited weak ocular following, something not seen in the wake of real saccades. We suggest that there are central mechanisms that, on the one hand, prevent the ocular-following system from tracking the visual disturbances created by saccades but, on the other, promote tracking of any subsequent disturbance and thereby help to suppress post-saccadic drift. Partitioning the visual scene into central and peripheral regions revealed that motion in the periphery can exert a weak modulatory influence on ocular-following responses resulting from motion at the center. We suggest that this may help the moving observer to stabilize his/her eyes on nearby stationary objects.
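The normalization in this abstract rests on the standard identity for drifting sine-wave gratings: temporal frequency (Hz) equals speed (deg/s) times spatial frequency (cycles/deg). Stimuli with very different speeds can therefore land on the same temporal frequency, including the reported ~16 Hz response peak. A small illustration; the speed/spatial-frequency pairs are made up for the example:

```python
def temporal_frequency(speed_deg_per_s, spatial_freq_cpd):
    """Temporal frequency (Hz) of a drifting sine grating:
    tf = speed (deg/s) * spatial frequency (cycles/deg)."""
    return speed_deg_per_s * spatial_freq_cpd

# Three hypothetical speed / spatial-frequency pairs that all drive the
# motion detectors at the same 16 Hz temporal frequency:
pairs = [(64.0, 0.25), (32.0, 0.5), (16.0, 1.0)]
tfs = [temporal_frequency(v, sf) for v, sf in pairs]
```

If the limiting factor is temporal frequency rather than speed per se, as the abstract concludes, responses to all three such stimuli should fall at the same point of the band-pass tuning curve.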


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号