Similar Articles
 A total of 20 similar articles were found (search time: 15 ms).
1.
In four variants of a speeded target detection task, we investigated the processing of color and motion signals in the human visual system. Participants were required to attend to both a particular color and direction of motion in moving random dot patterns (RDPs) and to report the appearance of the designated targets. Throughout, reaction times (RTs) to simultaneous presentations of color and direction targets were too fast to be reconciled with models proposing separate and independent processing of such stimulus dimensions. Thus, the data provide behavioral evidence for an integration of color and motion signals. This integration occurred even across superimposed surfaces in a transparent motion stimulus and also across spatial locations, arguing against object- and location-based accounts of attentional selection in such a task. Overall, the pattern of results is best explained by feature-based mechanisms of visual attention.
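The claim that redundant-target RTs are "too fast" for separate and independent processing is usually evaluated against a race-model bound on the RT distribution: if color and motion are detected by independent channels, the cumulative RT distribution for simultaneous targets cannot exceed the sum of the two single-target distributions. The sketch below is a minimal, hypothetical illustration of such a test (the RT samples and function names are invented; this is not the authors' analysis code).

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times at each time point."""
    rts = np.asarray(rts)
    return np.array([(rts <= t).mean() for t in t_grid])

def race_model_violations(rt_color, rt_motion, rt_both, t_grid):
    """Times at which the redundant-target CDF exceeds the independent-race bound
    F_color(t) + F_motion(t); any such times argue against separate, independent
    processing of the two dimensions (illustrative test only)."""
    bound = np.minimum(ecdf(rt_color, t_grid) + ecdf(rt_motion, t_grid), 1.0)
    return t_grid[ecdf(rt_both, t_grid) > bound]

# Hypothetical RT samples in seconds; real data would come from the detection task.
rng = np.random.default_rng(0)
rt_color = rng.normal(0.45, 0.05, 200)
rt_motion = rng.normal(0.47, 0.05, 200)
rt_both = rng.normal(0.38, 0.04, 200)   # faster than either single-target condition
t_grid = np.linspace(0.25, 0.65, 81)
print(race_model_violations(rt_color, rt_motion, rt_both, t_grid))
```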

2.
Gheri C, Baldassi S. Vision Research, 2008, 48(22): 2352-2358.
Crowding of oriented signals has been explained as linear, compulsory averaging of the signals from target and flankers [Parkes, L., Lund, J., Angelucci, A., Solomon, J. A., & Morgan, M. (2001). Compulsory averaging of crowded orientation signals in human vision. Nature Neuroscience, 4(7), 739-744]. On the other hand, a comparable search task with sparse stimuli is well modeled by a ‘Signed-Max’ rule that non-linearly integrates local tilt estimates [Baldassi, S., & Verghese, P. (2002). Comparing integration rules in visual search. Journal of Vision, 2(8), 559-570], as reflected by the bimodality of the distributions of reported tilts in a magnitude matching task [Baldassi, S., Megna, N., & Burr, D. C. (2006). Visual clutter causes high-magnitude errors. PLoS Biology, 4(3), e56]. This study compares the two models in the context of crowding by using a magnitude matching task to measure distributions of perceived target angles and a localization task to probe the degree of access to local information. Response distributions were bimodal, implying uncertainty, only in the presence of abutting flankers. Localization of the target is relatively preserved, but quantitatively it falls between the predictions of the two models, possibly suggesting local averaging followed by a max operation. This challenges the notion of global averaging and suggests some conscious access to local orientation estimates.

3.
Search performance for a target tilted in a known direction among vertical distractors is well explained by signal detection theory models. Typically these models use a maximum-of-outputs rule (Max rule) to predict search performance. The Max rule bases its decision on the largest response from a set of independent noisy detectors. When the target is tilted in either direction from the reference orientation and the task is to identify the sign of tilt, the loss of performance with set size is much greater than predicted by the Max rule. Here we varied the target tilt and measured psychometric functions for identifying the direction of tilt from vertical. Measurements were made at different set sizes in the presence of various levels of orientation jitter. The orientation jitter was set at multiples of the estimated internal noise, which was invariant across set sizes and measurement techniques. We then compared the data to the predictions of two models: a Summation model that integrates both signal and noise from local detectors, and a Signed-Max model that first picks the maxima on both sides of vertical and then chooses the value with the highest absolute deviation from the reference. Although the function relating thresholds to set size had a slope consistent with both the Signed-Max and the Summation models, the shape of individual psychometric functions was, in the most crucial conditions, better predicted by the Signed-Max model, which chooses the largest tilt while keeping track of the direction of tilt.
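Both decision rules described here are simple enough to simulate directly. The sketch below (illustrative only; the tilt, noise, and set-size values are not taken from the paper) contrasts a Summation rule, which pools signal and noise from every local detector, with a Signed-Max rule, which keeps the largest clockwise and counter-clockwise tilt estimates and reports the sign of whichever deviates more from vertical.

```python
import numpy as np

def signed_max_rule(responses):
    """Report the sign of the larger of the two signed maxima (clockwise vs.
    counter-clockwise local tilt estimates)."""
    pos = responses[responses > 0]
    neg = responses[responses < 0]
    best_pos = pos.max() if pos.size else 0.0
    best_neg = neg.min() if neg.size else 0.0
    return 1 if best_pos >= abs(best_neg) else -1

def summation_rule(responses):
    """Pool signal and noise from all local detectors and report the sign."""
    return 1 if responses.sum() >= 0 else -1

def percent_correct(rule, tilt, set_size, internal_noise, n_trials=5000, seed=1):
    """Sign-of-tilt accuracy: one detector sees the tilted target, the others see
    vertical distractors, and every detector adds independent Gaussian noise."""
    rng = np.random.default_rng(seed)
    correct = 0
    for _ in range(n_trials):
        stimulus = np.zeros(set_size)
        stimulus[0] = tilt                                  # target tilt in degrees
        responses = stimulus + rng.normal(0, internal_noise, set_size)
        correct += (rule(responses) == np.sign(tilt))
    return correct / n_trials

for n in (1, 2, 4, 8):
    print(n, percent_correct(signed_max_rule, 3.0, n, 2.0),
             percent_correct(summation_rule, 3.0, n, 2.0))
```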

4.
Self-motion through an environment involves a composite of signals such as visual and vestibular cues. Building upon previous results showing that visual and vestibular signals combine in a statistically optimal fashion, we investigated the relative weights of visual and vestibular cues during self-motion. The experiment comprised three conditions: vestibular alone, visual alone (with four different standard heading values), and visual-vestibular combined. In the combined-cue condition, inter-sensory conflicts were introduced (Δ = ±6° or ±10°). Participants performed a 2-interval forced-choice task in all conditions and were asked to judge in which of the two intervals they moved more to the right. The cue-conflict condition revealed the relative weights associated with each modality. We found that even when there was a relatively large conflict between the visual and vestibular cues, participants exhibited a statistically optimal reduction in variance. On the other hand, the pattern of results in the unimodal conditions did not predict the weights in the combined-cue condition. Specifically, visual-vestibular cue combination was not predicted solely by the reliability of each cue; rather, more weight was given to the vestibular cue.
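The "statistically optimal" combination referred to here is normally formalized as reliability-weighted (maximum-likelihood) averaging: each cue is weighted by its inverse variance, and the combined estimate has lower variance than either cue alone. A rough worked example with hypothetical unimodal thresholds (the numbers are not from the study):

```python
import numpy as np

def mle_weights(sigma_vis, sigma_vest):
    """Reliability-based weights derived from unimodal heading thresholds."""
    r_vis, r_vest = 1 / sigma_vis**2, 1 / sigma_vest**2
    w_vis = r_vis / (r_vis + r_vest)
    return w_vis, 1 - w_vis

def combined_sigma(sigma_vis, sigma_vest):
    """Predicted bimodal threshold if the two cues are combined optimally."""
    return np.sqrt(sigma_vis**2 * sigma_vest**2 / (sigma_vis**2 + sigma_vest**2))

def predicted_pse_shift(delta, w_vis):
    """Shift of the point of subjective equality when the visual heading is offset
    by +delta/2 and the vestibular heading by -delta/2."""
    return w_vis * (delta / 2) - (1 - w_vis) * (delta / 2)

# Hypothetical unimodal thresholds (degrees) and a 6-degree cue conflict.
sigma_vis, sigma_vest = 2.0, 3.5
w_vis, w_vest = mle_weights(sigma_vis, sigma_vest)
print(w_vis, w_vest)                                   # relative cue weights
print(combined_sigma(sigma_vis, sigma_vest))           # optimal variance reduction
print(predicted_pse_shift(6.0, w_vis))                 # predicted bias under conflict
```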

5.
Donk M, van Zoest W. Vision Research, 2011, 51(19): 2156-2166.
The present study aimed to investigate whether people can selectively use salience information in search for a target. Observers were presented with a display consisting of multiple homogeneously oriented background lines and two orientation singletons. The orientation singletons differed in salience, where salience was defined by their orientation contrast relative to the background lines. Observers were required to make a speeded eye movement towards a target, which was either the most or the least salient of the two orientation singletons. The specific orientation of the target was either constant or variable over a block of trials, such that observers had varying knowledge concerning the target identity. Instruction - whether people were told to move to the most or the least salient item - had only a minimal effect on performance. Short-latency eye movements were completely salience driven; here it did not matter whether people were searching for the most or least salient element. Long-latency eye movements were marginally affected by instruction, in particular when observers knew the target identity. These results suggest that even though people use salience information in oculomotor selection, they cannot use this information in a goal-driven manner. The results are discussed in terms of current models of visual selection.

6.
Visual search can simply be defined as the task of looking for objects of interest in cluttered visual environments. Typically, the human visual system succeeds at this by making a series of rapid eye movements called saccades, interleaved with discrete fixations. However, very little is known about how the brain programs saccades and selects fixation loci in such naturalistic tasks. In the current study, we use a technique developed in our laboratory based on reverse correlation and stimuli that emulate the natural visual environment to examine observers' strategies when seeking low-contrast targets of various spatial frequency and orientation characteristics. We present four major findings. First, we provide strong evidence of visual guidance in saccadic targeting, characterized by saccadic selectivity for spatial frequencies and orientations close to those of the search target. Second, we show that observers exhibit inaccuracies and biases in their estimates of target features. Third, a complementarity effect is generally observed: the absence of certain frequency components in distracters affects whether they are fixated or mistakenly selected as the target. Finally, an unusual phenomenon is observed whereby distracters containing close-to-vertical structures are fixated in searches for nonvertically oriented targets. Our results provide evidence for the involvement of band-pass mechanisms along feature dimensions (spatial frequency and orientation) during visual search.
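Reverse correlation in this setting amounts to averaging the stimulus content around the locations observers actually fixated, then comparing that average with the target's spatial-frequency and orientation content. A minimal sketch of that averaging step, with synthetic noise fields and arbitrary fixation coordinates standing in for real stimuli and eye-tracking data:

```python
import numpy as np

def fixation_triggered_average(stimuli, fixations, patch=8):
    """Average the stimulus patch centred on each fixation; structure in this
    average reveals the features that attracted saccades."""
    half = patch // 2
    acc, n = np.zeros((patch, patch)), 0
    for field, (y, x) in zip(stimuli, fixations):
        h, w = field.shape
        if half <= y < h - half and half <= x < w - half:
            acc += field[y - half:y + half, x - half:x + half]
            n += 1
    return acc / max(n, 1)

# Hypothetical data: 100 noise stimuli and the location each one was fixated.
rng = np.random.default_rng(4)
stimuli = [rng.standard_normal((64, 64)) for _ in range(100)]
fixations = [(int(rng.integers(8, 56)), int(rng.integers(8, 56))) for _ in stimuli]
template = fixation_triggered_average(stimuli, fixations)
print(template.shape, float(template.mean()))
```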

7.
This research note assesses the role of target foreknowledge in visual search for categorically defined orientation targets, as first described by Wolfe et al. [Wolfe, J. M., Friedman-Hill, S. R., Stewart, M. I., & O'Connell, K. M. (1992). The role of categorisation in visual search for orientation. Journal of Experimental Psychology: Human Perception and Performance, 18, 34-49]. We compared search with known versus unknown (respond to the odd item) targets. An RT advantage for categorical search only emerged with known targets. The evidence points to an important role for top-down processes in search for categorically defined orientation targets.

8.
Stimuli in one modality can affect the appearance and discriminability of stimuli in another, but how they do so is not well understood. Here we propose a theory of the integration of sensory information across modalities. This is based on criterion setting theory (CST; Treisman and Williams, 1984), an extension of signal detection theory which models the setting and adjustment of decision criteria. The theory of sensory integration based on CST (CST-SI) offers an account of cross-modal effects on sensory decision-making; here we consider its application to orientation anisotropy. In this case, CST-SI postulates that the postural senses are concerned with the relations between momentary body posture and the cardinal dimensions of space, vertical and horizontal, and that they also contribute to stabilizing perception of the cardinal orientations in vision through actions on the corresponding visual decision criteria, but that they have little effect on perception of diagonal orientations. Predictions from CST-SI are tested by experimentally separating the contributions that different information sources make to stabilizing the visual criteria. It is shown that reducing relevant kinaesthetic input may increase the variance for discrimination of the visual cardinal axes but not the obliques. Predictions that shifts in the location of the psychometric function would be induced by varying the distribution of the test stimuli, and that this effect would be greater for oblique than for cardinal axes, were confirmed. In addition, peripheral visual stimuli were shown to affect the discrimination of cardinal but not oblique orientations at the focus of vision. These results support the present account of anisotropies.

9.
Visual search has attracted great interest because its ease under certain circumstances seemed to provide a way to understand how properties of early visual cortical areas could explain complex perception without resorting to higher-order psychological or neurophysiological mechanisms. Furthermore, there was the hope that properties of visual search itself might even reveal new cortical features or dimensions. The shortcomings of this perspective suggest that we abandon fixed canonical elementary particles of vision as well as a corresponding simple-to-complex cognitive architecture for vision. Instead, recent research has suggested a different organization of the visual brain, with putative high-level processing occurring very rapidly and often unconsciously. Given this outlook, we reconsider visual search under the broad category of recognition tasks, each having different trade-offs of computational resources between detail and scope. We conclude by noting recent trends showing how visual search is relevant to a wider range of issues in cognitive science, in particular to memory, decision making, and reward.

10.
Westheimer G. Vision Research, 2011, 51(9): 1058-1063.
Whether position and orientation shifts induced by monocular context also act as a disparity for purposes of stereoscopy was investigated experimentally, in order to examine the extent to which lateral spatial localization and stereoscopic depth share circuitry. A monocular tilt illusion in a line does not lead to a commensurate depth tilt of that line in binocular view, nor does a position shift in a bisection task caused by a gap within monocular dynamic random noise produce the commensurate depth displacement. Interocular transfer of monocularly-induced shifts, which might explain such findings, was eliminated as a factor. The results can therefore be interpreted as indicators of channeling and ordering of spatial signal paths in the visual cortex, and imply that two-dimensional contextual interactions operate at a processing level beyond the one at which disparity has already been extracted.

11.
12.
Color in visual search.
D'Zmura M. Vision Research, 1991, 31(6): 951-966.
Colored targets pop out of displays under conditions in which the standard red-green, yellow-blue and black-white mechanisms cannot directly mediate detection. Experimental evidence suggests that observers possess chromatic detection mechanisms tuned to intermediate hues such as orange as well as to hues characterizing the standard color-opponent mechanisms and that these mechanisms, as a group, form a fine-grained representation of hue within the central visual field. Spatially-parallel search is mediated by a single such mechanism that is spectrally sensitive to the target chromaticity but insensitive to the distractor chromaticities; different mechanisms are used to detect a single target in a way that depends on distractor chromaticities.
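One way to picture this account: a linear chromatic mechanism is cosine-tuned to hue angle in a color-opponent plane, and spatially parallel search is possible only when some mechanism responds strongly to the target hue while remaining essentially silent to every distractor hue. The toy sketch below uses made-up hue angles and an arbitrary response threshold; it illustrates the logic rather than the paper's actual model fits.

```python
import numpy as np

def mechanism_response(hue_angle, preferred_angle):
    """Half-wave rectified, cosine-tuned response of a linear chromatic mechanism
    to a stimulus hue angle (degrees) in a color-opponent plane."""
    return max(np.cos(np.radians(hue_angle - preferred_angle)), 0.0)

def parallel_search_possible(target_hue, distractor_hues, silence=0.2):
    """Parallel search is predicted if some mechanism (sampled every 15 degrees of
    preferred hue) responds to the target but barely to any distractor."""
    for preferred in range(0, 360, 15):
        target_resp = mechanism_response(target_hue, preferred)
        worst_distractor = max(mechanism_response(h, preferred) for h in distractor_hues)
        if target_resp > 0.8 and worst_distractor < silence:
            return True, preferred
    return False, None

# Hypothetical hue angles: an intermediate ("orange-like") target pops out when
# distractor hues are remote, but not when they closely flank the target hue.
print(parallel_search_possible(45, [135, 315]))   # (True, 45)
print(parallel_search_possible(45, [30, 60]))     # (False, None)
```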

13.
We demonstrate a strong sensory-motor coupling in visual localization in which experimental modification of the control of saccadic eye movements leads to an associated change in the perceived location of objects. Amplitudes of saccades to peripheral targets were altered by saccadic adaptation, induced by an artificial step of the saccade target during the eye movement, which leads the oculomotor system to recalibrate saccade parameters. Increasing saccade amplitudes induced concurrent shifts in perceived location of visual objects. The magnitude of perceptual shift depended on the size and persistence of errors between intended and actual saccade amplitudes. This tight agreement between the change of eye movement control and the change of localization shows that perceptual space is shaped by motor knowledge rather than simply constructed from visual input.

14.
Palmer J, Verghese P, Pavel M. Vision Research, 2000, 40(10-12): 1227-1268.
Most theories of visual search emphasize issues of limited versus unlimited capacity and serial versus parallel processing. In the present article, we suggest a broader framework based on two principles, one empirical and one theoretical. The empirical principle is to focus on conditions at the intersection of visual search and the simple detection and discrimination paradigms of spatial vision. Such simple search conditions avoid artifacts and phenomena specific to more complex stimuli and tasks. The theoretical principle is to focus on the distinction between high and low threshold theory. While high threshold theory is largely discredited for simple detection and discrimination, it persists in the search literature. Furthermore, a low threshold theory such as signal detection theory can account for some of the phenomena attributed to limited capacity or serial processing. In the body of this article, we compare the predictions of high threshold theory and three versions of signal detection theory to the observed effects of manipulating set size, discriminability, number of targets, response bias, external noise, and distractor heterogeneity. For almost all cases, the results are inconsistent with high threshold theory and are consistent with all three versions of signal detection theory. In the Discussion, these simple theories are generalized to a larger domain that includes search asymmetry, multidimensional judgements including conjunction search, response time, search with multiple eye fixations and more general stimulus conditions. We conclude that low threshold theories can account for simple visual search without invoking mechanisms such as limited capacity or serial processing.
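The central point, that performance can fall with set size without any capacity limit, drops out of a signal detection simulation of the max-of-outputs rule: every added detector contributes another independent noise sample that can exceed the decision criterion, so false alarms rise and accuracy drops even though all items are processed in parallel. A minimal sketch with illustrative parameter values (not the paper's fitted models):

```python
import numpy as np

def max_rule_rates(d_prime, set_size, criterion, n_trials=20000, seed=2):
    """Yes/no detection with the max-of-outputs rule: each of set_size detectors
    draws unit-variance Gaussian noise; on target-present trials one detector
    also carries a signal of strength d_prime."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, (n_trials, set_size))
    signal = noise.copy()
    signal[:, 0] += d_prime
    hit_rate = (signal.max(axis=1) > criterion).mean()
    false_alarm_rate = (noise.max(axis=1) > criterion).mean()
    return hit_rate, false_alarm_rate

# Set-size effect without limited capacity or serial processing.
for n in (1, 2, 4, 8, 16):
    print(n, max_rule_rates(d_prime=1.5, set_size=n, criterion=1.0))
```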

15.
Champion RA, Warren PA. Vision Research, 2008, 48(17): 1820-1830.
In order to compute a representation of an object's size within a 3D scene, the visual system must scale retinal size by an estimate of the distance to the object. Evidence from size discrimination and visual search studies suggests that we have no access to the representation of retinal size when performing such tasks. In this study we investigate whether observers have early access to retinal size prior to scene size. Observer performance was assessed in a visual search task (requiring search within a 3D scene) in which processing was interrupted at a range of short presentation times. If observers have access to retinal size then we might expect to find a presentation time before which observers behave as if using retinal size and after which they behave as if using scene size. Observers searched for a larger or smaller target object within a group of objects viewed against a textured plane slanted at 0° or 60°. Stimuli were presented for 100, 200, 400 or 800 ms and immediately followed by a mask. We measured the effect of target location within a stimulus (near vs. far) on task performance and how this was influenced by the background slant. The results of experiments 1 and 2 suggest that background slant had a significant influence on performance at all presentation times, consistent with the use of scene size and not retinal size. Experiment 3 shows that this finding cannot be explained by a 2D texture contrast effect. Experiment 4 indicates that contextual information learned across a block of trials could be an important factor in such visual search experiments. In spite of this finding, our results suggest that distance scaling may occur prior to 100 ms and we find no clear evidence for explicit access to a retinal representation of size.
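The scaling operation at issue is geometrically straightforward: scene (physical) size is recovered by combining retinal (angular) size with an estimate of viewing distance. A short worked example with hypothetical numbers, just to make the computation the abstract refers to explicit:

```python
import math

def retinal_angle_deg(physical_size, distance):
    """Visual angle subtended by an object of a given physical size at a given distance."""
    return math.degrees(2 * math.atan(physical_size / (2 * distance)))

def scene_size(angle_deg, estimated_distance):
    """Recover physical ('scene') size by scaling retinal size with estimated distance."""
    return 2 * estimated_distance * math.tan(math.radians(angle_deg) / 2)

# A 10 cm object at 50 cm vs. 100 cm subtends very different retinal angles,
# but distance scaling returns the same scene size in both cases.
near_angle = retinal_angle_deg(10, 50)    # ~11.4 degrees
far_angle = retinal_angle_deg(10, 100)    # ~5.7 degrees
print(near_angle, far_angle)
print(scene_size(near_angle, 50), scene_size(far_angle, 100))   # both ~10 cm
```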

16.
In two samples, we demonstrate that visual search performance is influenced by memory for the locations of specific search items across trials. We monitored eye movements as observers searched for a target letter in displays containing 16 or 24 letters. From trial to trial the configuration of the search items was either Random, fully Repeated or similar but not identical (i.e., Intermediate). We found a graded pattern of response times across conditions with slowest times in the Random condition and fastest responses in the Repeated condition. We also found that search was comparably efficient in the Intermediate and Random conditions but more efficient in the Repeated condition. Importantly, the target on a given trial was fixated more accurately in the Repeated and Intermediate conditions relative to the Random condition. We suggest a tradeoff between memory and perception in search as a function of the physical scale of the search space.

17.
Nothdurft HC. Vision Research, 1999, 39(14): 2305-2310.
Visual search operates in different modes assumed to reflect serial and parallel processing. The basis of this distinction is not yet clear. It is often assumed that serial search involves sequential shifts of focal attention across a scene and that no such shifts occur in parallel search. Direct measurements of attention effects during search show that the focus of attention moves to the target (and away from non-targets) both in serial and parallel search. This suggests that the two search modes do not differ in their attentional load but perhaps in the way in which focal attention is directed to the target.

18.
Visual cognition depends critically on the moment-to-moment orientation of gaze. To change the gaze to a new location in space, that location must be computed and used by the oculomotor system. One of the most common sources of information for this computation is the visual appearance of an object. A crucial question is: how is the appearance information contained in the photometric array converted into a target position? This paper proposes a model that accomplishes this calculation. The model uses iconic scene representations derived from oriented spatiochromatic filters at multiple scales. Visual search for a target object proceeds in a coarse-to-fine fashion, with the target's largest-scale filter responses being compared first. Task-relevant target locations are represented as saliency maps, which are used to program eye movements. A central feature of the model is that it separates the targeting process, which changes gaze, from the decision process, which extracts information at or near the new gaze point to guide behavior. The model provides a detailed explanation for center-of-gravity saccades that have been observed in many previous experiments. In addition, the model's targeting performance has been compared with the eye movements of human subjects under identical conditions in natural visual search tasks. The results show good agreement both quantitatively (the search paths are strikingly similar) and qualitatively (the fixations of false targets are comparable).
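The targeting stage described here can be caricatured in a few lines: compute local filter responses over the scene at several scales, compare them with the target's responses starting at the coarsest scale, accumulate the mismatch into a saliency map, and send the next saccade to the best-matching location. The sketch below substitutes crude box-filter responses for the model's oriented spatiochromatic filters and uses synthetic data, so it illustrates the control flow only, not the published model.

```python
import numpy as np

def box_responses(image, scale):
    """Crude stand-in for multi-scale filter responses: mean intensity in a
    (2*scale+1)-pixel box around each location."""
    h, w = image.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = image[max(0, y - scale):y + scale + 1,
                              max(0, x - scale):x + scale + 1].mean()
    return out

def saliency_map(image, target_patch, scales=(4, 2, 1)):
    """Coarse-to-fine matching: compare scene responses at each scale with the
    target's central response and accumulate the (negative) mismatch."""
    cy, cx = target_patch.shape[0] // 2, target_patch.shape[1] // 2
    saliency = np.zeros_like(image, dtype=float)
    for scale in scales:                               # largest scale compared first
        scene_resp = box_responses(image, scale)
        target_resp = box_responses(target_patch, scale)[cy, cx]
        saliency -= np.abs(scene_resp - target_resp)
    return saliency

def next_fixation(saliency):
    """Program the next saccade to the most target-like location."""
    return np.unravel_index(np.argmax(saliency), saliency.shape)

rng = np.random.default_rng(3)
scene = rng.random((32, 32))
scene[20:24, 10:14] += 1.0                             # embed a hypothetical bright target
target = scene[20:24, 10:14].copy()
print(next_fixation(saliency_map(scene, target)))      # lands on or near the target
```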

19.
Concurrent processing of saccades in visual search
We provide evidence that the saccadic system can simultaneously program two saccades to different goals. We presented subjects with simple visual search displays in which they were required to make a saccade to an odd-colored target embedded in an array of distractors. When there was strong competition between target and distractor stimuli (due to color priming from previous trials), subjects were more likely to make a saccade to a distractor. Such error saccades were often followed, after a very short inter-saccadic interval (approximately 10-100 ms), by a second saccade to the target. The brevity of these inter-saccadic intervals suggests that the programming of the two saccades (one to a distractor and one to the target) overlapped in time. Using a saccade-contingent change in the search display, we show that new visual information presented during the initial saccade does not change the goal of the second saccade. This supports the idea that, by the end of the first saccade, programming of the second saccade is already well underway. We also elicited two-saccade responses (similar to those seen in search) using a double-step task, with the first saccade directed to the initial target step and the second saccade directed to the second target step. If the two saccades are programmed in parallel and programming of each saccade is triggered by one of the two target steps, the second saccade should occur at a relatively fixed time after the onset of the second target step, regardless of the timing of the initial saccade. This prediction was confirmed, supporting the idea that the two saccades are programmed in parallel. Finally, we observed that the shortest inter-saccadic intervals typically followed hypometric initial saccades, suggesting that the initial saccade may have been interrupted by the impending second saccade. Using predictions from physiological studies of interrupted saccades, we tested this hypothesis and found that the hypometric initial saccades did not appear to be interrupted in mid-flight. We discuss the significance of our findings for models of the saccadic system.

20.
Monnier P. Vision Research, 2006, 46(24): 4083-4090.
Search performance for targets defined along multiple dimensions was investigated with an accuracy visual search task. Initially, threshold was measured for targets that differed from homogeneous distractors along a single dimension (e.g., a reddish target among achromatic distractors, or a right-tilted target among vertically oriented distractors). Threshold was then measured for a multidimensional target (a redundant target) that differed from homogeneous distractors along two dimensions (e.g., a reddish AND right-tilted target among achromatic, vertically oriented distractors). Search performance was tested for multidimensional target combinations of chromaticity and luminance, chromaticity and orientation, and chromaticity and spatial frequency. Measurements were evaluated against several summation models, allowing for a test of the mechanisms mediating the detection of multidimensional targets in search. Measurements were generally consistent with probability summation, suggesting that the particular combinations of stimulus dimensions tested were coded by independent, noisy neural mechanisms.
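Probability summation makes a concrete quantitative prediction for the redundant target: if the two stimulus dimensions are detected by independent, noisy mechanisms, the target is missed only when both mechanisms miss it. A small worked example with hypothetical single-dimension detection probabilities (values are illustrative, not the study's data):

```python
def probability_summation(p_color, p_orientation):
    """Redundant-target detection probability under independent mechanisms:
    the target is missed only if both mechanisms miss it."""
    return 1 - (1 - p_color) * (1 - p_orientation)

# Hypothetical single-dimension detection probabilities near threshold.
p_color, p_orientation = 0.6, 0.6
print(probability_summation(p_color, p_orientation))   # 0.84, the independence prediction
# Additive summation within a single shared mechanism would generally predict a
# larger redundancy gain, which is how the model classes are distinguished.
```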

