Similar documents
 Found 20 similar documents (search time: 234 ms)
1.
Atkins JE  Jacobs RA  Knill DC 《Vision research》2003,43(25):2603-2613
We studied the hypothesis that observers can recalibrate their visual percepts when visual and haptic (touch) cues are discordant and the haptic information is judged to be reliable. Using a novel visuo-haptic virtual reality environment, we conducted a set of experiments in which subjects interacted with scenes consisting of two fronto-parallel surfaces. Subjects judged the distance between the two surfaces based on two perceptual cues: a visual stereo cue obtained when viewing the scene binocularly and a haptic cue obtained when subjects grasped the two surfaces between their thumb and index finger. Visual and haptic cues regarding the scene were manipulated independently so that they could either be consistent or inconsistent. Experiment 1 explored the effect of visuo-haptic inconsistencies on depth-from-stereo estimates. Our findings suggest that when stereo and haptic cues are inconsistent, subjects recalibrate their interpretations of the visual stereo cue so that depth-from-stereo percepts are in greater agreement with depth-from-haptic percepts. In Experiment 2 the visuo-haptic discrepancy took a different form when the two surfaces were near the subject than when they were far from the subject. The results indicate that subjects recalibrated their interpretations of the stereo cue in a context-sensitive manner that depended on viewing distance, thereby making them more consistent with depth-from-haptic estimates at all viewing distances. Together these findings suggest that observers' visual and haptic percepts are tightly coupled in the sense that haptic percepts provide a standard to which visual percepts can be recalibrated when the visual percepts are deemed to be erroneous.
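As a concrete illustration of the kind of recalibration described above, the sketch below nudges a linear stereo-to-depth mapping toward the haptic estimate with a delta rule on each discrepant trial. The linear form, the learning rate, and all variable names are illustrative assumptions, not the model used by Atkins, Jacobs and Knill:

```python
# Toy sketch of haptic-guided recalibration of a stereo depth estimate.
# The linear mapping (gain a, offset b) and the learning rate are
# illustrative assumptions, not the authors' fitted model.

def recalibrate(trials, a=1.0, b=0.0, lr=0.05):
    """Each trial is (stereo_signal, haptic_depth). The visual mapping
    depth = a * stereo + b is updated to reduce visuo-haptic error."""
    for stereo, haptic in trials:
        visual = a * stereo + b
        err = haptic - visual          # haptic percept serves as the standard
        a += lr * err * stereo         # delta-rule update on the gain
        b += lr * err                  # ...and on the offset
    return a, b

# Haptics consistently signals 1.2x the depth implied by stereo:
trials = [(s, 1.2 * s) for s in [2.0, 3.0, 4.0, 5.0]] * 100
a, b = recalibrate(trials)
```

With the consistent discrepancy above, the visual gain drifts toward the haptically defined value of 1.2, mirroring the recalibration the experiments report.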

2.
Experience-dependent integration of texture and motion cues to depth   (cited 3 times: 0 self-citations, 3 by others)
Jacobs RA  Fine I 《Vision research》1999,39(24):4062-4075
Previous investigators have shown that observers' visual cue combination strategies are remarkably flexible in the sense that these strategies adapt on the basis of the estimated reliabilities of the visual cues. However, these researchers have not addressed how observers acquire these estimated reliabilities. This article studies observers' abilities to learn cue combination strategies. Subjects made depth judgments about simulated cylinders whose shapes were indicated by motion and texture cues. Because the two cues could indicate different shapes, it was possible to design tasks in which one cue provided useful information for making depth judgments, whereas the other cue was irrelevant. The results of experiment 1 suggest that observers' cue combination strategies are adaptable as a function of training; subjects adjusted their cue combination rules to use a cue more heavily when the cue was informative on a task versus when the cue was irrelevant. Experiment 2 demonstrated that experience-dependent adaptation of cue combination rules is context-sensitive. On trials with presentations of short cylinders, one cue was informative, whereas on trials with presentations of tall cylinders, the other cue was informative. The results suggest that observers can learn multiple cue combination rules, and can learn to apply each rule in the appropriate context. Experiment 3 demonstrated a possible limitation on the context-sensitivity of adaptation of cue combination rules. One cue was informative on trials with presentations of cylinders at a left oblique orientation, whereas the other cue was informative on trials with presentations of cylinders at a right oblique orientation. The results indicate that observers did not learn to use different cue combination rules in different contexts under these circumstances.
These results are consistent with the hypothesis that observers' visual systems are biased to learn to perceive in the same way views of bilaterally symmetric objects that differ solely by a symmetry transformation. Taken in conjunction with the results of Experiment 2, this means that the visual learning mechanism underlying cue combination adaptation is biased such that some sets of statistics are more easily learned than others.

3.
Long-lasting perceptual biases can be acquired through training in cue recruitment experiments (e.g. Backus, 2011, Haijiang et al., 2006). Stimuli in previous studies contained motion, so the learning could be explained as an idiosyncrasy in some specific neuronal population such as the middle temporal (MT) area (Harrison & Backus, 2010a). The current study addresses the generality of cue recruitment by testing whether motion is necessary for learning a cue-contingent perceptual bias. We tested whether location and a novel cue, surface texture, would be recruited as cues to disambiguate perceptually bistable stationary 3-D shapes. In Experiment 1, stereo and luminance cues were used to disambiguate shape according to location in the visual field, and observers’ (N = 10) percepts on ambiguous test trials became biased in favor of the contingency during training. This bias lasted into the following day. This result, together with previous studies that used moving stimuli, suggests that location-contingent biases are easily learned by the visual system. In Experiment 2, location was fixed, and instead the new cue to be recruited was a surface texture. Learning did not occur when stimuli were para-foveal, the texture was task-irrelevant, and disparity was continuously present in training stimuli (N = 10). However, learning did occur when stimuli were central, the texture was task-relevant, and disparity was transient (N = 8). Thus, we show for the first time that an abstract cue, surface texture, can also be learned without motion.

4.
The visual system can use various cues to segment the visual scene into figure and background. We studied how human observers combine two of these cues, texture and color, in visual segmentation. In our task, the observers identified the orientation of an edge that was defined by a texture difference, a color difference, or both (cue combination). In a fourth condition, both texture and color information were available, but the texture and color edges were not spatially aligned (cue conflict). Performance markedly improved when the edges were defined by two cues, compared to the single-cue conditions. Observers only benefited from the two cues, however, when they were spatially aligned. A simple signal-detection model that incorporates interactions between texture and color processing accounts for the performance in all conditions. In a second experiment, we studied whether the observers are able to ignore a task-irrelevant cue in the segmentation task or whether it interferes with performance. Observers identified the orientation of an edge defined by one cue and were instructed to ignore the other cue. Three types of trial were intermixed: neutral trials, in which the second cue was absent; congruent trials, in which the second cue signaled the same edge as the target cue; and conflict trials, in which the second cue signaled an edge orthogonal to the target cue. Performance improved when the second cue was congruent with the target cue. Performance was impaired when the second cue was in conflict with the target cue, indicating that observers could not discount the second cue. We conclude that texture and color are not processed independently in visual segmentation.
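The usual benchmark against which such two-cue gains are judged assumes independent detection mechanisms, for which combined sensitivity is the quadratic sum of the single-cue sensitivities. The sketch below shows only this independence benchmark; the interaction model fitted in the paper goes beyond it:

```python
# Independence benchmark for combining two edge cues in a
# signal-detection framework. This is the standard textbook bound,
# not the interaction model described in the abstract.
import math

def dprime_independent(d_texture, d_color):
    """Combined sensitivity if texture and color edge signals are
    detected by independent mechanisms and optimally pooled:
    d'_comb = sqrt(d'_t^2 + d'_c^2)."""
    return math.hypot(d_texture, d_color)

def pc_2afc(d):
    """Percent correct for a two-alternative judgment, Phi(d'/sqrt(2)),
    written via the error function: 0.5 * (1 + erf(d'/2))."""
    return 0.5 * (1.0 + math.erf(d / 2.0))
```

Performance reliably above `pc_2afc(dprime_independent(...))` is one signature that the two cues are not processed independently, as the abstract concludes.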

5.
Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands. Both tasks rely on accurate 3D orientation estimates, but 3D position is potentially more important for grasping. Subjects placed an object on or picked up a disc in a virtual environment. On some trials, the monocular cues (aspect ratio and texture compression) and binocular cues (e.g., binocular disparity) suggested slightly different 3D orientations for the disc; these conflicts either were present upon initial stimulus presentation or were introduced after movement initiation, which allowed us to quantify how information from the cues accumulated over time. We analyzed the time-varying orientations of subjects' fingers in the grasping task and those of the object in the object placement task to quantify how different visual cues influenced motor control. In the first experiment, different subjects performed each task, and those performing the grasping task relied on binocular information more when orienting their hands than those performing the object placement task. When subjects in the second experiment performed both tasks in interleaved sessions, binocular cues were still more influential during grasping than object placement, and the different cue integration strategies observed for each task in isolation were maintained. In both experiments, the temporal analyses showed that subjects processed binocular information faster than monocular information, but task demands did not affect the time course of cue processing. 
How one uses visual cues for motor control depends on the task being performed, although how quickly the information is processed appears to be task invariant.

6.
Weighted linear cue combination with possibly correlated error   (cited 5 times: 0 self-citations, 5 by others)
Oruç I  Maloney LT  Landy MS 《Vision research》2003,43(23):2451-2468
We test hypotheses concerning human cue combination in a slant estimation task. Observers repeatedly adjusted the slant of a plane to 75 degrees. Feedback was provided after each setting and the observers trained extensively until their setting error stabilized. The slant of the plane was defined by either linear perspective alone (a grid of lines) or texture gradient alone (diamond-shaped texture elements) or the two cues together. We chose a High and Low variance version of each cue type and measured setting variability in four single-cue conditions (Low, High for each cue) and in the four possible combined-cue conditions (Low-Low, Low-High, etc.). We compared performance in the combined-cue conditions to predictions based on single-cue performance. The results were consistent with a linear combination of estimates from cues. Six out of eight observers did better with combined cues than with either cue alone. For three observers, performance was consistent with optimal combination of uncorrelated cues. Three other observers' results were also consistent with optimal combination, but with the assumption that internal cue estimates were correlated. The remaining two observers were consistent with sub-optimal cue combination.
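The optimal linear rule referred to above has a closed form: for two cues with error standard deviations σ1 and σ2 and error correlation ρ, the variance-minimizing weight on cue 1 is (σ2² − ρσ1σ2)/(σ1² + σ2² − 2ρσ1σ2), and setting ρ = 0 recovers the usual reliability weighting. A minimal sketch (illustrative code, not the authors' analysis):

```python
# Optimal weight for a linear combination s = w*s1 + (1-w)*s2 of two
# cue estimates whose errors may be correlated (rho = 0 gives the
# familiar uncorrelated reliability weighting).

def optimal_weight(sigma1, sigma2, rho=0.0):
    """Weight on cue 1 that minimizes the variance of the combined
    estimate, given error SDs sigma1, sigma2 and correlation rho."""
    num = sigma2 ** 2 - rho * sigma1 * sigma2
    den = sigma1 ** 2 + sigma2 ** 2 - 2 * rho * sigma1 * sigma2
    return num / den

def combined_variance(sigma1, sigma2, rho=0.0):
    """Variance of the optimally combined estimate."""
    w = optimal_weight(sigma1, sigma2, rho)
    return (w ** 2 * sigma1 ** 2 + (1 - w) ** 2 * sigma2 ** 2
            + 2 * w * (1 - w) * rho * sigma1 * sigma2)
```

For example, with σ1 = 1, σ2 = 2 and ρ = 0 the weight on cue 1 is 0.8 and the combined variance is 0.8; with ρ = 0.5 the correlated cue 2 is ignored entirely (w = 1), illustrating how correlation changes the optimal prediction.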

7.
Different types of texture produce differences in slant-discrimination performance (P. Rosas, F. A. Wichmann, & J. Wagemans, 2004). Under the assumption that the visual system is sensitive to the reliability of different depth cues (M. O. Ernst & M. S. Banks, 2002; L. T. Maloney & M. S. Landy, 1989), it follows that the texture type should affect the influence of the texture cue in depth-cue combination. We tested this prediction by combining different texture types with object motion in a slant-discrimination task in two experiments. First, we used consistent cues to observe whether our subjects behaved as if they were linearly combining independent estimates from texture and motion in a statistically optimal fashion (M. O. Ernst & M. S. Banks, 2002). Only 4% of our results were consistent with such an optimal combination of uncorrelated estimates, whereas about 46% of the data were consistent with an optimal combination of correlated estimates from cues. Second, we measured the weights for the texture and motion cues using perturbation analysis. The results showed a large influence of the motion cue and an increasing weight for the texture cue for larger slants. However, in general, the texture weights did not follow the reliability of the textures. Finally, we fitted the correlation coefficients of estimates individually for each texture, motion condition, and observer. This allowed us to fit our data from both experiments to an optimal cue combination model with correlated estimates, but inspection of the fitted parameters showed no clear, psychophysically interpretable pattern. Furthermore, the fitted motion thresholds as a function of texture type were correlated with the slant thresholds as a function of texture type. One interpretation of such a finding is a strong coupling of cues.

8.
The phenomenon of crossmodal dynamic visual capture occurs when the direction of motion of a visual cue causes a weakening or reversal of the perceived direction of motion of a concurrently presented auditory stimulus. It is known that there is a perceptual bias towards looming compared to receding stimuli, and faster bimodal reaction times have recently been observed for looming cues compared to receding cues (Cappe et al., 2009). The current studies aimed to test whether visual looming cues are associated with greater dynamic capture of auditory motion in depth compared to receding signals. Participants judged the direction of an auditory motion cue presented with a visual looming cue (expanding disk), a visual receding cue (contracting disk), or a visual stationary cue (static disk). Visual cues were presented either simultaneously with the auditory cue, or after 500 ms. We found increased levels of interference with looming visual cues compared to receding visual cues, as well as compared to asynchronous presentation and stationary visual cues. The results could not be explained by the weaker subjective strength of the receding auditory stimulus, as in Experiment 2 the looming and receding auditory cues were matched for perceived strength. These results show that dynamic visual capture of auditory motion in the depth plane is modulated by an adaptive bias for looming compared to receding visual cues.

9.
Knill DC 《Vision research》1998,38(17):2635-2656
Optical texture patterns contain three quasi-independent cues to planar surface orientation: perspective scaling, projective foreshortening and density. The purpose of this work was to estimate the perceptual weights assigned to these texture cues for discriminating surface orientation and to measure the visual system's reliance on an isotropy assumption in interpreting foreshortening information. A novel analytical technique is introduced which takes advantage of the natural cue perturbations inherent in stochastic texture stimuli to estimate cue weights and measure the influence of an isotropy assumption. Ideal observers were derived which compute the exact information content of the different texture cues in the stimuli used in the experiments and which either did or did not rely on an assumption of surface texture isotropy. Simulations of the ideal observers using the same stimuli shown to subjects in a slant discrimination task provided trial-by-trial estimates of the natural cue perturbations which were inherent in the stimuli. By back-correlating subjects' judgments with the different ideal observer estimates, we were able to estimate both the weights given to each cue by subjects and the strength of subjects' prior assumptions of isotropy. In all of the conditions tested, we found that subjects relied primarily on the foreshortening cue. A small, but significant weight was given to scaling information and no significant weight was given to density information. In conditions in which the surface textures deviated from isotropy by random amounts from stimulus to stimulus, subjects' judgments correlated well with the estimates of an ideal observer which incorrectly assumed surface texture isotropy. This correlation was not complete, however, suggesting that a soft form of the isotropy constraint was used.
Moreover, the correlation was significantly lower for textures containing higher-order information about surface orientation (skew of rectangular texture elements). The results of the analysis clearly implicate texture foreshortening as a primary cue for perceiving surface slant from texture and suggest that the visual system incorporates a strong, though not complete, bias to interpret surface textures as isotropic in its inference of surface slant from texture. They further suggest that local texture skew, when available in an image, contributes significantly to perceptual estimates of surface orientation.
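The back-correlation idea, regressing trial-by-trial judgments on the natural cue perturbations, can be sketched with ordinary least squares. The simulated perturbations, the noise level, and the "true" weights below are illustrative assumptions, not values from the study:

```python
# Sketch of recovering cue weights by regressing simulated judgments
# on per-trial cue perturbations (two predictors, closed-form least
# squares). All simulated quantities are illustrative assumptions.
import random

def recover_weights(perturb1, perturb2, judgments):
    """Least-squares weights for judgments ~ w1*perturb1 + w2*perturb2."""
    s11 = sum(p * p for p in perturb1)
    s22 = sum(p * p for p in perturb2)
    s12 = sum(a * b for a, b in zip(perturb1, perturb2))
    s1y = sum(p * y for p, y in zip(perturb1, judgments))
    s2y = sum(p * y for p, y in zip(perturb2, judgments))
    det = s11 * s22 - s12 * s12
    w1 = (s22 * s1y - s12 * s2y) / det
    w2 = (s11 * s2y - s12 * s1y) / det
    return w1, w2

random.seed(1)
n = 2000
fore = [random.gauss(0, 1) for _ in range(n)]    # foreshortening perturbations
scale = [random.gauss(0, 1) for _ in range(n)]   # scaling perturbations
# Hypothetical observer: heavy foreshortening weight, light scaling weight.
judge = [0.8 * f + 0.2 * s + random.gauss(0, 0.3)
         for f, s in zip(fore, scale)]
w_fore, w_scale = recover_weights(fore, scale, judge)
```

With enough trials, the regression recovers the weights the simulated observer was given, which is the logic behind reading cue weights off trial-by-trial correlations with ideal-observer estimates.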

10.
In an immersive virtual reality environment, subjects fail to notice when a scene expands or contracts around them, despite correct and consistent information from binocular stereopsis and motion parallax, resulting in gross failures of size constancy (A. Glennerster, L. Tcheang, S. J. Gilson, A. W. Fitzgibbon, & A. J. Parker, 2006). We determined whether the integration of stereopsis/motion parallax cues with texture-based cues could be modified through feedback. Subjects compared the size of two objects, each visible when the room was of a different size. As the subject walked, the room expanded or contracted, although subjects failed to notice any change. Subjects were given feedback about the accuracy of their size judgments, where the "correct" size setting was defined either by texture-based cues or (in a separate experiment) by stereo/motion parallax cues. Because of feedback, observers were able to adjust responses such that fewer errors were made. For texture-based feedback, the pattern of responses was consistent with observers weighting texture cues more heavily. However, for stereo/motion parallax feedback, performance in many conditions became worse such that, paradoxically, biases moved away from the point reinforced by the feedback. This can be explained by assuming that subjects remap the relationship between stereo/motion parallax cues and perceived size or that they develop strategies to change their criterion for a size match on different trials. In either case, subjects appear not to have direct access to stereo/motion parallax cues.

11.
The visual system continuously integrates multiple sensory cues to help plan and control everyday motor tasks. We quantified how subjects integrated monocular cues (contour and texture) and binocular cues (disparity and vergence) about 3D surface orientation throughout an object placement task and found that binocular cues contributed more to online control than planning. A temporal analysis of corrective responses to stimulus perturbations revealed that the visuomotor system processes binocular cues faster than monocular cues. This suggests that binocular cues dominated online control because they were available sooner, thus affecting a larger proportion of the movement. This was consistent with our finding that the relative influence of binocular information was higher for short-duration movements than long-duration movements. A motor control model that optimally integrates cues with different delays accounts for our findings and shows that cue integration for motor control depends in part on the time course of cue processing.
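One way to see why an earlier-available cue dominates is to let each cue's effective influence scale with both its reliability and the fraction of the movement it can affect. This is a toy formalization of that idea, not the authors' optimal-integration model; all parameter names are illustrative:

```python
# Toy model: a cue only influences the portion of the movement that
# remains after its processing delay, so its effective weight is
# reliability * available time. Illustrative assumption, not the
# fitted model from the study.

def delayed_cue_weights(rel_bino, rel_mono, delay_bino, delay_mono,
                        duration):
    """Normalized influence of binocular and monocular cues on a
    movement of the given duration (all times in seconds)."""
    avail_bino = max(0.0, duration - delay_bino)
    avail_mono = max(0.0, duration - delay_mono)
    w_bino = rel_bino * avail_bino
    w_mono = rel_mono * avail_mono
    total = w_bino + w_mono
    return w_bino / total, w_mono / total

# Equally reliable cues; binocular processing is assumed faster.
long_move = delayed_cue_weights(1.0, 1.0, 0.1, 0.2, 0.4)
short_move = delayed_cue_weights(1.0, 1.0, 0.1, 0.2, 0.25)
```

Under these assumptions the binocular share grows as movements get shorter, qualitatively matching the abstract's finding that binocular influence was higher for short-duration movements.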

12.
We examined learning at multiple levels of the visual system. Subjects were trained and tested on a same/different slant judgment task or a same/different curvature judgment task using simulated planar surfaces or curved surfaces defined by either stereo or monocular (texture and motion) cues. Taken as a whole, the results of four experiments are consistent with the hypothesis that learning takes place at both cue-dependent and cue-invariant levels, and that learning at these levels can have different generalization properties. If so, then cue-invariant mechanisms may mediate the transfer of learning from familiar cue conditions to novel cue conditions, thereby allowing perceptual learning to be robust and efficient. We claim that learning takes place at multiple levels of the visual system, and that a comprehensive understanding of visual perception requires a good understanding of learning at each of these levels.

13.
How does the visual system combine information from different depth cues to estimate three-dimensional scene parameters? We tested a maximum-likelihood estimation (MLE) model of cue combination for perspective (texture) and binocular disparity cues to surface slant. By factoring the reliability of each cue into the combination process, MLE provides more reliable estimates of slant than would be available from either cue alone. We measured the reliability of each cue in isolation across a range of slants and distances using a slant-discrimination task. The reliability of the texture cue increases as |slant| increases and does not change with distance. The reliability of the disparity cue decreases as distance increases and varies with slant in a way that also depends on viewing distance. The trends in the single-cue data can be understood in terms of the information available in the retinal images and issues related to solving the binocular correspondence problem. To test the MLE model, we measured perceived slant of two-cue stimuli when disparity and texture were in conflict and the reliability of slant estimation when both cues were available. Results from the two-cue study indicate, consistent with the MLE model, that observers weight each cue according to its relative reliability: Disparity weight decreased as distance and |slant| increased. We also observed the expected improvement in slant estimation when both cues were available. With few discrepancies, our data indicate that observers combine cues in a statistically optimal fashion and thereby reduce the variance of slant estimates below that which could be achieved from either cue alone. These results are consistent with other studies that quantitatively examined the MLE model of cue combination. Thus, there is a growing empirical consensus that MLE provides a good quantitative account of cue combination and that sensory information is used in a manner that maximizes the precision of perceptual estimates.  
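The MLE rule tested here weights each cue by its reliability (the inverse of its variance), and the combined estimate has lower variance than either cue alone. A minimal sketch of that rule (illustrative, not the authors' code):

```python
# Maximum-likelihood (reliability-weighted) combination of independent
# Gaussian cue estimates, e.g. texture and disparity estimates of slant.

def mle_combine(estimates, sigmas):
    """Combine cue estimates with SDs `sigmas`: each estimate is
    weighted by its reliability 1/sigma^2, and the combined variance
    1/sum(reliabilities) is below that of any single cue."""
    rels = [1.0 / s ** 2 for s in sigmas]
    total = sum(rels)
    mean = sum(r * e for r, e in zip(rels, estimates)) / total
    return mean, 1.0 / total  # combined estimate and its variance

# Example: texture says 10 deg (SD 2), disparity says 20 deg (SD 1).
slant, var = mle_combine([10.0, 20.0], [2.0, 1.0])
```

In this example the more reliable disparity cue gets weight 0.8, the combined estimate is 18.0, and the combined variance (0.8) is below the better single-cue variance (1.0), which is the variance reduction the abstract reports.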

14.
Guzzon D  Casco C 《Vision research》2011,51(23-24):2509-2516
The effect of visual experience is usually investigated through active (task dependent) training in a discrimination task. In contrast, the current work explored the psychophysical and electrophysiological correlates of passive (task independent) visual experience in texture segmentation by using an inattentional blindness-like paradigm (Mack et al., 1992). The psychophysical and electrophysiological responses to a segmented line-texture bar, with texture elements oriented either congruently (parallel) or noncongruently (orthogonal) to bar orientation, were collected after both short and long passive experience, with the texture presented on the background while subjects performed a primary task. Subjects were not able to distinguish the orientation of the bar (psychophysical results) after either short or long passive experience. However, the short experience produced an electrophysiological correlate of texture segmentation (N150), and the amplitude of this component was greater for the parallel bar, demonstrating that it reflected not simply local orientation discontinuities but also texture boundary-surface orientation congruency. This configurational effect in texture segmentation, which occurred without awareness during passive viewing, disappeared when the subjects had previously discriminated the orientation of the bar and when experience was lengthened, probably as a consequence of adaptation. Our study provides the first ERP evidence that boundary-surface relations are available during short passive visual experiences of very salient texture images and are suppressed by long experience, probably because of adaptation.

16.
Most laboratory visual search tasks involve many searches for the same target, while in the real world we typically change our target with each search (e.g. find the coffee cup, then the sugar). How quickly can the visual system be reconfigured to search for a new target? Here observers searched for targets specified by cues presented at different SOAs relative to the search stimulus. Search for different targets on each trial was compared to search for the same target over a block of trials. Experiments 1 and 2 showed that an exact picture cue acts within 200 ms to make varied target conjunction search as fast and efficient as blocked conjunction search. Word cues were slower and never as effective. Experiment 3 replicated this result with a task that required top-down information about target identity. Experiment 4 showed that the effects of an exact picture cue were not mandatory. Experiments 5 and 6 used pictures of real objects to cue targets by category level.

17.
A number of studies have demonstrated that people often integrate information from multiple perceptual cues in a statistically optimal manner when judging properties of surfaces in a scene. For example, subjects typically weight the information based on each cue to a degree that is inversely proportional to the variance of the distribution of a scene property given a cue's value. We wanted to determine whether subjects similarly use information about the reliabilities of arbitrary low-level visual features when making image-based discriminations, as in visual texture discrimination. To investigate this question, we developed a modification of the classification image technique and conducted two experiments that explored subjects' discrimination strategies using this improved technique. We created a basis set consisting of 20 low-level features and created stimuli by linearly combining the basis vectors. Subjects were trained to discriminate between two prototype signals corrupted with Gaussian feature noise. When we analyzed subjects' classification images over time, we found that they modified their decision strategies in a manner consistent with optimal feature integration, giving greater weight to reliable features and less weight to unreliable features. We conclude that optimal integration is not a characteristic specific to conventional visual cues or to judgments involving three-dimensional scene properties. Rather, just as researchers have previously demonstrated that people are sensitive to the reliabilities of conventionally defined cues when judging the depth or slant of a surface, we demonstrate that they are likewise sensitive to the reliabilities of arbitrary low-level features when making image-based discriminations.
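For two prototypes corrupted by independent Gaussian feature noise, the ideal linear observer weights each feature by its prototype difference divided by its noise variance, so unreliable features receive small weights. The sketch below illustrates that benchmark; the feature values are made-up examples, not the study's 20-feature basis:

```python
# Ideal linear observer for discriminating two prototypes under
# independent Gaussian feature noise: weight each feature by
# (difference between prototypes) / (noise variance). Example values
# are illustrative assumptions.

def ideal_weights(proto_a, proto_b, sigmas):
    """Per-feature weights of the ideal linear discriminant."""
    return [(a - b) / s ** 2 for a, b, s in zip(proto_a, proto_b, sigmas)]

def classify(x, proto_a, proto_b, sigmas):
    """Label a noisy stimulus by the sign of the weighted sum around
    the midpoint between the prototypes."""
    w = ideal_weights(proto_a, proto_b, sigmas)
    mid = [(a + b) / 2 for a, b in zip(proto_a, proto_b)]
    score = sum(wi * (xi - mi) for wi, xi, mi in zip(w, x, mid))
    return 'A' if score > 0 else 'B'
```

With prototypes (1, 1) and (-1, -1) and noise SDs (1, 2), the noisier second feature gets a quarter of the first feature's weight, which is the "greater weight to reliable features" pattern the abstract describes.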

18.
Bindemann M  Burton AM 《Vision research》2008,48(25):2555-2561
When faces are turned upside-down, many aspects of face processing are severely disrupted. Here we report an instance where this face inversion effect is not found. In a visual cueing paradigm an inverted face was paired with an inverted object in a cue display, followed by a target in one of the cue locations (Experiment 1). Responses were faster to face-cued targets, indicating an attention bias for inverted faces. When upright and inverted face cues were paired in Experiment 2, no attention bias for either cue type was found, suggesting that attention was drawn equally to both types of stimuli. Despite this, attention could be biased selectively toward upright or inverted faces in Experiment 3, by manipulating the predictiveness of either type of cue, which shows that observers can distinguish upright and inverted faces under these conditions. A fourth experiment provided a replication of Experiment 2 with an extended stimulus set and increased task demands. These findings suggest that visual attributes that can influence the allocation of an observer’s attention to faces are available in both upright and inverted orientations.

19.
Mamassian P  Landy MS 《Vision research》2001,41(20):2653-2668
The visual system relies on two types of information to interpret a visual scene: the cues that can be extracted from the retinal images and prior constraints that are used to disambiguate the scene. Many studies have looked at how multiple visual cues are combined. We examined the interaction of multiple prior constraints. The particular constraints studied here are assumptions the observer makes concerning the location of the light source (for the shading cue to depth) and the orientation of a surface (for depth based on image contours). The reliability of each of the two cues was manipulated by changing the contrast of different parts of the stimuli. We developed a model based on elements of Bayesian decision theory that permitted us to track the weights applied to each of the prior constraints as a function of the cue reliabilities. The results provided evidence that prior constraints behave just like visual cues to depth: cues with more reliable information have higher weight attributed to their corresponding prior constraint.
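The claim that prior constraints behave like cues falls out of Gaussian Bayesian fusion: a Gaussian prior enters the posterior mean exactly as an extra cue weighted by its reliability. A minimal sketch under those Gaussian assumptions (illustrative, not the authors' model):

```python
# A Gaussian prior combined with a Gaussian cue: the posterior mean
# is a reliability-weighted average, so the prior acts like one more
# cue. Gaussian assumptions are illustrative.

def fuse_with_prior(cue, sigma_cue, prior_mean, sigma_prior):
    """Posterior mean when a cue reading (SD sigma_cue) is combined
    with a Gaussian prior (mean prior_mean, SD sigma_prior)."""
    r_cue = 1.0 / sigma_cue ** 2
    r_prior = 1.0 / sigma_prior ** 2
    return (r_cue * cue + r_prior * prior_mean) / (r_cue + r_prior)
```

When the cue becomes unreliable the estimate falls back on the prior, and when the prior is diffuse the cue dominates, mirroring how the tracked prior weights vary with cue reliability in the abstract.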

20.
In the present study, we examined whether perceptual learning methods can be used to improve the performance of older individuals. Subjects performed a texture discrimination task in the peripheral visual field and a letter discrimination task in central vision. The SOA threshold was derived by presenting a mask following the stimuli. Older subjects (age greater than 65 years) were either trained for 2 days using near-threshold stimuli (experimental group) or were trained on the task with supra-threshold stimuli (older control group). The experimental group showed significant improvement in the task as a result of training, whereas the older control group showed no significant improvement. The improved performance post-training equaled that of a younger control group and was maintained for at least 3 months. The results of two additional experiments indicate that the improved performance was not due to changes in divided attention, that the effect of perceptual learning was location specific, and that the pattern of learning was similar to that of younger subjects. These results indicate that perceptual learning with near-threshold training can be used to improve visual performance among older individuals, that the improvements are not merely the result of practice with the visual task, and that the improvements do not transfer to non-trained locations.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号