An image-computable model of how endogenous and exogenous attention differentially alter visual perception
Authors: Michael Jigo, David J. Heeger, Marisa Carrasco
Institutions: Center for Neural Science, New York University, New York, NY 10003; Department of Psychology, New York University, New York, NY 10003
Abstract: Attention alters perception across the visual field. Typically, endogenous (voluntary) and exogenous (involuntary) attention similarly improve performance in many visual tasks, but they have differential effects in some tasks. Extant models of visual attention assume that the effects of these two types of attention are identical and consequently do not explain differences between them. Here, we develop a model of spatial resolution and attention that distinguishes between endogenous and exogenous attention. We focus on texture-based segmentation as a model system because it has revealed a clear dissociation between both attention types. For a texture for which performance peaks at parafoveal locations, endogenous attention improves performance across eccentricity, whereas exogenous attention improves performance where the resolution is low (peripheral locations) but impairs it where the resolution is high (foveal locations) for the scale of the texture. Our model emulates sensory encoding to segment figures from their background and predict behavioral performance. To explain attentional effects, endogenous and exogenous attention require separate operating regimes across visual detail (spatial frequency). Our model reproduces behavioral performance across several experiments and simultaneously resolves three unexplained phenomena: 1) the parafoveal advantage in segmentation, 2) the uniform improvements across eccentricity by endogenous attention, and 3) the peripheral improvements and foveal impairments by exogenous attention. Overall, we unveil a computational dissociation between each attention type and provide a generalizable framework for predicting their effects on perception across the visual field.

Endogenous and exogenous spatial attention prioritize subsets of visual information and facilitate their processing without concurrent eye movements (1–3). Selection by endogenous attention is goal-driven and adapts to task demands, whereas exogenous attention transiently and automatically orients to salient stimuli (1–3). In most visual tasks, both types of attention typically improve visual perception similarly [e.g., acuity (4–6), visual search (7, 8), perceived contrast (9–11)]. Consequently, models of visual attention do not distinguish between endogenous and exogenous attention (e.g., refs. 12–19). However, stark differences also exist. Each attention type differentially modulates neural responses (20, 21) and fundamental properties of visual processing, including temporal resolution (22, 23), texture sensitivity (24), sensory tuning (25), contrast sensitivity (26), and spatial resolution (27–34).

The effects of endogenous and exogenous attention are dissociable during texture segmentation, a visual task constrained by spatial resolution [reviews (1–3)]. Whereas endogenous attention optimizes spatial resolution to improve the detection of an attended texture (32–34), exogenous attention reflexively enhances resolution even when detrimental to perception (27–31, 34). Extant models of attention do not explain these well-established effects.

Two main hypotheses have been proposed to explain how attention alters spatial resolution. Psychophysical studies ascribe attentional effects to modulations of spatial frequency (SF) sensitivity (30, 33). Neurophysiological (13, 35, 36) and neuroimaging (37, 38) studies bolster the idea that attention modifies spatial profiles of neural receptive fields (RFs) (2). Both hypotheses provide qualitative predictions of attentional effects but do not specify their underlying neural computations.

Differences between endogenous and exogenous attention are well established in segmentation tasks and thus provide an ideal model system to uncover their separate roles in altering perception. Texture-based segmentation is a fundamental process of midlevel vision that isolates regions of local structure to extract figures from their background (39–41). Successful segmentation hinges on the overlap between the visual system’s spatial resolution and the levels of detail (i.e., SF) encompassed by the texture (39, 41, 42). Consequently, the ability to distinguish between adjacent textures varies as resolution declines toward the periphery (43–46). Each attention type differentially alters texture segmentation, demonstrating that their effects shape spatial resolution [reviews (1–3)].

Current models of texture segmentation do not explain performance across eccentricity and the distinct modulations by attention. Conventional models treat segmentation as a feedforward process that encodes the elementary features of an image (e.g., SF and orientation), transforms them to reflect the local structure (e.g., regions of similarly oriented bars), and then pools across space to emphasize texture-defined contours (39, 41, 47). Few of these models account for variations in resolution across eccentricity (46, 48, 49) or endogenous (but not exogenous) attentional modulations (18, 50). All others postulate that segmentation is a “preattentive” (42) operation whose underlying neural processing is impervious to attention (39, 41, 46–49).
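For concreteness, the following is a minimal sketch of the conventional feedforward pipeline described above (encode oriented SF channels, rectify, pool across space). It is not the authors' implementation; the filter bank, pooling scales, and the use of NumPy/SciPy are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def gabor(sf, theta, size=31, sigma=4.0):
    # Oriented Gabor tuned to spatial frequency `sf` (cycles/pixel) and orientation `theta`
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * sf * xr)

def texture_boundaries(image, sfs=(0.05, 0.1, 0.2), n_orient=4):
    # 1) Encode elementary features: filter with a bank of SF- and orientation-tuned Gabors
    # 2) Rectify (square) and pool locally to obtain "texture energy" per channel
    # 3) Compare energy across channels and smooth, so peaks mark texture-defined contours
    energies = []
    for sf in sfs:
        for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
            resp = fftconvolve(image, gabor(sf, theta), mode='same')
            energies.append(gaussian_filter(resp**2, sigma=8))
    energies = np.stack(energies)                           # (channels, H, W)
    return gaussian_filter(energies.std(axis=0), sigma=4)   # high values at texture borders

For a grayscale texture image `img` (a 2D NumPy array), `texture_boundaries(img)` returns a map whose peaks coincide with boundaries between regions of differently oriented elements.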
Here, we develop a computational model in which feedforward processing and attentional gain contribute to segmentation performance. We augment a conventional model of texture processing (39, 41, 47). Our model varies with eccentricity and includes contextual modulation within local regions in the stimulus via normalization (51), a canonical neural computation (52). The defining characteristic of normalization is that an individual neuron is (divisively) suppressed by the summed activity of neighboring neurons responsive to different aspects of a stimulus. We model attention as multiplicative gains [attentional gain factors (15)] that vary with eccentricity and SF. Attention shifts sensitivity toward fine or coarse spatial scales depending on the range of SFs enhanced.

Our model is image-computable, which allowed us to reproduce behavior directly from grayscale images used in psychophysical experiments (6, 26, 27, 29–33). The model explains three signatures of texture segmentation hitherto unexplained within a single computational framework (Fig. 1): 1) the central performance drop (CPD) (27–34, 43–46) (Fig. 1A), that is, the parafoveal advantage of segmentation over the fovea; 2) the improvements in the periphery and impairments at foveal locations induced by exogenous attention (27–32, 34) (Fig. 1B); and 3) the equivalent improvements across eccentricity by endogenous attention (32–34) (Fig. 1C).

Fig. 1. Signatures of texture segmentation. (A) CPD. Shaded region depicts the magnitude of the CPD. Identical axis labels are omitted in B and C. (B) Exogenous attention modulation. Exogenous attention improves segmentation performance in the periphery and impairs it near the fovea. (C) Endogenous attention modulation. Endogenous attention improves segmentation performance across eccentricity.

Whereas our analyses focused on texture segmentation, our model is general and can be applied to other visual phenomena. We show that the model predicts the effects of attention on contrast sensitivity and acuity, i.e., in tasks in which both endogenous and exogenous attention have similar or differential effects on performance. To preview our results, model comparisons revealed that normalization is necessary to elicit the CPD and that separate profiles of gain enhancement across SF (26) generate the effects of exogenous and endogenous attention on texture segmentation. A preferential high-SF enhancement reproduces the impairments by exogenous attention due to a shift in visual sensitivity toward details too fine to distinguish the target at foveal locations. The transition from impairments to improvements in the periphery results from exogenous attentional gain gradually shifting to lower SFs that are more amenable for target detection. Improvements by endogenous attention result from a uniform enhancement of SFs that encompass the target, optimizing visual sensitivity for the attended stimulus across eccentricity.
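To make the two key computations concrete, here is a minimal sketch of divisive normalization combined with a multiplicative attentional gain that varies across SF channels, following the verbal description above. The Gaussian suppressive pool, the semisaturation constant, and the particular gain profiles for exogenous and endogenous attention are illustrative assumptions, not the fitted model.

import numpy as np
from scipy.ndimage import gaussian_filter

def attention_normalization(drive, attn_gain, sigma_pool=8.0, sigma_sat=0.1):
    # drive: stimulus drive with shape (n_sf_channels, H, W)
    # attn_gain: multiplicative attentional gain, broadcastable to `drive`
    excitatory = attn_gain * drive                            # attention scales the excitatory drive
    # Suppressive drive: pooled activity of neighboring channels and spatial locations
    suppressive = gaussian_filter(excitatory.sum(axis=0), sigma=sigma_pool)
    # Divisive normalization: each unit is suppressed by the pooled activity
    return excitatory / (suppressive[None, :, :] + sigma_sat)

# Illustrative gain profiles across SF channels (values are arbitrary)
sf_channels = np.geomspace(0.5, 8.0, 6)                       # cycles/deg
exo_gain = 1.0 + 0.5 * sf_channels / sf_channels.max()        # preferentially boosts high SFs
endo_gain = np.full_like(sf_channels, 1.3)                    # uniform boost of the target SFs

drive = np.random.rand(len(sf_channels), 64, 64)              # stand-in for filter responses
resp_exo = attention_normalization(drive, exo_gain[:, None, None])
resp_endo = attention_normalization(drive, endo_gain[:, None, None])

Under these assumptions, the high-SF-weighted profile shifts sensitivity toward finer scales (as described for exogenous attention), whereas the flat profile enhances the target SFs uniformly across channels (as described for endogenous attention).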
Keywords: computational model, endogenous attention, exogenous attention, spatial resolution, texture