From the Cover: Reconstructing representations of dynamic visual objects in early visual cortex
Authors: Edmund Chong, Ariana M. Familiar, Won Mok Shim
Affiliations: New York University Neuroscience Institute, New York University School of Medicine, New York, NY 10016; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755
Abstract: As raw sensory data are partial, our visual system extensively fills in missing details, creating enriched percepts based on incomplete bottom-up information. Despite evidence for internally generated representations at early stages of cortical processing, it is not known whether these representations include missing information of dynamically transforming objects. Long-range apparent motion (AM) provides a unique test case because objects in AM can undergo changes both in position and in features. Using fMRI and encoding methods, we found that the “intermediate” orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM, is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path. This neural representation is absent when AM inducers are presented simultaneously and when AM is visually imagined. Our results demonstrate dynamic filling-in in V1 for object features that are interpolated during kinetic transformations.

Contrary to our seamless and unobstructed perception of visual objects, raw sensory data are often partial and impoverished. Thus, our visual system regularly fills in extensive details to create enriched representations of visual objects (1, 2). A growing body of evidence suggests that “filled-in” visual features of an object are represented at early stages of cortical processing where physical input is nonexistent. For example, increased activity in early visual cortex (V1) was found in retinotopic locations corresponding to nonstimulated regions of the visual field during the perception of illusory contours (3, 4) and color filling-in (5). Furthermore, recent functional magnetic resonance imaging (fMRI) studies using multivoxel pattern analysis (MVPA) methods show how regions of V1 lacking stimulus input can contain information regarding objects or scenes presented at other locations in the visual field (6, 7), held in visual working memory (8, 9), or used in mental imagery (10–13).

Although these studies have found evidence for internally generated representations of static stimuli in early cortical processing, the critical question remains whether and how interpolated visual feature representations are reconstructed in early cortical processing while objects undergo kinetic transformations, a situation that is more prevalent in our day-to-day perception.

To address this question, we examined the phenomenon of long-range apparent motion (AM): when a static stimulus appears at two different locations in succession, a smooth transition of the stimulus across the two locations is perceived (14–16). Previous behavioral studies have shown that subjects perceive illusory representations along the AM trajectory (14, 17) and that these representations can interfere with the perception of physically presented stimuli on the AM path (18–21). In line with this behavioral evidence, it was found that the perception of AM leads to increased blood oxygen level-dependent (BOLD) response in the region of V1 retinotopically mapped to the AM path (22–25), suggesting the involvement of early cortical processing. This activation increase induced by the illusory motion trace was also confirmed in neurophysiological investigations in ferrets and mice using voltage-sensitive dye (VSD) imaging (26, 27).
Despite these findings, however, a crucial question about the information content of the AM-induced signal remains unresolved: whether and how visual features of an object engaged in AM are reconstructed in early retinotopic cortex.

Using fMRI and a forward-encoding model (28–31), we examined whether content-specific representations of the intermediate state of a dynamic object engaged in apparent rotation could be reconstructed from the large-scale, population-level, feature-tuning responses in the nonstimulated region of early retinotopic cortex representing the AM path. To dissociate signals linked to high-level interpretations of the stimulus (illusory object features interpolated in motion) from those associated with the bottom-up stimulus input (no retinal input on the path) generating the perception of motion, we used rotational AM, which produces intermediate features that differ from the features of the physically present AM-inducing stimuli (unlike translational AM). We further probed the nature of such AM-induced feature representations by comparing feature-tuning profiles of the AM path in V1 with those evoked when the AM stimuli were visually imagined. Our findings suggest that intermediate visual features of dynamic objects, which are not present anywhere in the retinal input, are reconstructed in V1 during kinetic transformations via feedback processing. This result indicates, for the first time to our knowledge, that internally reconstructed representations of dynamic objects in motion are instantiated by retinotopically organized, population-level feature-tuning responses in V1.
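The forward-encoding (channel-encoding) approach referenced above is typically implemented by modeling each voxel's response as a weighted sum of hypothetical orientation-tuned channels, estimating the channel weights from training data, and then inverting the model on held-out data to reconstruct channel-level tuning responses. The Python/NumPy code below is a minimal, hypothetical sketch of that general two-step procedure on simulated data; the channel basis, channel count, and data shapes are illustrative assumptions and are not taken from the study's actual analysis pipeline.

import numpy as np

# Minimal sketch of a forward (channel) encoding model for orientation.
# All names, channel counts, and data shapes below are illustrative assumptions.

n_channels = 6                                     # orientation channels spanning 0-180 deg
centers = np.linspace(0, 180, n_channels, endpoint=False)

def channel_responses(oris, centers=centers, power=5):
    """Idealized channel tuning: half-wave-rectified cosine raised to a power.
    The angle is doubled so the tuning respects orientation's 180-deg period."""
    d = np.deg2rad(2.0 * (np.asarray(oris)[:, None] - centers[None, :]))
    return np.maximum(np.cos(d), 0.0) ** power     # trials x channels

# Simulated stand-in data: voxel responses = weights x channel responses + noise
rng = np.random.default_rng(0)
train_oris = rng.uniform(0, 180, 200)
test_oris  = rng.uniform(0, 180, 50)
W_true  = rng.normal(size=(100, n_channels))       # voxels x channels
C_train = channel_responses(train_oris).T          # channels x trials
B_train = W_true @ C_train + rng.normal(scale=0.5, size=(100, 200))
B_test  = W_true @ channel_responses(test_oris).T + rng.normal(scale=0.5, size=(100, 50))

# Step 1: estimate voxel-wise channel weights from training data (least squares, B = W C)
W_hat = B_train @ np.linalg.pinv(C_train)

# Step 2: invert the model on held-out data to reconstruct channel (tuning) responses
C_hat = np.linalg.pinv(W_hat) @ B_test             # channels x test trials

# Read out a decoded orientation per trial as the best-responding channel center
decoded = centers[np.argmax(C_hat, axis=0)]

In the logic described above, a channel-response profile reconstructed in this way from the nonstimulated AM-path voxels that peaks at the never-presented intermediate orientation would constitute the feature-selective evidence the study reports.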
Keywords: apparent motion; filling-in; dynamic interpolation; feedback; V1