Similar articles
 10 similar articles found (search time: 156 ms)
1.
Galletti C, Fattori P. Neuropsychologia 2003;41(13):1717-1727
The visual system cannot rely on retinal information alone to perceive object motion, because identical retinal stimulation can be evoked both by the movement of objects in the field of view and by the movement of retinal images induced by the observer's own eye movements. We nevertheless clearly distinguish the two situations, perceiving object motion in the first case and stationarity in the second. The present work deals with the neuronal mechanisms likely involved in the detection of real motion. In monkeys, cells able to distinguish real from self-induced motion (real-motion cells) are distributed across several cortical areas of the dorsal visual stream. We suggest that the activity of these cells underlies motion perception, and hypothesize that they are the elements of a cortical network representing an internal map of a stable visual world. Supporting this view are the facts that: (i) the same cortical regions in humans are activated in brain-imaging studies during perception of object motion; and (ii) lesions of these same regions produce selective impairments in motion detection, such that patients interpret any retinal image motion as object motion, even when it results from their own eye movements. Among the areas of the dorsal visual stream rich in real-motion cells, V3A and V6, likely involved in the fast form and motion analyses needed for visual guidance of action, could use real-motion signals to orient the animal's attention towards moving objects and/or to help in grasping them. Areas MT/V5, MST and 7a, known to be involved in the control of pursuit eye movements and in the analysis of visual signals evoked by slow ocular movements, could use real-motion signals to evaluate motion properly during pursuit.
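The real-motion cell described above is often caricatured as a comparator that cancels self-induced retinal motion using an efference copy of the eye-movement command. The sketch below is an illustrative toy model, not code from the paper; all values are assumed for the example:

```python
# Toy comparator model of a "real-motion cell": subtract the retinal motion
# expected from the eye movement (via efference copy) from the measured
# retinal image motion to recover object motion in world coordinates.

def real_motion_signal(retinal_motion: float, eye_velocity: float) -> float:
    """Estimated object motion (deg/s).

    An eye movement of +v deg/s sweeps the image of a stationary world
    across the retina at -v deg/s, so adding the efference copy back in
    cancels self-induced retinal motion.
    """
    return retinal_motion + eye_velocity

# A stationary object viewed during 5 deg/s rightward pursuit produces
# -5 deg/s of retinal slip; the comparator reports zero object motion.
print(real_motion_signal(-5.0, 5.0))   # 0.0 -> perceived as stationary

# An object actually moving at 3 deg/s during the same pursuit:
print(real_motion_signal(-2.0, 5.0))   # 3.0 -> perceived as moving
```

With a perfect efference copy the comparator output is zero for all self-induced retinal motion, which is exactly the "internal map of a stable visual world" the authors hypothesize.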

2.
Humans have remarkable abilities in the dexterous use of tools to extend their physical capabilities. Although previous neuropsychological and functional neuroimaging studies have mainly focused on the contribution of frontal-parietal cerebral networks to skills for tool use, dense anatomical and functional connections are known to exist between the frontal-parietal regions and the lateral cerebellum, suggesting that the cerebellum also supports the information processing necessary for the dexterous use of tools. In this article, we review functional and structural imaging studies reporting that the cerebellum is related to the acquisition of neural mechanisms representing the input-output properties of controlled objects, including tools. These studies also suggest that such mechanisms are modularly organized in the cerebellum, corresponding to different object properties such as kinematic or dynamic properties and types of tools, and that they enable humans to cope flexibly with discrete changes in objects and environments by reducing interference and combining acquired modules. Based on these studies, we propose the hypothesis that the cerebellum contributes to the skillful use of tools by representing the input-output properties of tools and by providing information for predicting the sensory consequences of manipulation to the parietal regions, which are related to multisensory processing, and information for the necessary control of tools to the premotor regions, which contribute to the control of hand movements.
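The "internal forward model of a tool's input-output properties" can be illustrated with a minimal error-driven learner. This is a hypothetical sketch, not the article's model: the tool is assumed to be a linear object (consequence = 2 × command + 1), and the learning rule is a plain delta rule:

```python
# Hypothetical sketch of a forward model that learns a tool's input-output
# mapping by predicting the sensory consequence of each motor command and
# updating its parameters from the prediction error.

def tool_response(command: float) -> float:
    """Unknown tool dynamics that the forward model must learn (assumed)."""
    return 2.0 * command + 1.0

w, b = 0.0, 0.0      # forward-model parameters (slope, offset)
lr = 0.1             # learning rate

for step in range(2000):
    u = (step % 10) / 10.0            # motor command in [0, 0.9]
    predicted = w * u + b             # predicted sensory consequence
    error = tool_response(u) - predicted
    w += lr * error * u               # error-driven (delta-rule) update
    b += lr * error

# Once learned, the model predicts the consequences of novel commands,
# which is the predictive role proposed for the cerebellum in tool use.
print(round(w, 2), round(b, 2))       # converges near 2.0 and 1.0
```

Separate modules of this kind, one per tool or per dynamic regime, would support the modular organization and module recombination the review describes.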

3.
Recent cognitive and neuroimaging studies have examined the relationship between perception and action in the context of tools. These studies suggest that tools "potentiate" actions even when overt actions are not required in a task. Tools are unique objects because they have a visual structure that affords action and also a specific functional identity. The present studies investigated the extent to which a tool's representation for action is tied to its graspability or its functional use. Functional magnetic resonance imaging (fMRI) was used to examine the motor representations associated with different classes of graspable objects. Participants viewed and imagined grasping images of 3D tools with handles or neutral graspable shapes. During the viewing task, motor-related regions of cortex (posterior middle temporal gyrus, ventral premotor, and posterior parietal) were associated with tools compared to shapes. During the imagined grasping task, a frontal-parietal-temporal network of activation was seen with both types of objects. However, differences were found in the extent and location of premotor and parietal activation, and additional activation in the middle temporal gyrus and fusiform gyrus for tools compared to shapes. We suggest that the functional identity of graspable objects influences the extent of motor representations associated with them. These results have implications for understanding the interactions between "what" and "how" visual processing systems.

4.
Previous neuroimaging studies have identified a large network of cortical areas involved in semantic processing in the human brain, including left occipito-temporal and inferofrontal areas. Most studies, however, have investigated exclusively associative/functional semantic knowledge, using mainly words and/or language-related tasks, and this factor may have contributed to the large left-hemisphere superiority found in semantic processing and to the controversial involvement of left prefrontal structures. The present study investigates the neural basis of knowledge about visual objects, accessed exclusively through pictorial information. Regional cerebral blood flow (rCBF) was assessed using positron emission tomography (PET) during three conditions in right-handed normal volunteers: resting with eyes closed, retrieval of semantic information related to visual properties of objects (real size), and visual categorization based on physical properties of the image. Confirming previous experiments and neuropsychological findings, most activations were found in left occipito-temporal areas during retrieval of visual semantic knowledge. The absence of any activation in the left inferior prefrontal cortex during visual semantic processing confirms recent observations suggesting that this region is not involved in the retrieval of visual semantic knowledge about living entities. Rather, such knowledge about the visual properties of objects, stored close to the cortical regions mediating perception of those attributes, can be retrieved directly from these regions when visual images are used as entry-level stimuli.

5.
On the other hand: dummy hands and peripersonal space
Where are my hands? The brain can answer this question using sensory information arising from vision, proprioception, or touch. Other sources of information about the position of our hands can be derived from multisensory interactions (or potential interactions) with our close environment, such as when we grasp or avoid objects. The pioneering study of multisensory representations of peripersonal space was published in Behavioural Brain Research almost 30 years ago [Rizzolatti G, Scandolara C, Matelli M, Gentilucci M. Afferent properties of periarcuate neurons in macaque monkeys. II. Visual responses. Behav Brain Res 1981;2:147-63]. More recently, neurophysiological, neuroimaging, neuropsychological, and behavioural studies have contributed a wealth of evidence concerning hand-centred representations of objects in peripersonal space. This evidence is examined here in detail. In particular, we focus on the use of artificial dummy hands as powerful instruments to manipulate the brain's representation of hand position, peripersonal space, and of hand ownership. We also review recent studies of the 'rubber hand illusion' and related phenomena, such as the visual capture of touch, and the recalibration of hand position sense, and discuss their findings in the light of research on peripersonal space. Finally, we propose a simple model that situates the 'rubber hand illusion' in the neurophysiological framework of multisensory hand-centred representations of space.

6.
Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia patients (N=24), psychotic bipolar disorder patients (N=13) and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Deficits in the transfer of visual motion information to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders.

7.
Neuronal latencies and the position of moving objects
Neuronal latencies delay the registration of the visual signal from a moving object. By the time the visual input reaches brain structures that encode its position, the object has already moved on. Do we perceive the position of a moving object with a delay because of neuronal latencies? Or is there a brain mechanism that compensates for latencies such that we perceive the true position of a moving object in real time? This question has been intensely debated in the context of the flash-lag illusion: a moving object and an object flashed in alignment with it appear to occupy different positions. The moving object is seen ahead of the flash. Does this show that the visual system extrapolates the position of moving objects into the future to compensate for neuronal latencies? Alternative accounts propose that it simply shows that moving and flashed objects are processed with different delays, or that it reflects temporal integration in brain areas that encode position and motion. The flash-lag illusion and the hypotheses put forward to explain it lead to interesting questions about the encoding of position in the brain. Where is the 'where' pathway and how does it work?
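The two competing accounts above make quantitatively different predictions for the size of the flash-lag effect. The following is a toy comparison of my own with assumed velocity and latency values, not figures from the article:

```python
# Toy arithmetic comparing two accounts of the flash-lag effect for an
# object moving at velocity V (deg/s) under an assumed neural latency.

V = 10.0          # object velocity, deg/s (assumed)
LATENCY = 0.08    # neural latency, s (assumed)

# Extrapolation account: the visual system shifts the moving object's
# represented position forward by V * latency, so it is perceived at its
# true current position, ahead of the flash.
extrapolated_lead = V * LATENCY            # ~0.8 deg ahead of the flash

# Differential-latency account: moving and flashed objects are processed
# with different delays, so the lead is V * (flash_latency - motion_latency);
# both latencies here are assumed for illustration.
differential_lead = V * (0.08 - 0.03)      # ~0.5 deg with these values

print(extrapolated_lead, differential_lead)
```

In the extrapolation account the lead scales with the full latency; in the differential-latency account it scales only with the latency difference, which is one way psychophysical experiments have tried to distinguish them.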

8.
While the low-level processes mediating the detection of primary visual attributes are well understood, much less is known about the way in which these attributes are assigned to objects in the visual world. For example, when a region of the retinal image contains multiple motion signals at a range of spatial scales, how do we know whether these signals come from a single object or multiple objects? Here, we present data from four neurological patients on a psychophysical task requiring them to report whether the two components of a plaid pattern appear to move coherently or transparently. The spatial frequency of one component of the plaid is held constant while that of the other is manipulated. While some of the patients perceive coherent motion over a much smaller range of spatial frequencies than normal controls, others report coherence over almost the entire range tested. We discuss the implications of these findings for computational theories of motion perception and higher-level visual processing.

9.
Michels L, Lappe M, Vaina LM. Neuroreport 2005;16(10):1037-1041
The perception of biological motion combines the analysis of form and motion. However, patient observations by Vaina et al. and psychophysical experiments by Beintema and Lappe showed that humans can perceive human movements (a walker) without local image motion information. Here, we examine the specificity of brain regions responsive to a biological motion stimulus without local image motion, using functional magnetic resonance imaging. We used the stimulus from Beintema and Lappe and compared the brain activity with a point-light display that does contain local motion information and was often used in previous studies. Recent imaging studies have identified areas sensitive to biological motion in both the motion-processing and the form-processing pathways of the visual system. We find a similar neuronal network engaged in biological motion perception, but more strongly manifested in form-processing than in motion-processing areas, namely, the fusiform/occipital face area and the extrastriate body area.

10.
Smooth pursuit eye movements (SPEMs) are eye rotations that are used to maintain fixation on a moving target. Such rotations complicate the interpretation of the retinal image, because they nullify the retinal motion of the target, while generating retinal motion of stationary objects in the background. This poses a problem for the oculomotor system, which must track the stabilized target image while suppressing the optokinetic reflex, which would move the eye in the direction of the retinal background motion (opposite to the direction in which the target is moving). Similarly, the perceptual system must estimate the actual direction and speed of moving objects in spite of the confounding effects of the eye rotation. This paper proposes a neural model to account for the ability of primates to accomplish these tasks. The model simulates the neurophysiological properties of cell types found in the superior temporal sulcus of the macaque monkey, specifically the medial superior temporal (MST) region. These cells process signals related to target motion and background motion, and receive an efference copy of eye velocity during pursuit movements. The model focuses on the interactions between cells in the ventral and dorsal subdivisions of MST, which are hypothesized to process target velocity and background motion, respectively. The model explains how these signals can be combined to account for behavioral data about pursuit maintenance and perceptual data from human studies, including the Aubert-Fleischl phenomenon and the Filehne illusion, thereby clarifying the functional significance of neurophysiological data about these MST cell properties. It is suggested that the connectivity used in the model may represent a general strategy used by the brain in analyzing the visual world.
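The two illusions mentioned above fall out of a simple comparator once the efference-copy signal is given a gain below one. The sketch below is my own toy caricature with an assumed gain value, not the model described in the abstract:

```python
# Toy comparator with an imperfect efference-copy gain, reproducing the
# qualitative signatures of two classic pursuit illusions.

GAIN = 0.8   # assumed gain on the eye-velocity efference copy (< 1)

def perceived_velocity(retinal_motion: float, eye_velocity: float,
                       gain: float = GAIN) -> float:
    """Perceived world motion = retinal motion + gain * efference copy."""
    return retinal_motion + gain * eye_velocity

eye = 8.0   # pursuit eye velocity, deg/s (assumed)

# Filehne illusion: a stationary background (retinal slip = -eye) appears
# to drift opposite to pursuit because the efference copy undercompensates.
print(perceived_velocity(-eye, eye))    # ~ -1.6 deg/s, not 0

# Aubert-Fleischl phenomenon: a pursued target (zero retinal slip) appears
# slower than the same target viewed with stationary eyes.
print(perceived_velocity(0.0, eye))     # ~ 6.4 deg/s, less than 8.0
```

With gain = 1 both illusions vanish, so the gain parameter is one compact way to summarize how well the efference copy compensates for the eye rotation.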


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号