Similar Articles
20 similar articles found.
1.
Recent studies report efficient vestibular control of goal-directed arm movements during body motion. This contribution tested whether this control relies (a) on an updating process in which vestibular signals are used to update the perceived egocentric position of surrounding objects when body orientation changes, or (b) on a sensorimotor process, i.e. a transfer function between vestibular input and the arm motor output that preserves hand trajectory in space despite body rotation. Both processes were separately and specifically adapted. We then compared the respective influences of the adapted processes on the vestibular control of arm-reaching movements. The rationale was that if a given process underlies a given behavior, any adaptive modification of this process should give rise to an observable modification of the behavior. The updating adaptation adapted the matching between vestibular input and perceived body displacement in the surrounding world. The sensorimotor adaptation adapted the matching between vestibular input and the arm motor output necessary to keep the hand fixed in space during body rotation. Only the sensorimotor adaptation significantly altered the vestibular control of arm-reaching movements. Our results therefore suggest that during passive self-motion, the vestibular control of arm-reaching movements essentially derives from a sensorimotor process by which arm motor output is modified on-line to preserve hand trajectory in space despite body displacement. In contrast, the updating process that keeps the egocentric representation of visual space up to date seems to contribute little to generating the required arm compensation during body rotations.
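A schematic way to state the sensorimotor process favored here (our notation, not the authors'): to keep the hand on a fixed spatial trajectory during passive rotation at angular velocity \(\boldsymbol{\omega}\), the arm command expressed in body coordinates must subtract the rotation-induced hand motion,
\[
\dot{\mathbf{x}}_{\text{hand/body}} = \dot{\mathbf{x}}_{\text{desired/space}} - \boldsymbol{\omega} \times \mathbf{r}_{\text{hand}},
\]
where \(\mathbf{r}_{\text{hand}}\) is the hand's position relative to the rotation axis and \(\boldsymbol{\omega}\) is estimated from vestibular input. Adapting the gain of this vestibulomotor mapping would change reaching during rotation, as observed, whereas adapting only the perceived body displacement would not.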

2.
We tested between two coding mechanisms that the brain may use to retain distance information about a target for a reaching movement across vergence eye movements. If the brain were to encode a retinal disparity representation (retinal model), i.e., target depth relative to the plane of fixation, each vergence eye movement would require an active update of this representation to preserve depth constancy. Alternatively, if the brain were to store an egocentric distance representation of the target by integrating retinal disparity and vergence signals at the moment of target presentation, this representation should remain stable across subsequent vergence shifts (nonretinal model). We tested between these schemes by measuring errors of human reaching movements (n = 14 subjects) to remembered targets, briefly presented before a vergence eye movement. For comparison, we also tested their directional accuracy across version eye movements. With intervening vergence shifts, the memory-guided reaches showed an error pattern that was based on the new eye position and on the depth of the remembered target relative to that position. This suggests that target depth is recomputed after the gaze shift, supporting the retinal model. Our results also confirm earlier literature showing retinal updating of target direction. Furthermore, regression analyses revealed updating gains close to one for both target depth and direction, suggesting that the errors arise after the updating stage, during the subsequent reference frame transformations that are involved in reaching.
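The two models can be stated with standard binocular geometry (an illustrative sketch, not the paper's formulation). With interocular separation \(a\) and small angles, fixating at distance \(d\) requires vergence \(\mu \approx a/d\); a target at distance \(d_T\) then carries relative disparity \(\delta = \mu_T - \mu_F\), and
\[
d_T \approx \frac{a}{\mu_F + \delta}.
\]
The retinal model stores \(\delta\) and must recompute it after every vergence change \(\Delta\mu_F\) (as \(\delta' = \delta - \Delta\mu_F\)); the nonretinal model integrates \(\mu_F\) and \(\delta\) once at presentation and stores \(d_T\), which later vergence shifts leave untouched.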

3.
In human subjects, we investigated the accuracy of goal-directed arm movements performed without sight of the arm; errors of target localization and of motor control thus remained uncorrected by visual feedback, and became manifest as pointing errors. Target position was provided either as retinal eccentricity or as eye position. By comparing the results to those obtained previously with combined retinal plus extraretinal position cues, the relative contribution of the two signals towards visual localization could be studied. When target position was provided by retinal signals, pointing responses revealed an overestimation of retinal eccentricity which was of similar size for all eccentricities tested, and was independent of gaze direction. These findings were interpreted as a magnification effect of perifoveal retinal areas. When target position was provided as eye position, pointing was characterized by substantial inter- and intra-subject variability, suggesting that the accuracy of localization by extraretinal signals is rather limited. In light of these two qualitatively different deficits, we discuss possible mechanisms by which the two signals may interact to yield a more veridical visual localization.

4.
Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction as the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.
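The updating rule at issue can be written compactly (illustrative notation): a remembered retinal target location \(T\) must be remapped whenever gaze moves by \(\Delta G\),
\[
\hat{T}_{\text{new}} = T_{\text{old}} - \widehat{\Delta G}, \qquad \widehat{\Delta G} = \Delta G + \varepsilon,
\]
where \(\varepsilon\) is the misestimate of eye-movement amplitude induced here by the moving background. The finding is that \(\varepsilon\) propagated into return saccades but not into pointing, implying different estimates of \(\Delta G\) for the oculomotor and arm-motor systems.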

5.
Various cortical and sub-cortical brain structures update the gaze-centered coordinates of remembered stimuli to maintain an accurate representation of visual space across eye rotations and to produce suitable motor plans. A major challenge for the computations by these structures is updating across eye translations. When the eyes translate, objects in front of and behind the eyes' fixation point shift in opposite directions on the retina due to motion parallax. It is not known if the brain uses gaze coordinates to compute parallax in the translational updating of remembered space or if it uses gaze-independent coordinates to maintain spatial constancy across translational motion. We tested this by having subjects view targets, flashed in darkness in front of or behind fixation, then translate their body sideways, and subsequently reach to the memorized target. Reach responses showed parallax-sensitive updating errors: errors increased with depth from fixation and reversed in lateral direction for targets presented at opposite depths from fixation. In a series of control experiments, we ruled out possible biasing factors such as the presence of a fixation light during the translation, the eyes accompanying the hand to the target, and the presence of visual feedback about hand position. Quantitative geometrical analysis confirmed that updating errors were better described by using gaze-centered than gaze-independent coordinates. We conclude that spatial updating for translational motion operates in gaze-centered coordinates. Neural network simulations are presented suggesting that the brain relies on ego-velocity signals and on stereoscopic depth and direction information in spatial updating during self-motion.
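The parallax geometry that updating must cope with can be sketched in a small-angle approximation (our notation). A lateral eye translation \(\Delta x\), with fixation maintained at distance \(d_F\), shifts the retinal direction of a point at distance \(d\) by approximately
\[
\Delta\theta \approx \Delta x \left( \frac{1}{d} - \frac{1}{d_F} \right),
\]
which vanishes at the fixation plane and reverses sign for targets in front of (\(d < d_F\)) versus behind (\(d > d_F\)) fixation, exactly the signature followed by the reaching errors reported here.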

6.
The aim of this study was to further understand how the brain represents spatial information for shaping aiming movements to targets. Both behavioral and neurophysiological studies have shown that the brain represents spatial memory for reaching targets in an eye-fixed frame. To date, these studies have only shown how the brain stores and updates target locations for generating a single arm movement. But once a target's location has been computed relative to the hand to program a pointing movement, is that information reused for subsequent movements to the same location? Or is the remembered target location reconverted from eye to motor coordinates each time a pointing movement is made? To test between these two possibilities, we had subjects point twice to the remembered location of a previously foveated target after shifting their gaze to the opposite side of the target site before each pointing movement. When we compared the direction of pointing errors for the second movement to those of the first, we found that errors for each movement varied as a function of current gaze so that pointing endpoints fell on opposite sides of the remembered target site in the same trial. Our results suggest that when shaping multiple pointing movements to the same location the brain does not use information from the previous arm movement such as an arm-fixed representation of the target but instead mainly uses the updated eye-fixed representation of the target to recalculate its location into the appropriate motor frame.

7.
There is considerable evidence that targets for action are represented in a dynamic gaze-centered frame of reference, such that each gaze shift requires an internal updating of the target. Here, we investigated the effect of eye movements on the spatial representation of targets used for position judgements. Participants had their hand passively placed to a location, and then judged whether this location was left or right of a remembered visual or remembered proprioceptive target, while gaze direction was varied. Estimates of position of the remembered targets relative to the unseen position of the hand were assessed with an adaptive psychophysical procedure. These positional judgements significantly varied relative to gaze for both remembered visual and remembered proprioceptive targets. Our results suggest that relative target positions may also be represented in eye-centered coordinates. This implies similar spatial reference frames for action control and space perception when positions are coded relative to the hand.

8.
1. The accuracy with which subjects pointed to targets in extrapersonal space was assessed under a variety of experimental conditions. 2. When subjects pointed in the dark to remembered target locations, they made substantial errors. Errors in distance, measured from the shoulder to the target, were sometimes as much as 15 cm. Errors in direction, also measured from the shoulder, were smaller. 3. An analysis of the information transmitted by the location of the subject's finger about the location of the target showed that the information about the target's distance was consistently lower than the information about its direction. 4. The errors in distance persisted when subjects had their arm in view and pointed in the light to remembered target locations. 5. The errors were much smaller when subjects used a pointer to point to the target or when they were asked to reproduce the position of their finger after it had been passively moved to the target. 6. From these findings we conclude that subjects have a reasonably accurate visual representation of target location and are able to effectively use kinesthetically derived information about target location. We therefore suggest that errors in pointing result from errors in the sensorimotor transformation from the visual representation of the target location to the kinematic representation of the arm movement.
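The transmitted information in point 3 is the standard mutual information between target location \(T\) and finger endpoint \(F\) (the generic definition; the authors' exact computation is not specified here):
\[
I(T;F) = H(T) - H(T \mid F) = \sum_{t,f} p(t,f)\,\log_2 \frac{p(t,f)}{p(t)\,p(f)} \ \text{bits},
\]
so a consistently lower value for distance than for direction means that the endpoint's distance component disambiguates the target less well than its direction component does.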

9.
When we fixate an object in space, the rotation centers of the eyes, together with the object, define a plane of regard. People perceive the elevation of objects relative to this plane accurately, irrespective of eye or head orientation (Poljac et al. (2004) Vision Res, in press). Yet, to create a correct representation of objects in space, the orientation of the plane of regard in space is required. Subjects pointed along an eccentric vertical line on a touch screen to the location where their plane of regard intersected the touch screen positioned on their right. The distance of the vertical line to the subjects' eyes varied from 10 to 40 cm. Subjects were sitting upright and fixating one of nine randomly presented directions ranging from 20° left and down to 20° right and up relative to their straight ahead. The eccentricity of fixations relative to the pointing location varied by up to 40°. Subjects underestimated the elevation of their plane of regard (on average by 3.69 cm, SD=1.44 cm), regardless of the fixation direction or pointing distance. However, when the targets were shown on a display mounted in a table, to provide support of the subject's hand throughout the trial, subjects pointed accurately (average error 0.3 cm, SD=0.8 cm). In addition, head tilt of 20° to the left or right did not cause any change in accuracy. The bias observed in the first task could be caused by maintained tonus in arm muscles when the arm is raised, which might interfere with the transformation from visual to motor signals needed to perform the pointing movement. We conclude that the plane of regard is correctly localized in space. This may be a good starting point for representing objects in head-centric coordinates.

10.
The aim of this study was to: (1) quantify errors in open-loop pointing toward a spatially central (but retinally peripheral) visual target with gaze maintained in various eccentric horizontal, vertical, and oblique directions; and (2) determine the computational source of these errors. Eye and arm orientations were measured with the use of search coils while six head-fixed subjects looked and pointed toward remembered targets in complete darkness. On average, subjects made small exaggerations in both the vertical and horizontal components of retinal displacement (tending to overshoot the target relative to current gaze), but individual subjects showed considerable variations in this pattern. Moreover, pointing errors for oblique retinal targets were only partially predictable from errors for the cardinal directions, suggesting that most of these errors did not arise within independent vertical and horizontal coordinate channels. The remaining variance was related to nonhomogeneous, direction-dependent distortions in reading out the magnitudes and directions of retinal displacement. The largest and most consistent nonhomogeneities occurred as discontinuities between adjacent points across the vertical meridian of retinotopic space, perhaps related to the break between the representations of space in the left and right cortices. These findings are consistent with the hypothesis that at least some of these visuomotor distortions are due to miscalibrations in quasi-independent visuomotor readout mechanisms for "patches" of retinotopic space, with major discontinuities existing between patches at certain anatomic and/or physiological borders.
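The independence test mentioned here has a simple formal statement (illustrative). If horizontal and vertical retinal displacement were read out by separate channels with error functions \(e_H\) and \(e_V\), the pointing error for an oblique target at \((x, y)\) would have to be their vector composition,
\[
\mathbf{e}(x, y) = \bigl(e_H(x),\, e_V(y)\bigr),
\]
and the variance in oblique errors left unexplained by this composition, the bulk of it in these data, points instead to direction-dependent distortions of retinotopic readout.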

11.
Subjects who are in an enclosed chamber rotating at constant velocity feel physically stationary but make errors when pointing to targets. Reaching paths and endpoints are deviated in the direction of the transient inertial Coriolis forces generated by their arm movements. By contrast, reaching movements made during natural, voluntary torso rotation seem to be accurate, and subjects are unaware of the Coriolis forces generated by their movements. This pattern suggests that the motor plan for reaching movements uses a representation of body motion to prepare compensations for impending self-generated accelerative loads on the arm. If so, stationary subjects who are experiencing illusory self-rotation should make reaching errors when pointing to a target. These errors should be in the direction opposite the Coriolis accelerations their arm movements would generate if they were actually rotating. To determine whether such compensations exist, we had subjects in four experiments make visually open-loop reaches to targets while they were experiencing compelling illusory self-rotation and displacement induced by rotation of a complex, natural visual scene. The paths and endpoints of their initial reaching movements were significantly displaced leftward during counterclockwise illusory rotary displacement and rightward during clockwise illusory self-displacement. Subjects reached in a curvilinear path to the wrong place. These reaching errors were opposite in direction to the Coriolis forces that would have been generated by their arm movements during actual torso rotation. The magnitude of path curvature and endpoint errors increased as the speed of illusory self-rotation increased. In successive reaches, movement paths became straighter and endpoints more accurate despite the absence of visual error feedback or tactile feedback about target location. When subjects were again presented a stationary scene, their initial reaches were indistinguishable from pre-exposure baseline, indicating a total absence of aftereffects. These experiments demonstrate that the nervous system automatically compensates in a context-specific fashion for the Coriolis forces associated with reaching movements.
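The inertial load in question is the standard Coriolis term, with the sign worked out for this paradigm: for an arm moving at velocity \(\mathbf{v}\) in a frame rotating at angular velocity \(\boldsymbol{\omega}\),
\[
\mathbf{F}_{\text{Cor}} = -2m\,\boldsymbol{\omega} \times \mathbf{v}.
\]
With counterclockwise rotation (\(\boldsymbol{\omega}\) pointing up) and a forward reach, this force pushes the arm rightward. The leftward errors observed under illusory counterclockwise rotation are thus consistent with the nervous system preprogramming compensation for a rightward Coriolis load that never materializes.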

12.
Errors in pointing to actual and remembered targets presented in three-dimensional (3D) space in a dark room were studied under various conditions of visual feedback. During their movements, subjects either had no vision of their arms or of the target, vision of the target but not of their arms, vision of a light-emitting diode (LED) on their moving index fingertip but not of the target, or vision of an LED on their moving index fingertip and of the target. Errors depended critically upon feedback condition. 3D errors were largest for movements to remembered targets without visual feedback, diminished with vision of the moving fingertip, and diminished further with vision of the target and vision of the finger and the target. Moreover, the different conditions differentially influenced the radial distance, azimuth, and elevation errors, indicating that subjects control motion along all three axes relatively independently. The pattern of errors suggests that the neural systems that mediate processing of actual versus remembered targets may have different capacities for integrating visual and proprioceptive information in order to program spatially directed arm movements.

13.
We examined the role of gaze in a task where subjects had to reproduce the position of a remembered visual target with the tip of the index finger, referred to as pointing. Subjects were tested in 3 visual feedback conditions: complete darkness (dark), complete darkness with visual feedback of the finger position (finger), and with vision of a well-defined environment and feedback of the finger position (frame). Pointing accuracy increases with feedback about the finger or visual environment. In the finger and frame conditions, the 95% confidence regions of the variable errors have an ellipsoidal distribution with the main axis oriented toward the subjects' head. During the 1-s period when the target is visible, gaze is almost on target. However, gaze drifts away from the target relative to the subject in the delay period after target disappearance. In the finger and frame conditions, gaze returns toward the remembered target during pointing. In all 3 feedback conditions, the correlations between the variable errors of gaze and pointing position increase during the delay period, reaching highly significant values at the time of pointing. Our results demonstrate that gaze affects the accuracy of pointing. We conclude that the covariance between gaze and pointing position reflects a common drive for gaze and arm movements and an effect of gaze on pointing accuracy at the time of pointing. Previous studies interpreted the orientation of variable errors as indicative of a frame of reference used for pointing. Our results suggest that the orientation of the error ellipses toward the head is at least partly the result of gaze drift in the delay period.

14.
Eye-hand coordination requires the brain to integrate visual information with the continuous changes in eye, head, and arm positions. This is a geometrically complex process because the eyes, head, and shoulder have different centers of rotation. As a result, head rotation causes the eye to translate with respect to the shoulder. The present study examines the consequences of this geometry for planning accurate arm movements in a pointing task with the head at different orientations. When asked to point at an object, subjects oriented their arm to position the fingertip on the line running from the target to the viewing eye. But this eye-target line shifts when the eyes translate with each new head orientation, thereby requiring a new arm pointing direction. We confirmed that subjects do realign their fingertip with the eye-target line during closed-loop pointing across various horizontal head orientations when gaze is on target. More importantly, subjects also showed this head-position-dependent pattern of pointing responses for the same paradigm performed in complete darkness. However, when gaze was not on target, compensation for these translations in the rotational centers partially broke down. As a result, subjects tended to overshoot the target direction relative to current gaze; perhaps explaining previously reported errors in aiming the arm to retinally peripheral targets. These results suggest that knowledge of head position signals and the resulting relative displacements in the centers of rotation of the eye and shoulder are incorporated using open-loop mechanisms for eye-hand coordination, but these translations are best calibrated for foveated, gaze-on-target movements.
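The geometry can be made explicit (a sketch in our notation). With the head rotated by \(\theta\) about its own center, the eye's position relative to the shoulder is
\[
\mathbf{p}_{\text{eye}}(\theta) = \mathbf{c}_{\text{head}} + R(\theta)\,\mathbf{r}_{\text{eye}},
\]
where \(\mathbf{r}_{\text{eye}}\) is the fixed offset of the eye from the head's rotation center and \(R(\theta)\) a rotation matrix. Accurate pointing requires placing the fingertip on the line through \(\mathbf{p}_{\text{eye}}(\theta)\) and the target, a line that translates with every new head orientation.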

15.
This study investigated whether the execution of an accurate pointing response depends on a prior saccade orientation towards the target, independent of the vision of the limb. A comparison was made between the accuracy of sequential responses (in which the starting position of the hand is known and the eye centred on the target prior to the onset of the hand pointing movement) and synergetic responses (where both hand and gaze motions are simultaneously initiated on the basis of unique peripheral retinal information). The experiments were conducted in visual closed-loop conditions (hand visible during the pointing movement) and in visual open-loop conditions (vision of the hand interrupted as the hand started to move). The latter condition eliminated the possibility of a direct visual evaluation of the error between hand and target during pointing. Three main observations were derived from the present work: (a) the timing of coordinated eye-head-hand pointing at visual targets can be modified, depending on the executed task, without a deterioration in the accuracy of hand pointing; (b) mechanical constraints or instructions such as preventing eye, head or trunk motion, which limit the redundancy of degrees of freedom, lead to a decrease in accuracy; (c) the synergetic movement of eye, head and hand for pointing at a visible target is not trivially the superposition of eye and head shifts added to hand pointing. Indeed, the strategy of such a coordinated action can modify the kinematics of the head in order to make the movements of both head and hand terminate at approximately the same time. The main conclusion is that eye-head coordination is carried out optimally by a parallel processing in which both gaze and hand motor responses are initiated on the basis of a poorly defined retinal signal. The accuracy in hand pointing is not conditioned by head movement per se and does not depend on the relative timing of eye, head and hand movements (synergetic vs sequential responses). However, a decrease in the accuracy of hand pointing was observed in the synergetic condition, when target fixation was not stabilised before the target was extinguished. This suggests that when the orienting saccade reaches the target before hand movement onset, visual updating of the hand motor control signal may occur. A rapid processing of this final input allows a sharper redefinition of the hand landing point.

16.
We attempt to determine the egocentric reference frame used in directing saccades to remembered targets when landmark-based (exocentric) cues are not available. Specifically, we tested whether memory-guided saccades rely on a retina-centered frame, which must account for eye movements that intervene during the memory period (thereby accumulating error) or on a head-centered representation that requires knowledge of the position of the eyes in the head. We also examined the role of an exocentric reference frame in saccadic targeting since it would not need to account for intervening movements. We measured the precision of eye movements made by human observers to target locations held in memory for a few seconds. A variable number of saccades intervened between the visual presentation of a target and a later eye movement to its remembered location. A visual landmark that allowed for exocentric encoding of the memory target appeared in half the trials. Variable error increased slightly with a greater number of intervening saccades. The landmark aided targeting precision, but did not eliminate the increase in variable error with additional intervening saccades. We interpret these results as evidence for a representation that relies on knowledge of eye position with respect to the head and not one that relies solely on updating in a retina-centered frame. Our results allow us to set an upper bound on the standard deviation of an eye position signal available to the saccadic system during short memory periods at 1.4° for saccades of about 10°.
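The two hypotheses make distinct quantitative predictions about variable error (an illustrative model, not the authors' exact formulation). Purely retina-centered updating adds noise at each of the \(n\) intervening saccades, whereas a head-centered store pays a one-time eye-position cost:
\[
\sigma^2_{\text{retinal}}(n) = \sigma^2_0 + n\,\sigma^2_{\text{update}}, \qquad
\sigma^2_{\text{head}} = \sigma^2_0 + \sigma^2_{\text{eye}}.
\]
The shallow growth of variable error with \(n\) observed here favors the second scheme, with the reported 1.4° bounding \(\sigma_{\text{eye}}\) for saccades of about 10°.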

17.
The purposes of this study were to determine whether gaze direction provides a control signal for movement direction in a pointing task requiring a step, and to gain insight into why the literature contains discrepant reports of endpoint accuracy when gaze is directed eccentrically. Straight-arm pointing movements were performed to real and remembered target locations, either toward or 30° eccentric to gaze direction. Pointing occurred in normal room lighting or in darkness while subjects sat, stood still, or side-stepped left or right. Trunk rotation contributed 22–65% to gaze orientations when it was not constrained. Error differences for different target locations explained discrepancies among previous experiments. Variable pointing errors were influenced by gaze direction, while mean systematic pointing errors and trunk orientations were influenced by step direction. These data support the use of a control strategy that relies on gaze direction and equilibrium inputs for whole-body goal-directed movements.

18.
This study aimed to investigate the coordination of multiple control actions involved in human horizontal gaze orienting or arm pointing to a common visual target. The subjects performed a visually triggered reaction time task in three conditions: (1) gaze orienting with a combined eye saccade and head rotation (EH), (2) arm pointing with gaze orienting by an eye saccade without head rotation (EA), and (3) arm pointing with gaze orienting by a combined eye saccade and head rotation (EHA). The subjects initiated eye movement first with nearly constant latencies across all tasks, followed by head movement in the EH task, by arm movement in the EA task, and by head and then arm movements in the EHA task. The differences of onset times between eye and head movements in the EH task, and between eye and arm movements in the EA task, were both preserved in the EHA task, leading to an eye-to-head-to-arm sequence. The onset latencies of eye and head in the EH task, eye and arm in the EA task, and eye, head and arm in the EHA task, were all positively correlated on a trial-by-trial basis. In the EHA task, however, the correlation coefficients of eye–head coupling and of eye–arm coupling were reduced and increased, respectively, compared to those estimated in the two-effector conditions (EH, EA). These results suggest that motor commands for different motor effectors are linked differently to achieve coordination in a task-dependent manner.

19.
The principal goal of our study is to gain insight into the representation of peripersonal space. Two different experiments were conducted in this study. In the first experiment, subjects were asked to represent principal anatomical reference planes by drawing ellipses in the sagittal, frontal and horizontal planes. The three-dimensional hand-drawing movements, which were achieved with and without visual guidance, were considered the expression of a cognitive process per se: the peripersonal space representation for action. We measured errors in the spatial orientation of ellipses with regard to the requested reference planes. For ellipses drawn without visual guidance, with eyes open and eyes closed, orientation errors were related to the reference planes. Errors were minimal for the sagittal and maximal for the horizontal plane. These disparities in errors were considerably reduced when subjects drew using a visual guide. These findings imply that different planes are centrally represented, and are characterized by different errors when subjects use a body-centered frame for performing the movement, and suggest that the representation of peripersonal space may be anisotropic. However, this representation can be modified when subjects use an environment-centered reference frame to produce the movement. In the second experiment, subjects were instructed to represent, with eyes open and eyes closed, the sagittal, frontal and horizontal planes by pointing to virtual targets located in these planes. Disparities in orientation errors measured for pointing were similar to those found for drawing, implying that the sensorimotor representation of reference planes was not constrained by the type of motor task. Moreover, arm postures measured at pointing endpoints and at comparable spatial locations in drawing were strongly correlated. These results suggest that similar patterns of errors and arm posture correlation, for drawing and pointing, may be the consequence of using a common space representation and reference frame. These findings are consistent with the assumption of an anisotropic action-related representation of peripersonal space when the movement is performed in a body-centered frame.

20.
Encoding of visual target location in extrapersonal space requires convergence of at least three types of information: retinal signals, information about orbital eye positions, and the position of the head on the body. Since the position of gaze is the sum of the head position and the eye position, inaccuracy of spatial localization of the target may result from the sum of the corresponding three levels of errors: retinal, ocular and head. In order to evaluate the possible errors evoked at each level, accuracy of target encoding was assessed through a motor response requiring subjects to point with the hand towards a target seen under foveal vision, eliminating the retinal source of error. Subjects had first to orient their head to one of three positions to the right (0, 40, 80°) and maintain this head position while orienting gaze and pointing to one of five target positions (0, 20, 40, 60, 80°). This resulted in 11 combinations of static head and eye positions, and corresponded to five different gaze eccentricities. The accuracy of target pointing was tested without vision of the moving hand. Six subjects were tested. No systematic bias in finger pointing was observed for eye positions ranging from 0 to 40° to the right or left within the orbit. However, the variability (as measured by a surface error) given by the scatter of hand pointing increased quadratically with eye eccentricity. A similar observation was made with the eyes centred and the head position ranging from 0 to 80°, although the surface error increased less steeply with eccentricity. Some interaction between eye and head eccentricity also contributed to the pointing error. These results suggest that pointing should be most accurate with a head displacement corresponding to 90% of the gaze eccentricity. These results explain the systematic hypometry of head orienting towards targets observed under natural conditions: thus the respective contributions of head and eye to gaze orientation might be determined in order to optimize the accuracy of target encoding.
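The closing claim can be reproduced with a small optimization (an illustrative reading of the data, not a derivation given in the paper). Write gaze eccentricity as \(G = E + H\) and let pointing variance grow quadratically with each component, more steeply for the eye:
\[
\sigma^2(H) = k_E\,(G - H)^2 + k_H\,H^2, \qquad
\frac{d\sigma^2}{dH} = 0 \;\Rightarrow\; H^{*} = \frac{k_E}{k_E + k_H}\,G.
\]
An optimum near \(H^{*} \approx 0.9\,G\) then corresponds to an eye-related cost about nine times the head-related one (\(k_E \approx 9\,k_H\)).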
