Similar Articles
20 similar articles found (search time: 68 ms)
1.
We investigated pointing movements toward remembered targets after an intervening self-generated body movement. We tested to what extent visual information about the environment or finger position is used in updating target position relative to the body after a step, and whether gaze plays a role in the accuracy of the pointing movement. Subjects were tested in three visual conditions: complete darkness (DARK), complete darkness with visual feedback of the finger (FINGER), and vision of a well-defined environment with feedback of the finger (FRAME). Pointing accuracy was rather poor in the FINGER and DARK conditions, which did not provide vision of the environment. Constant pointing errors were mainly in the direction of the step and ranged from about 10 to 20 cm. Differences between binocular fixation and target position were often related to the step size and direction. At the beginning of the trial, when the target was visible, fixation was on target. After target extinction, fixation moved away from the target relative to the subject. The variability in the pointing positions appeared to be related to the variable errors in fixation, and the covariance increased during the delay period after the step, reaching a highly significant value at the time of pointing. The significant covariance between fixation position and pointing is not the result of a mutual dependence on the step, since we corrected for any direct contributions of the step in both signals. We conclude that the covariance between fixation and pointing position reflects (1) a common command signal for gaze and arm movements and (2) an effect of fixation on pointing accuracy at the time of pointing.
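
The step-corrected covariance analysis described above can be made concrete with a small numerical sketch. The following Python fragment is our illustration, not the authors' code; all variable names and numbers are invented. It regresses the direct contribution of the step out of both the fixation and pointing signals, then computes the covariance of the residuals:

    import numpy as np

    def step_corrected_covariance(fixation, pointing, step):
        """Covariance of fixation and pointing errors after removing the
        linear contribution of the step from both signals."""
        def residual(signal, regressor):
            # Least-squares fit of signal on regressor plus intercept;
            # return the part the step cannot explain.
            X = np.column_stack([regressor, np.ones_like(regressor)])
            beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
            return signal - X @ beta
        return np.cov(residual(fixation, step), residual(pointing, step))[0, 1]

    # Toy data: 40 trials with a shared gaze/arm command component.
    rng = np.random.default_rng(0)
    step = rng.normal(30.0, 5.0, 40)       # step size (cm) per trial
    common = rng.normal(0.0, 3.0, 40)      # hypothetical common command noise
    fixation = 0.2 * step + common + rng.normal(0.0, 1.0, 40)
    pointing = 0.4 * step + common + rng.normal(0.0, 1.0, 40)
    print(step_corrected_covariance(fixation, pointing, step))  # > 0: shared signal survives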

2.
Encoding of visual target location in extrapersonal space requires convergence of at least three types of information: retinal signals, information about orbital eye position, and the position of the head on the body. Since gaze position is the sum of head position and eye position, inaccuracy in spatial localization of the target may result from the sum of errors at the corresponding three levels: retinal, ocular and head. To evaluate the possible errors arising at each level, accuracy of target encoding was assessed through a motor response requiring subjects to point with the hand toward a target seen under foveal vision, eliminating the retinal source of error. Subjects first had to orient their head to one of three positions to the right (0, 40, 80°) and maintain this head position while orienting gaze and pointing to one of five target positions (0, 20, 40, 60, 80°). This resulted in 11 combinations of static head and eye positions, corresponding to five different gaze eccentricities. The accuracy of target pointing was tested without vision of the moving hand. Six subjects were tested. No systematic bias in finger pointing was observed for eye positions ranging from 0 to 40° to the right or left within the orbit. However, the variability (measured as a surface error) given by the scatter of hand pointing increased quadratically with eye eccentricity. A similar observation was made with the eye centred and the head position ranging from 0 to 80°, although the surface error increased less steeply with eccentricity. Some interaction between eye and head eccentricity also contributed to the pointing error. These results suggest that pointing should be most accurate with a head displacement corresponding to 90% of the gaze eccentricity. They also explain the systematic hypometry of head orienting toward targets observed under natural conditions: the respective contributions of head and eye to gaze orientation might be determined so as to optimize the accuracy of target encoding.
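
The reported 90% optimum follows directly from a quadratic error model like the one sketched in the abstract. A minimal numerical illustration (the coefficients are invented and chosen only so that the eye term grows nine times more steeply than the head term, which reproduces the reported optimum):

    import numpy as np

    a, b = 1.0, 1.0 / 9.0        # assumed quadratic coefficients: eye vs head
    G = 60.0                     # fixed gaze eccentricity (degrees)

    H = np.linspace(0.0, G, 601)             # candidate head contributions
    E = G - H                                # the eye takes up the remainder
    surface_error = a * E**2 + b * H**2      # quadratic scatter model

    best_head = H[np.argmin(surface_error)]
    print(best_head / G)   # ≈ 0.9: head carries ~90% of the gaze shift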

3.
Manipulation of objects around the head requires an accurate and stable internal representation of their locations in space, even during movements of the eyes or head. For far space, the representation of visual stimuli for goal-directed arm movements relies on retinal updating when eye movements are involved. Recent neurophysiological studies led us to infer that a transformation of visual space from a retinocentric to a head-centric representation may be involved for visual objects in close proximity to the head. The first aim of this study was to investigate whether there is indeed such a representation for remembered visual targets of goal-directed arm movements. Participants had to point toward an initially foveated central target after an intervening saccade. Participants made errors that reflect a bias in the visuomotor transformation that depends on eye displacement rather than on any head-centred variable. The second issue addressed was whether pointing toward the centre of a wide-field expanding motion pattern involves a retinal updating mechanism or a transformation to a head-centric map, and whether that process is distance dependent. The same pattern of pointing errors in relation to gaze displacement was found independent of depth. We conclude that for goal-directed arm movements, the representation of remembered visual targets is updated in a retinal frame, a mechanism that is used regardless of target distance, stimulus characteristics or the requirements of the task.
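
The retinal-frame updating the authors argue for amounts to subtracting the intervening gaze displacement from the stored retinal target location. A minimal sketch in our own notation, not the paper's:

    import numpy as np

    def update_retinal_target(target_retinal, gaze_displacement):
        # New retinal coordinates = old coordinates minus the eye displacement.
        return np.asarray(target_retinal) - np.asarray(gaze_displacement)

    target = [0.0, 0.0]       # initially foveated central target (degrees)
    saccade = [12.0, 0.0]     # hypothetical 12-degree rightward saccade
    print(update_retinal_target(target, saccade))  # -> [-12.   0.]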

4.
The purposes of this study were to determine whether gaze direction provides a control signal for movement direction in a pointing task requiring a step, and to gain insight into discrepancies previously reported in the literature for endpoint accuracy when gaze is directed eccentrically. Straight-arm pointing movements were performed to real and remembered target locations, either toward or 30° eccentric to gaze direction. Pointing occurred in normal room lighting or darkness while subjects sat, stood still, or side-stepped left or right. Trunk rotation contributed 22–65% to gaze orientation when it was not constrained. Differences in error across target locations explained the discrepancies among previous experiments. Variable pointing errors were influenced by gaze direction, while mean systematic pointing errors and trunk orientations were influenced by step direction. These data support a control strategy that relies on gaze direction and equilibrium inputs for whole-body goal-directed movements.

5.
Spatial orientation is crucial when subjects must accurately reach to memorized visual targets. In previous studies, modified gravitoinertial force fields were used to affect the accuracy of pointing movements in complete darkness without visual feedback of the moving limb. Target mislocalization was put forward as one hypothesis to explain this decrease in pointing accuracy. The aim of this study was to test this hypothesis by determining the accuracy of spatial localization of memorized visual targets in a perturbed gravitoinertial force field. As head orientation is involved in localization tasks and the head carries the relevant sensory systems (visual, vestibular and neck-muscle proprioceptive), we also tested the effect of head posture on localization accuracy. Subjects (n=10) were seated off-axis on a rotating platform (120°/s) in complete darkness with the head either fixed (head-fixed session) or free to move (head-free session). They were required to report verbally the egocentric spatial localization of memorized visual targets, giving the perceived target location in direction (i.e. left or right) and in amplitude (in centimeters) relative to the direction they believed to be straight ahead. Results showed that the accuracy of visual localization decreased when subjects were exposed to inertial forces. Moreover, subjects localized the memorized visual targets more to the right than their actual position, that is, in the direction of the inertial forces. Further analysis showed that this shift of localization was concomitant with a shift of the visual straight ahead (VSA) in the opposite direction. Thus, the modified gravitoinertial force field led to a reorientation of the egocentric reference frame. Furthermore, the shift of localization increased when the head was free to move, with the head tilted in roll toward the center of rotation of the platform and turned in yaw in the same direction. It is concluded that the orientation of the egocentric reference frame was influenced by the gravitoinertial vector.

6.
The aim of this investigation was to gain further insight into the control strategies used for whole-body reaching tasks. Subjects were asked to step and reach to remembered target locations in normal room lighting (LIGHT) and complete darkness (DARK), with their gaze directed toward or eccentric to the remembered target location. Targets were located centrally at three different heights. Eccentric anchors for gaze direction were located at target height and initial target distance, either 30° to the right or 20° to the left of the target location. Control trials, in which targets remained in place, and remembered-target trials were randomly interleaved. We recorded movements of the hand, eye and head while subjects stepped and reached to real or remembered target locations. Lateral, vertical and anterior–posterior (AP) hand errors, eye locations and gaze-direction deviations were determined relative to control trials. Final hand-location errors varied with target height, lighting condition and gaze eccentricity. Lower reaches in the DARK than in the LIGHT condition were common and, when matched with a tendency to reach above the low target, help explain the more accurate reaches to this target in darkness. Anchoring gaze eccentrically reduced hand errors in the AP direction and increased errors in the lateral direction. These results could be explained by deviations in eye location and gaze direction, which were significant predictors of final reach errors, accounting for 17–47% of final hand-error variance. The results also confirmed a link between gaze deviations and hand and head displacements, suggesting that gaze direction is used as a common input for movement of the hand and body. Additional links between constant and variable eye deviations and hand errors were common for the AP direction but not for the lateral or vertical directions. Combined with the hand-error predictions, these data show that subjects' alterations in body movement in the AP direction were associated with AP adjustments of the reach, whereas final hand-position adjustments in the vertical and horizontal directions were associated with alterations in gaze direction. These results support the hypothesis that gaze direction provides a control signal for hand and body movement, and that this control signal is used for movement direction rather than amplitude.

7.
Eye-hand coordination requires the brain to integrate visual information with continuous changes in eye, head, and arm positions. This is a geometrically complex process because the eyes, head, and shoulder have different centers of rotation. As a result, head rotation causes the eye to translate with respect to the shoulder. The present study examines the consequences of this geometry for planning accurate arm movements in a pointing task with the head at different orientations. When asked to point at an object, subjects oriented their arm to position the fingertip on the line running from the target to the viewing eye. But this eye-target line shifts when the eyes translate with each new head orientation, thereby requiring a new arm pointing direction. We confirmed that subjects realign their fingertip with the eye-target line during closed-loop pointing across various horizontal head orientations when gaze is on target. More importantly, subjects also showed this head-position-dependent pattern of pointing responses in the same paradigm performed in complete darkness. However, when gaze was not on target, compensation for these translations in the rotational centers partially broke down. As a result, subjects tended to overshoot the target direction relative to current gaze, perhaps explaining previously reported errors in aiming the arm toward retinally peripheral targets. These results suggest that knowledge of head-position signals and the resulting relative displacements of the centers of rotation of the eye and shoulder is incorporated by open-loop mechanisms for eye-hand coordination, but that these translations are best calibrated for foveated, gaze-on-target movements.
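
The geometric point, that head rotation translates the eye and therefore shifts the eye-target line the fingertip must intersect, can be illustrated with a toy two-dimensional model. This is our construction, not the paper's model; the radius and target position are invented:

    import numpy as np

    r_eye = 0.10                      # assumed eye-to-head-axis radius (m)
    target = np.array([0.0, 0.60])    # target relative to the head axis (m)

    def eye_position(head_angle_deg):
        # Rotating the head swings the eye around the head's rotation axis.
        a = np.radians(head_angle_deg)
        return r_eye * np.array([np.sin(a), np.cos(a)])

    def eye_target_direction(head_angle_deg):
        # Unit vector along the eye-target line, on which the fingertip
        # must lie to appear "on target" to the viewing eye.
        d = target - eye_position(head_angle_deg)
        return d / np.linalg.norm(d)

    for head in (0.0, 30.0, 60.0):
        print(head, eye_target_direction(head))  # the required direction shifts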

8.
Errors in pointing to actual and remembered targets presented in three-dimensional (3D) space in a dark room were studied under various conditions of visual feedback. During their movements, subjects had either no vision of their arm or of the target, vision of the target but not of their arm, vision of a light-emitting diode (LED) on the moving index fingertip but not of the target, or vision of both the fingertip LED and the target. Errors depended critically on the feedback condition. 3D errors were largest for movements to remembered targets without visual feedback, diminished with vision of the moving fingertip, and diminished further with vision of the target and with vision of both finger and target. Moreover, the different conditions differentially influenced radial-distance, azimuth, and elevation errors, indicating that subjects control motion along all three axes relatively independently. The pattern of errors suggests that the neural systems mediating the processing of actual versus remembered targets may have different capacities for integrating visual and proprioceptive information to program spatially directed arm movements.

9.
Visual stimuli are initially represented in a retinotopic reference frame. To maintain spatial accuracy of gaze (i.e., eye in space) despite intervening eye and head movements, the visual input could be combined with dynamic feedback about ongoing gaze shifts. Alternatively, target coordinates could be updated in advance by using the preprogrammed gaze-motor command ("predictive remapping"). Previous experiments have not dissociated these possibilities. Here we study whether the visuomotor system accounts for saccadic eye-head movements that occur during target presentation. In this case, the system has to deal with fast dynamic changes of the retinal input and with highly variable changes in relative eye and head movements that cannot be preprogrammed by the gaze control system. We performed visual-visual double-step experiments in which a brief (50-ms) stimulus was presented during a saccadic eye-head gaze shift toward a previously flashed visual target. Our results show that gaze shifts remain accurate under these dynamic conditions, even for stimuli presented near saccade onset, and that the eyes and head are driven in oculocentric and craniocentric coordinates, respectively. These results cannot be explained by a predictive remapping scheme. We propose that the visuomotor system adequately processes dynamic changes in visual input that result from self-initiated gaze shifts, constructing a stable representation of visual targets in an absolute, supraretinal (e.g., world) reference frame. Predictive remapping may subserve transsaccadic integration, enabling perception of a stable visual scene despite eye movements, whereas dynamic feedback ensures accurate actions (e.g., eye-head orienting) toward a selected goal.

10.
This study investigated how binocular gaze is controlled to compensate for self-generated translational movements of the head, where geometric requirements dictate that the ideal gaze signal must be modulated by the inverse of target distance. Binocular gaze (eye plus head) was measured for visual and remembered targets at various distances in six human subjects during active head translations at frequencies of 0.25, 0.5, 1.0, and 1.5 Hz. We found that, during head translations, gaze changes were achieved by a combination of eye and head rotations. Stabilization performance was characterized by the gaze-response parameters sensitivity and phase, where sensitivity is defined as the ratio of gaze velocity to translational eye velocity and phase refers to the phase delay of gaze velocity relative to translational eye velocity. In the analysis we used a binocular coordinate system yielding a version and a vergence component. We examined how frequency and target distance, estimated from the vergence angle, affected the sensitivity and phase of the version component of the gaze signal, and compared the results to the requirements for ideal performance. The relation between gaze sensitivity and the inverse of distance was characterized by linear regression. The ratio of the fitted slope to the slope required for ideal stabilization provided a measure of the degree of "distance compensation". The results show that distance compensation was better for a visual target than for remembered targets in darkness, and followed low-pass characteristics in both target conditions. It declined from 1.00 to 0.84 for visual targets and from 0.87 to 0.57 for remembered targets over the frequency range 0.25-1.5 Hz. The intercept of the regression yielded the gaze response at zero vergence and specified a "default sensitivity" of gaze compensation. Default sensitivity increased with frequency, from 0.02 degrees/cm at 0.25 Hz to 0.10 degrees/cm at 1.5 Hz for visual targets, and from 0.04 to 0.16 degrees/cm in darkness. The phase delays of the gaze response increased on average from 2 to 7 degrees over the same frequency range. In comparison with earlier passive studies, active translation compensation in the dark was superior at all frequencies where comparison was possible. We conclude that a nonvestibular signal with low-pass characteristics contributes to gaze during active head translations.
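
The regression described above can be sketched numerically. The numbers below are synthetic, chosen only to mimic the reported visual-target values, and the ideal slope is normalized to 1 purely for illustration:

    import numpy as np

    inv_distance = 1.0 / np.array([30.0, 50.0, 80.0, 120.0])   # 1/cm
    ideal_slope = 1.0                    # slope required for ideal stabilization
    sensitivity = 0.84 * ideal_slope * inv_distance + 0.02     # toy measurements

    slope, intercept = np.polyfit(inv_distance, sensitivity, 1)
    print("distance compensation:", slope / ideal_slope)   # ~0.84
    print("default sensitivity:", intercept)               # ~0.02 deg/cm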

11.
Hay L, Redon C. Neuroscience Letters, 2006, 408(3): 194-198
Pointing movements decrease in accuracy when target information is removed before movement onset. This time effect was analyzed in relation to the spatial representation of the target location, which can be egocentric (i.e. relative to the body) or exocentric (i.e. relative to the external world) depending on the visual environment of the target. The accuracy of pointing movements performed without visual feedback was measured in two delay conditions: 0 and 5 s between target removal and movement onset. In each delay condition, targets were presented either in darkness (egocentric localization) or within a structured visual background (exocentric localization). The results show that pointing was more accurate when targets were presented within a visual background than in darkness. The time-related decrease in accuracy was observed in the darkness condition, whereas no delay effect was found in the presence of a visual background. Contextual factors applied to a simple pointing action might therefore induce different spatial representations: a short-lived sensorimotor egocentric representation used in immediate action control, or a long-lived perceptual exocentric representation that drives perception and delayed action.

12.
1. The accuracy with which subjects pointed to targets in extrapersonal space was assessed under a variety of experimental conditions. 2. When subjects pointed in the dark to remembered target locations, they made substantial errors. Errors in distance, measured from the shoulder to the target, were sometimes as large as 15 cm. Errors in direction, also measured from the shoulder, were smaller. 3. An analysis of the information transmitted by the location of the subject's finger about the location of the target showed that the information about the target's distance was consistently lower than the information about its direction. 4. The errors in distance persisted when subjects had their arm in view and pointed in the light to remembered target locations. 5. The errors were much smaller when subjects used a pointer to point to the target or when they were asked to reproduce the position of their finger after it had been passively moved to the target. 6. From these findings we conclude that subjects have a reasonably accurate visual representation of target location and are able to use kinesthetically derived information about target location effectively. We therefore suggest that pointing errors arise in the sensorimotor transformation from the visual representation of target location to the kinematic representation of the arm movement.

13.
The aims of this study were to (1) quantify errors in open-loop pointing toward a spatially central (but retinally peripheral) visual target with gaze maintained in various eccentric horizontal, vertical, and oblique directions, and (2) determine the computational source of these errors. Eye and arm orientations were measured with search coils while six head-fixed subjects looked and pointed toward remembered targets in complete darkness. On average, subjects slightly exaggerated both the vertical and horizontal components of retinal displacement (tending to overshoot the target relative to current gaze), but individual subjects varied considerably from this pattern. Moreover, pointing errors for oblique retinal targets were only partially predictable from the errors for the cardinal directions, suggesting that most of these errors did not arise within independent vertical and horizontal coordinate channels. The remaining variance was related to nonhomogeneous, direction-dependent distortions in reading out the magnitudes and directions of retinal displacement. The largest and most consistent nonhomogeneities occurred as discontinuities between adjacent points across the vertical meridian of retinotopic space, perhaps related to the break between the representations of space in the left and right cortices. These findings are consistent with the hypothesis that at least some of these visuomotor distortions are due to miscalibrations in quasi-independent visuomotor readout mechanisms for "patches" of retinotopic space, with major discontinuities between patches at certain anatomical and/or physiological borders.
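
A quick way to state the channel-independence test used above: if horizontal and vertical readout channels were independent, the error for an oblique target would be the vector sum of the errors for its cardinal components. A toy check with invented numbers:

    import numpy as np

    err_horizontal = np.array([1.5, 0.0])    # hypothetical error, horizontal target
    err_vertical = np.array([0.0, -0.8])     # hypothetical error, vertical target
    predicted_oblique = err_horizontal + err_vertical

    measured_oblique = np.array([2.3, -0.2]) # hypothetical measured oblique error
    print(measured_oblique - predicted_oblique)  # nonzero residual: channels not independent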

14.
Summary. In human subjects, we investigated the accuracy of goal-directed arm movements performed without sight of the arm; errors of target localization and of motor control thus remained uncorrected by visual feedback and became manifest as pointing errors. Target position was provided either as retinal eccentricity or as eye position. By comparing the results to those obtained previously with combined retinal plus extraretinal position cues, the relative contribution of the two signals to visual localization could be studied. When target position was provided by retinal signals, pointing responses revealed an overestimation of retinal eccentricity that was of similar size for all eccentricities tested and was independent of gaze direction. These findings were interpreted as a magnification effect of perifoveal retinal areas. When target position was provided as eye position, pointing was characterized by substantial inter- and intra-subject variability, suggesting that the accuracy of localization by extraretinal signals is rather limited. In light of these two qualitatively different deficits, we discuss possible mechanisms by which the two signals may interact to yield more veridical visual localization.

15.
Subjects reached in three-dimensional space to a set of remembered targets whose position varied randomly from trial to trial but always fell along a "virtual" line (line condition). Targets were presented briefly, one by one, in an empty visual field. After a short delay, subjects were required to point to the remembered target location. Under these conditions the target was presented in the complete absence of allocentric visual cues to its position in space. However, because subjects were informed before the experiment that all targets would fall on a straight line, they could conceivably imagine each point target as belonging to a single rigid object with a particular geometry and orientation in space, although this virtual object was never explicitly shown. We compared the responses to repeated measurements of each target with those measured for targets presented in a directionally neutral configuration (sphere condition), and used the variable errors to infer the putative reference frames underlying the corresponding sensorimotor transformation. Performance in the different tasks was compared under two lighting conditions (dim light or total darkness) and two memory delays (0.5 or 5 s). The pattern of variable errors differed significantly between the sphere and line conditions. In the former, the errors were always accounted for by egocentric reference frames. By contrast, the errors in the line condition revealed both egocentric and allocentric components, consistent with the hypothesis that target information can be defined concurrently in both egocentric and allocentric frames of reference, resulting in two independent, coexisting representations.

16.
17.
We previously reported that Parkinson's disease patients could point with their eyes closed as accurately as normal subjects to targets in three-dimensional space that were initially presented with full vision. We have now further restricted visual information in order to examine more closely the individual and combined influences of visual information, proprioceptive feedback, and spatial working memory on the accuracy of Parkinson's disease patients. All trials were performed in the dark. A robot arm presented a target, illuminated by a light-emitting diode, at one of five randomly selected points composing a pyramidal array. Subjects attempted to "touch" the target location with their right finger in one smooth movement under three conditions: dark (no illumination of arm or target during the movement; movement was to the remembered target location after the robot arm retracted), finger (a light-emitting diode on the pointing fingertip was visible during the movement but the target was extinguished; again, movement was to the remembered target location), and target (the target light-emitting diode remained in place and visible throughout the trial but there was no vision of the arm). In the finger condition there is no need for visual-proprioceptive integration, since the continuously visible fingertip position can be compared with the remembered location of the visual target. In the target condition the subject must integrate the currently visible target with arm proprioception, while in the dark condition the subject must integrate current proprioception from the arm with the remembered visual target. Parkinson's disease patients were significantly less accurate than controls in both the dark and target conditions, but as accurate as controls in the finger condition. Parkinson's disease patients were therefore selectively impaired in those conditions (target and dark) that required integration of visual and proprioceptive information to achieve accurate movements. In contrast, the patients' normal accuracy in the finger condition indicates that they had no substantial deficit in the relevant spatial working memory. Final arm configurations differed significantly between the two subject groups in all three conditions, even in the finger condition, where mean movement endpoints did not differ significantly. Variability of the movement endpoints was uniformly increased in Parkinson's disease patients across all three conditions. The current study supports an important role for the basal ganglia in integrating proprioceptive signals with concurrent or remembered visual information needed to guide movements. This role can explain much of the patients' dependence on visual information for accuracy in targeted movements. It also underlines what may be an essential contribution of the basal ganglia to movement: the integration of afferent information that is initially processed through multiple, discrete, modality-specific pathways but must be combined into a unified and continuously updated spatial model for effective, accurate movement.

18.
This study investigated whether the execution of an accurate pointing response depends on prior saccadic orientation toward the target, independent of vision of the limb. A comparison was made between the accuracy of sequential responses (in which the starting position of the hand is known and the eye is centred on the target before the onset of the hand pointing movement) and synergetic responses (in which hand and gaze motions are initiated simultaneously on the basis of the same peripheral retinal information). The experiments were conducted under visual closed-loop conditions (hand visible during the pointing movement) and visual open-loop conditions (vision of the hand interrupted as the hand started to move). The latter condition eliminated the possibility of a direct visual evaluation of the error between hand and target during pointing. Three main observations were derived from the present work: (a) the timing of coordinated eye-head-hand pointing at visual targets can be modified, depending on the task, without a deterioration in the accuracy of hand pointing; (b) mechanical constraints or instructions that limit the redundancy of degrees of freedom, such as preventing eye, head or trunk motion, lead to a decrease in accuracy; (c) the synergetic movement of eye, head and hand when pointing at a visible target is not trivially the superposition of eye and head shifts added to hand pointing. Indeed, the strategy of such a coordinated action can modify the kinematics of the head so that the movements of head and hand terminate at approximately the same time. The main conclusion is that eye-head coordination is carried out optimally by parallel processing in which both gaze and hand motor responses are initiated on the basis of a poorly defined retinal signal. The accuracy of hand pointing is not conditioned by head movement per se and does not depend on the relative timing of eye, head and hand movements (synergetic vs sequential responses). However, a decrease in the accuracy of hand pointing was observed in the synergetic condition when target fixation was not stabilized before the target was extinguished. This suggests that when the orienting saccade reaches the target before hand movement onset, visual updating of the hand motor control signal may occur; rapid processing of this final input allows a sharper redefinition of the hand landing point.

19.
People naturally direct their gaze to visible hand-movement goals; doing so improves reach accuracy through the use of signals related to gaze position and visual feedback of the hand. Here, we studied where people naturally look when acting on remembered target locations. Four targets were presented on a screen, in peripheral vision, while participants fixated a central cross (encoding phase). Four seconds later, participants used a pen to mark the remembered locations while free to look wherever they wished (recall phase). Visual references, including the screen and the cross, were present throughout. During recall, participants neither looked at the marked locations nor suppressed eye movements. Instead, gaze behavior was erratic, comprising gaze shifts loosely coupled in time and space with the hand movements. To examine whether eye and hand movements during encoding affected gaze behavior during recall, in additional encoding conditions participants marked the visible targets with free gaze, marked them while fixating the central cross, or simply looked at them. All encoding conditions yielded similarly erratic gaze behavior during recall. Furthermore, encoding mode did not influence recall performance, suggesting that participants did not exploit, during recall, sensorimotor memories related to hand and gaze movements made during encoding. Finally, we recorded a similarly loose coupling between hand and eye movements during an object-manipulation task performed in darkness after participants had viewed the task environment. We conclude that acting on remembered versus visible targets can engage fundamentally different control strategies, with gaze largely decoupled from movement goals during memory-guided actions.

20.
Visual information is vital for fast and accurate hand movements. It has been demonstrated that allowing free eye movements results in greater accuracy than when the eyes remain centrally fixated. Three explanations for why free gaze improves accuracy are: shifting gaze to a target allows visual feedback to guide the hand to the target (feedback loop); shifting gaze generates ocular proprioception that can be used to update a movement (feedback-feedforward); or efference copy could be used to direct hand movements (feedforward). In this experiment we used a double-step task and manipulated the utility of ocular-proprioceptive feedback about eye-in-head position by removing the second target during the saccade. We confirm the advantage of free gaze for sequential movements in a double-step pointing task and document eye-hand lead times of approximately 200 ms for both initial and secondary movements. The observation that participants move gaze well ahead of the current hand target rules out foveal feedback as a major contribution. We argue for a feedforward model based on eye-movement efference as the major factor enabling accurate hand movements. The results of the double-step task also suggest the need for some buffering of efference and ocular-proprioceptive signals to cope with the situation in which the eye has moved to a location ahead of the current target for the hand movement. We estimate that this buffer period may range between 120 and 200 ms without significant impact on hand-movement accuracy.
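
The buffering idea in the final sentences can be illustrated with a toy efference-copy buffer. This is entirely our construction, not the authors' model; the 200-ms window is the estimate quoted above:

    from collections import deque

    class EfferenceBuffer:
        """Keeps recent eye-movement efference copies so a hand controller
        can read out where gaze pointed when its target was specified."""
        def __init__(self, window_ms=200):
            self.window_ms = window_ms
            self.samples = deque()          # (time_ms, gaze_direction_deg)

        def record(self, t_ms, gaze_deg):
            self.samples.append((t_ms, gaze_deg))
            # Discard samples older than the buffer window.
            while self.samples and t_ms - self.samples[0][0] > self.window_ms:
                self.samples.popleft()

        def gaze_at(self, t_ms):
            # Most recent buffered gaze sample at or before t_ms.
            best = None
            for t, g in self.samples:
                if t <= t_ms:
                    best = g
            return best

    buf = EfferenceBuffer()
    for t, g in [(0, 0.0), (50, 10.0), (150, 25.0)]:   # gaze leads the hand
        buf.record(t, g)
    print(buf.gaze_at(100))   # -> 10.0: gaze signal from when the hand target was set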
