Similar Documents
1.
The aim of this investigation was to gain further insight into control strategies used for whole-body reaching tasks. Subjects were requested to step and reach to remembered target locations in normal room lighting (LIGHT) and complete darkness (DARK) with their gaze directed toward or eccentric to the remembered target location. Targets were located centrally at three different heights. Eccentric anchors for gaze direction were located at target height and initial target distance, either 30° to the right or 20° to the left of the target location. Control trials, where targets remained in place, and remembered-target trials were randomly presented. We recorded movements of the hand, eye and head while subjects stepped and reached to real or remembered target locations. Lateral, vertical and anterior–posterior (AP) hand errors, eye location, and gaze direction deviations were determined relative to control trials. Final hand location errors varied by target height, lighting condition and gaze eccentricity. Lower reaches in the DARK compared to the LIGHT condition were common and, when matched with a tendency to reach above the low target, helped explain more accurate reaches for this target in darkness. Anchoring the gaze eccentrically reduced hand errors in the AP direction and increased errors in the lateral direction. These results could be explained by deviations in eye locations and gaze directions, which were significant predictors of final reach errors, accounting for 17–47% of final hand error variance. Results also confirmed a link between gaze deviations and hand and head displacements, suggesting that gaze direction is used as a common input for movement of the hand and body. Additional links between constant and variable eye deviations and hand errors were common for the AP direction but not for the lateral or vertical directions. When combined with data regarding hand error predictions, we found that subjects' alterations in body movement in the AP direction were associated with AP adjustments in their reach, but final hand position adjustments were associated with gaze direction alterations for movements in the vertical and horizontal directions. These results support the hypothesis that gaze direction provides a control signal for hand and body movement and that this control signal is used for movement direction and not amplitude.
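A sketch of the error-prediction analysis described above: final hand error regressed on eye-location and gaze-direction deviations via ordinary least squares. All trial data, weights, and noise levels below are simulated placeholders; only the 17–47% variance-accounted-for range comes from the abstract.

```python
import numpy as np

# Hypothetical per-trial deviations relative to control trials:
# column 0 = eye-location deviation, column 1 = gaze-direction deviation.
rng = np.random.default_rng(0)
gaze_dev = rng.normal(size=(40, 2))
hand_err = gaze_dev @ np.array([0.6, 0.4]) + rng.normal(scale=1.0, size=40)

# Ordinary least squares: hand error ~ intercept + eye location + gaze direction.
X = np.column_stack([np.ones(len(hand_err)), gaze_dev])
beta, *_ = np.linalg.lstsq(X, hand_err, rcond=None)
pred = X @ beta

# Proportion of final-hand-error variance accounted for (paper: 17-47%).
r2 = 1 - np.sum((hand_err - pred) ** 2) / np.sum((hand_err - hand_err.mean()) ** 2)
print(f"variance accounted for: {r2:.0%}")
```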

2.
A moving background alters the perceived direction of target motion (the Duncker illusion). To test whether this illusion also affects pointing movements to remembered/extrapolated target locations, we constructed a display in which a target moved in a straight line and disappeared behind a band of moving random dots. Subjects were required to touch the spot where the target would emerge from the occlusion. The four directions of random-dot motion induced pointing errors that were predictable from the Duncker illusion. Because it has been previously established that saccadic direction is influenced by this illusion, gaze was subsequently recorded in a second series of experiments while subjects performed the pointing task and a similar task with eye-tracking only. In the pointing task, subjects typically saccaded to the lower border of the occlusion zone as soon as the target disappeared and then tried to maintain fixation at that spot. However, it was particularly obvious in the eye-tracking-only condition that horizontally moving random dots generally evoked an appreciable ocular following response, altering the gaze direction. Hand-pointing errors were related to the saccadic gaze error but were more highly correlated with final gaze errors (resulting from the initial saccade and the subsequent ocular following response). The results suggest a model of limb control in which gaze position can provide the target signal for limb movement.
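A minimal sketch of how the illusion predicts the pointing errors: perceived target velocity is modeled as the physical velocity minus a fraction of the background velocity, so the extrapolated exit point shifts opposite to the dot motion. The induced-motion gain of 0.3 is an assumption for illustration, not a fitted value from the paper.

```python
import numpy as np

def perceived_direction_deg(target_vel, background_vel, gain=0.3):
    """Duncker-illusion sketch: subtract a fraction of the background
    motion from the target motion, then return the perceived heading."""
    v = np.asarray(target_vel, float) - gain * np.asarray(background_vel, float)
    return np.degrees(np.arctan2(v[1], v[0]))

target = (0.0, 5.0)                     # target moving straight up
for dots in [(4.0, 0.0), (-4.0, 0.0)]:  # horizontally moving random dots
    print(dots, round(perceived_direction_deg(target, dots), 1))
# Rightward dots tilt the perceived path leftward (>90 deg) and vice
# versa, predicting the direction of the errors at the occlusion exit.
```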

3.
We have investigated pointing movements toward remembered targets after an intervening self-generated body movement. We tested to what extent visual information about the environment or finger position is used in updating target position relative to the body after a step, and whether gaze plays a role in the accuracy of the pointing movement. Subjects were tested in three visual conditions: complete darkness (DARK), complete darkness with visual feedback of the finger (FINGER), and with vision of a well-defined environment and with feedback of the finger (FRAME). Pointing accuracy was rather poor in the FINGER and DARK conditions, which did not provide vision of the environment. Constant pointing errors were mainly in the direction of the step and ranged from about 10 to 20 cm. Differences between binocular fixation and target position were often related to the step size and direction. At the beginning of the trial, when the target was visible, fixation was on target. After target extinction, fixation moved away from the target relative to the subject. The variability in the pointing positions appeared to be related to the variable errors in fixation, and the co-variance increased during the delay period after the step, reaching a highly significant value at the time of pointing. The significant co-variance between fixation position and pointing is not the result of a mutual dependence on the step, since we corrected for any direct contributions of the step in both signals. We conclude that the co-variance between fixation and pointing position reflects 1) a common command signal for gaze and arm movements and 2) an effect of fixation on pointing accuracy at the time of pointing.
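The step correction mentioned above can be sketched as residualizing both signals on step size before correlating them. The trial data below are simulated; the `common` term is a stand-in for the hypothesized shared gaze/arm command, and all coefficients are illustrative.

```python
import numpy as np

def residualize(y, x):
    """Remove the linear contribution of x (here, step size) from y."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(1)
step = rng.normal(size=60)                     # per-trial step size
common = rng.normal(size=60)                   # shared command signal
fixation = 0.5 * step + common + rng.normal(scale=0.3, size=60)
pointing = 0.4 * step + common + rng.normal(scale=0.3, size=60)

# Correlate after removing direct step contributions from both signals.
r = np.corrcoef(residualize(fixation, step), residualize(pointing, step))[0, 1]
print(f"step-corrected fixation-pointing correlation: {r:.2f}")
```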

4.
Eye-hand coordination requires the brain to integrate visual information with the continuous changes in eye, head, and arm positions. This is a geometrically complex process because the eyes, head, and shoulder have different centers of rotation. As a result, head rotation causes the eye to translate with respect to the shoulder. The present study examines the consequences of this geometry for planning accurate arm movements in a pointing task with the head at different orientations. When asked to point at an object, subjects oriented their arm to position the fingertip on the line running from the target to the viewing eye. But this eye-target line shifts when the eyes translate with each new head orientation, thereby requiring a new arm pointing direction. We confirmed that subjects do realign their fingertip with the eye-target line during closed-loop pointing across various horizontal head orientations when gaze is on target. More importantly, subjects also showed this head-position-dependent pattern of pointing responses for the same paradigm performed in complete darkness. However, when gaze was not on target, compensation for these translations in the rotational centers partially broke down. As a result, subjects tended to overshoot the target direction relative to current gaze, perhaps explaining previously reported errors in aiming the arm to retinally peripheral targets. These results suggest that knowledge of head position signals and the resulting relative displacements in the centers of rotation of the eye and shoulder are incorporated using open-loop mechanisms for eye-hand coordination, but these translations are best calibrated for foveated, gaze-on-target movements.
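A geometric sketch of the key point: because the eye sits off the head's rotation axis, turning the head translates the eye relative to the shoulder, which shifts the eye-target line and hence the fingertip position that keeps the finger on that line. The offsets, target location, and reach depth are illustrative values, not measurements from the study.

```python
import numpy as np

TARGET = np.array([0.10, 0.50])        # target in shoulder coordinates (m)
EYE_OFFSET = np.array([0.03, 0.10])    # eye relative to head axis (assumed)
REACH = 0.35                           # fingertip distance from eye (m)

def eye_position(head_angle_deg):
    """Rotating the head rotates the eye offset, translating the eye."""
    a = np.radians(head_angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return R @ EYE_OFFSET

def fingertip_on_eye_target_line(head_angle_deg):
    eye = eye_position(head_angle_deg)
    u = (TARGET - eye) / np.linalg.norm(TARGET - eye)  # eye-target line
    return eye + REACH * u

for angle in (-30, 0, 30):             # horizontal head orientations
    print(angle, fingertip_on_eye_target_line(angle).round(3))
# The required fingertip position changes with head angle even though
# the target is fixed -- the compensation subjects showed in darkness.
```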

5.
The aim of this study was to further understand how the brain represents spatial information for shaping aiming movements to targets. Both behavioral and neurophysiological studies have shown that the brain represents spatial memory for reaching targets in an eye-fixed frame. To date, these studies have only shown how the brain stores and updates target locations for generating a single arm movement. But once a target's location has been computed relative to the hand to program a pointing movement, is that information reused for subsequent movements to the same location? Or is the remembered target location reconverted from eye to motor coordinates each time a pointing movement is made? To test between these two possibilities, we had subjects point twice to the remembered location of a previously foveated target after shifting their gaze to the opposite side of the target site before each pointing movement. When we compared the direction of pointing errors for the second movement to those of the first, we found that errors for each movement varied as a function of current gaze so that pointing endpoints fell on opposite sides of the remembered target site in the same trial. Our results suggest that when shaping multiple pointing movements to the same location the brain does not use information from the previous arm movement such as an arm-fixed representation of the target but instead mainly uses the updated eye-fixed representation of the target to recalculate its location into the appropriate motor frame.
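A one-dimensional sketch of the eye-fixed account: before each movement the remembered target is re-expressed relative to the current gaze, and its retinal eccentricity is over-estimated. The 10% overshoot gain is a hypothetical parameter chosen only to reproduce the sign pattern of the reported errors.

```python
def predicted_endpoint(target_deg, gaze_deg, overshoot_gain=0.1):
    """Re-derive the target from its eye-fixed (gaze-relative)
    representation, over-estimating eccentricity by a small gain."""
    eccentricity = target_deg - gaze_deg       # eye-fixed representation
    return gaze_deg + (1.0 + overshoot_gain) * eccentricity

target = 0.0
for gaze in (-15.0, 15.0):   # gaze shifted to opposite sides of the target
    print(gaze, predicted_endpoint(target, gaze))
# gaze -15 -> endpoint +1.5; gaze +15 -> endpoint -1.5: the two pointing
# endpoints fall on opposite sides of the target within the same trial.
```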

6.
7.
We investigated whether the order of gaze shifts affected spatial and temporal aspects of discrete bimanual pointing movements. Ten male participants concurrently executed bimanual pointing movements as quickly and accurately as possible to left and right lateral targets presented with the same and different amplitudes. They were asked to gaze initially at the left target and subsequently at the right target, or vice versa. Each hand showed less variable error and a faster reaction when the initial gaze shifted to the corresponding target than when the subsequent gaze shifted to it. For the same-amplitude targets, constant error (CE) was not influenced by the gaze order conditions. However, for the different-amplitude targets, CE for the short-amplitude target became larger when they initially gazed at the long-amplitude target than when they initially gazed at the short-amplitude target. The larger overshoot of the hand for the short-amplitude target occurred when the participants could not afford to foveate the target. Our results suggest that the order of gaze shifts determines whether asymmetric amplitude assimilation between the two hands occurs or not. Fast, consistent, and accurate bimanual pointing movements might be attributable to updating gaze-centered representations of target positions.

8.
This study examined two-segment pointing movements with various accuracy constraints to test whether there is segment interdependency in saccadic eye movements that accompany manual actions. A second purpose was to examine how planning of movement accuracy and amplitude for the second pointing influences the timing of the gaze shift to the second target at the transition between the two segments. Participants performed a rapid two-segment pointing task in which the first segment had two target sizes, and the second segment had two target sizes and two movement distances. The results showed that duration and peak velocity of the initial pointing were influenced by altered kinematic characteristics of the second pointing due to task manipulations of the second segment, revealing segment interdependency in hand movements. In contrast, saccade duration and velocity did not show such segment interdependency. Thus, unlike hand movements, saccades are planned and organized independently for each segment during sequential manual actions. The gaze shift to the second target was delayed when the initial pointing was made to the smaller first target, indicating that gaze anchoring to the initial target is used to verify the pointing termination. Importantly, the gaze shift was also delayed when the second pointing was made to the smaller or farther second target. This suggests that visual information of the hand position at the initial target is important for planning the movement distance and accuracy of the next pointing. Furthermore, the timings of gaze shift and pointing initiation to the second target were highly correlated. Thus, at the transition between the two segments, gaze and hand movements are highly coupled in time, which allows the sensorimotor system to process visual and proprioceptive information for the verification of pointing termination and the planning of the next pointing.

9.
In human subjects, we investigated the accuracy of goal-directed arm movements performed without sight of the arm; errors of target localization and of motor control thus remained uncorrected by visual feedback and became manifest as pointing errors. Target position was provided either as retinal eccentricity or as eye position. By comparing the results to those obtained previously with combined retinal plus extraretinal position cues, the relative contribution of the two signals toward visual localization could be studied. When target position was provided by retinal signals, pointing responses revealed an over-estimation of retinal eccentricity which was of similar size for all eccentricities tested and was independent of gaze direction. These findings were interpreted as a magnification effect of perifoveal retinal areas. When target position was provided as eye position, pointing was characterized by substantial inter- and intra-subject variability, suggesting that the accuracy of localization by extraretinal signals is rather limited. In light of these two qualitatively different deficits, we discuss possible mechanisms by which the two signals may interact to yield a more veridical visual localization.
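The two localization deficits can be contrasted in a toy model: retinal coding adds a constant outward bias at every eccentricity, while extraretinal (eye-position) coding is roughly unbiased but highly variable. Both the 2° bias and the noise level below are assumptions for illustration, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(4)

def point_from_retinal(ecc_deg, bias_deg=2.0):
    # Constant over-estimation of retinal eccentricity (magnification
    # effect of perifoveal retina); same size at all eccentricities.
    return ecc_deg + np.sign(ecc_deg) * bias_deg

def point_from_eye_position(target_deg, noise_sd_deg=4.0):
    # Extraretinal coding: roughly unbiased but highly variable.
    return target_deg + rng.normal(scale=noise_sd_deg)

for ecc in (5.0, 10.0, 20.0):
    retinal = point_from_retinal(ecc)                      # biased, precise
    extraretinal = [point_from_eye_position(ecc) for _ in range(20)]
    print(ecc, retinal, round(float(np.std(extraretinal)), 1))
```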

10.
We examined the role of gaze in a task where subjects had to reproduce the position of a remembered visual target with the tip of the index finger, referred to as pointing. Subjects were tested in 3 visual feedback conditions: complete darkness (dark), complete darkness with visual feedback of the finger position (finger), and with vision of a well-defined environment and feedback of the finger position (frame). Pointing accuracy increases with feedback about the finger or visual environment. In the finger and frame conditions, the 95% confidence regions of the variable errors have an ellipsoidal distribution with the main axis oriented toward the subjects' head. During the 1-s period when the target is visible, gaze is almost on target. However, gaze drifts away from the target relative to the subject in the delay period after target disappearance. In the finger and frame conditions, gaze returns toward the remembered target during pointing. In all 3 feedback conditions, the correlations between the variable errors of gaze and pointing position increase during the delay period, reaching highly significant values at the time of pointing. Our results demonstrate that gaze affects the accuracy of pointing. We conclude that the covariance between gaze and pointing position reflects a common drive for gaze and arm movements and an effect of gaze on pointing accuracy at the time of pointing. Previous studies interpreted the orientation of variable errors as indicative of a frame of reference used for pointing. Our results suggest that the orientation of the error ellipses toward the head is at least partly the result of gaze drift in the delay period.
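The 95% confidence regions can be computed from the covariance of the pointing endpoints: eigenvectors give the ellipsoid axes and eigenvalues their lengths. A minimal sketch with simulated 3-D endpoints whose long axis is, by construction, one (head-ward) direction.

```python
import numpy as np

def confidence_ellipsoid(endpoints, chi2_3dof_95=7.81):
    """Semi-axis lengths and directions of the 95% ellipsoid of 3-D
    endpoints (chi-square quantile for 3 dof at 0.95 is ~7.81)."""
    cov = np.cov(np.asarray(endpoints).T)
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    return np.sqrt(chi2_3dof_95 * eigvals), eigvecs

rng = np.random.default_rng(2)
# Simulated endpoints: variance inflated along one (head-ward) axis.
pts = rng.multivariate_normal([0, 0, 0], np.diag([1.0, 1.0, 4.0]), size=50)
radii, axes = confidence_ellipsoid(pts)
print(radii.round(2))            # the longest semi-axis dominates
print(axes[:, -1].round(2))      # ...and points along the inflated axis
```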

11.
Accurate information about gaze direction is required to direct the hand towards visual objects in the environment. In the present experiments, we tested whether retinal inputs affect the accuracy with which healthy subjects indicate their gaze direction with the unseen index finger after voluntary saccadic eye movements. In experiment 1, subjects produced a series of back and forth saccades (about eight) of self-selected magnitudes before positioning the eyes in a self-chosen direction to the right. The saccades were produced while facing one of four possible visual scenes: (1) complete darkness, (2) a scene composed of a single light-emitting diode (LED) located at 18 degrees to the right, (3) a visually enriched scene made up of three LEDs located at 0 degrees, 18 degrees and 36 degrees to the right, or (4) a normally illuminated scene where the lights in the experimental room were turned on. Subjects were then asked to indicate their gaze direction with their unseen index finger. In the conditions where the visual scenes were composed of LEDs, subjects were instructed to foveate or not foveate one of the LEDs with their last saccade. It was therefore possible to compare subjects' accuracy when pointing in the direction of their gaze in conditions with and without foveal stimulation. The results showed that the accuracy of the pointing movements decreased when subjects produced their saccades in a dark environment or in the presence of a single LED compared to when the saccades were generated in richer visual environments. Visual stimulation of the fovea did not increase subjects' accuracy when pointing in the direction of their gaze compared to conditions where there was only stimulation of the peripheral retina. Experiment 2 tested how the retinal signals could contribute to the coding of eye position after saccadic eye movements. More specifically, we tested whether the shift in the retinal image of the environment during the saccades provided information about the reached position of the eyes. Subjects produced their series of saccades while facing a visual environment made up of three LEDs. In some trials, the whole visual scene was displaced either 4.5 degrees to the left or 3 degrees to the right during the primary saccade. These displacements created mismatches between the shift of the retinal image of the environment and the extent of gaze deviation. The displacements of the visual scene were not perceived by the subjects because they occurred near the peak velocity of the saccade (saccadic suppression phenomenon). Pointing accuracy was not affected by the unperceived shifts of the visual scene. The results of these experiments suggest that the arm motor system receives more precise information about gaze direction when there is retinal stimulation than when there is none. They also suggest that the most relevant factor in defining gaze direction is not the retinal locus of the visual stimulation (that is, peripheral or foveal) but rather the amount of visual information. Finally, the results suggest an enhanced egocentric encoding of gaze direction by the retinal inputs and do not support a retinotopic model for encoding gaze direction.

12.
During visually guided manual movements, gaze is usually fixated on a target until a pointing movement to that target is completed, a behavior known as gaze anchoring. We previously examined gaze anchoring during a two-segment eye–hand task under a low accuracy constraint. Eye movements were made to predetermined first and second targets, while hand movements varied across two conditions: (1) stop at the first target and discontinue (HS1), and (2) stop at both the first and second targets (HS1S2). Young adults previously broke gaze anchoring at the first target only when the second pointing was excluded (HS1). However, older adults did not break gaze anchoring in either condition. The present study further investigated whether young and older adults break gaze anchoring through short-term practice under the same conditions; an HS1 practice preceded an HS1S2 practice. The results showed that the timing of terminating gaze anchoring relative to pointing completion oscillated considerably during the HS1 practice until it stabilized. Conversely, that timing was stable during the HS1S2 practice. Nevertheless, the young adults benefited from the HS1 practice and broke gaze anchoring even when the second pointing was included in HS1S2. This indicates that gaze anchoring to pointing completion is not a prerequisite for the production of subsequent pointing. By contrast, older adults did not improve the timing of gaze anchoring termination in either practice condition, thereby failing to break gaze anchoring. Thus, aging compromises the predictive control of terminating gaze anchoring relative to pointing completion, and this is difficult to overcome through short-term practice.

13.
The analysis of errors in two-joint reaching movements has provided clues about sensorimotor processing algorithms. The present study extends this focus to situations where the head, trunk, and legs join with the arm to help reach targets placed slightly beyond arm's length. Subjects reached accurately to touch “real targets” or reached to the remembered locations of “virtual targets” (i.e., targets removed at the start of the reach). Subjects made large errors in the virtual-target condition and these errors were analyzed with the aim of revealing the implications for whole-body coordination. Subjects were found to rotate the head less in the virtual-target condition (when compared with accurate movements to real targets). This resulted in a more limited range of head postures, and the final head angles at the end of the movements were geometrically related to the incorrect hand locations, perhaps accounting for some portion of the errors. This suggests that head-eye-hand coordination plays an important role in the organization of these movements and leads to the hypothesis that a representation of current gaze direction may serve as a reference signal for arm motor control.

14.
A well-coordinated pattern of eye and hand movements can be observed during goal-directed arm movements. Typically, a saccadic eye movement precedes the arm movement, and its occurrence is temporally correlated with the start of the arm movement. Furthermore, the coupling of gaze and aiming movements is also observable after pointing initiation. It has recently been observed that saccades cannot be directed to new target stimuli, away from a pointing target stimulus. Saccades directed to targets presented during the final phase of a pointing movement were delayed until after pointing movement offset ("gaze anchoring"). The present study investigated whether ocular gaze is anchored to a pointing target during the entire pointing movement. In experiment 1, new targets were presented at various times during a pointing movement, triggered by the kinematics of the arm movement itself (movement onset, peak acceleration/velocity/deceleration, and offset). Subjects had to make a saccade to the new target as fast as possible while maintaining the pointing movement to the initial target. Saccadic latencies were increased by an amount of time that approximately equaled the remaining pointing time after saccadic target presentation, with the majority of saccades executed after pointing movement offset. The nature of the signal driving gaze stabilization during pointing was investigated in experiment 2. In previous experiments where ocular gaze was anchored to a pointing target, subjects could always see their moving arm, so it was unknown whether a visual image of the moving arm, an afferent (proprioceptive) signal, or an efferent (motor-control-related) signal produced gaze anchoring. In experiment 2, subjects had to point with or without vision of the moving arm to test whether a visual signal is used to anchor gaze to a pointing target. Results indicate that gaze anchoring was also observed without vision of the moving arm. The findings support the existence of a mechanism enforcing ocular gaze anchoring during the entire duration of a pointing movement. Moreover, such a mechanism uses an internally generated (efferent) or proprioceptive, nonvisual signal. Possible neural substrates underlying these processes are discussed, as well as the role of selective attention.
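The latency result fits a simple additive model: the saccade is withheld until pointing offset, so its latency grows by roughly the pointing time remaining when the new target appears. The 200-ms baseline and the remaining-time values below are illustrative, not the paper's measurements.

```python
def anchored_saccade_latency_ms(baseline_ms, remaining_pointing_ms):
    """Gaze-anchoring sketch: latency = normal saccadic latency plus
    the pointing time still remaining at target presentation."""
    return baseline_ms + remaining_pointing_ms

# New targets triggered at successive kinematic landmarks of the reach:
for remaining in (400, 250, 120, 0):      # ms of pointing left (assumed)
    print(remaining, anchored_saccade_latency_ms(200, remaining))
```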

15.
Invariant patterns in the distribution of the endpoints of reaching movements have been used to suggest that two important movement parameters, direction and extent, are planned by two independent processing channels. This study examined this hypothesis by testing the effect of task conditions on variable errors of direction and extent of reaching movements. Subjects made reaching movements to 25 target locations in a horizontal workspace, in two main task conditions. In task 1, subjects looked directly at the target location on the horizontal workspace before closing their eyes and pointing to it. In task 2, arm movements were made to the same target locations in the same horizontal workspace, but target location was displayed on a vertical screen in front of the subjects. For both tasks, variable errors of movement extent (on-axis error) were greater than those of movement direction (off-axis error). As a result, the spatial distributions of endpoints about a given target usually formed an ellipse, with the principal axis oriented in the mean movement direction. Also, both on- and off-axis errors increased with movement amplitude. However, the magnitude of errors, especially on-axis errors, scaled differently with movement amplitude in the two task conditions. This suggests that variable errors of direction and extent can be modified independently by changing the nature of the sensorimotor transformations required to plan the movements. This finding is further evidence that the direction and extent of reaching movements appear to be controlled independently by the motor system.
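On- and off-axis errors follow from projecting each endpoint error onto the mean movement direction and its normal. A minimal 2-D sketch with simulated endpoints whose extent variability exceeds their directional variability, as in the reported ellipses.

```python
import numpy as np

def on_off_axis_errors(start, target, endpoints):
    """Project endpoint errors onto the movement axis (extent/on-axis)
    and its normal (direction/off-axis)."""
    u = (target - start) / np.linalg.norm(target - start)
    n = np.array([-u[1], u[0]])            # 2-D normal to the axis
    err = np.asarray(endpoints) - target
    return err @ u, err @ n

start, target = np.array([0.0, 0.0]), np.array([0.0, 20.0])
rng = np.random.default_rng(3)
endpoints = target + rng.normal(scale=[0.5, 2.0], size=(30, 2))
on_axis, off_axis = on_off_axis_errors(start, target, endpoints)
print(on_axis.std().round(2), off_axis.std().round(2))
# Extent (on-axis) variability exceeds direction (off-axis) variability,
# so the endpoint ellipse is elongated along the movement direction.
```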

16.
Visual information is vital for fast and accurate hand movements. It has been demonstrated that allowing free eye movements results in greater accuracy than when the eyes remain centrally fixated. Three explanations for why free gaze improves accuracy are: shifting gaze to a target allows visual feedback to guide the hand to the target (feedback loop), shifting gaze generates ocular proprioception that can be used to update a movement (feedback–feedforward), or efference copy could be used to direct hand movements (feedforward). In this experiment we used a double-step task and manipulated the utility of ocular-proprioceptive feedback about eye-to-head position by removing the second target during the saccade. We confirm the advantage of free gaze for sequential movements with a double-step pointing task and document eye–hand lead times of approximately 200 ms for both initial and secondary movements. The observation that participants move gaze well ahead of the current hand target dismisses foveal feedback as a major contribution. We argue for a feedforward model based on eye-movement efference as the major factor in enabling accurate hand movements. The results with the double-step task also suggest the need for some buffering of efference and ocular-proprioceptive signals to cope with the situation where the eye has moved to a location ahead of the current target for the hand movement. We estimate that this buffer period may range between 120 and 200 ms without significant impact on hand movement accuracy.

17.
In previous studies, we provided evidence for a directional distortion of the endpoints of movements to memorized target locations. This distortion was similar to a perceptual distortion in direction discrimination known as the oblique effect, so we named it the “motor oblique effect”. In this report we analyzed the directional errors during the evolution of the movement trajectory in memory-guided and visually guided pointing movements and compared them with directional errors in a perceptual experiment of arrow pointing. We observed that the motor oblique effect was present in the evolving trajectory of both memory-guided and visually guided reaching movements. In memory-guided pointing the motor oblique effect did not disappear during trajectory evolution, while in visually guided pointing the motor oblique effect disappeared with decreasing distance from the target and was smaller in magnitude compared to the perceptual oblique effect and the memory-guided motor oblique effect early after movement initiation. The motor oblique effect in visually guided pointing increased when reaction time was short and disappeared with longer reaction times. The results are best explained by the hypothesis that a low-level oblique effect is present in visually guided pointing movements and is corrected by a mechanism that does not depend on visual feedback from the evolving trajectory and might even be completed during movement planning. A second, cognitive oblique effect is added in the perceptual estimation of direction and affects memory-guided pointing movements. It is finally argued that the motor oblique effect can be a useful probe for the study of perception–action interaction.

18.
The purpose of this study was to examine the spatial coding of eye movements during static roll tilt (up to ±45°) relative to perceived earth and head orientations. Binocular videographic recordings obtained in darkness from eight subjects allowed us to quantify the mean deviations in gaze trajectories along both horizontal and vertical coordinates relative to the true earth and head orientations. We found that both variability and curvature of gaze trajectories increased with roll tilt. The trajectories of eye movements made along the perceived earth-horizontal (PEH) were more accurate than movements along the perceived head-horizontal (PHH). The trajectories of both PEH and PHH saccades tended to deviate in the same direction as the head tilt. The deviations in gaze trajectories along the perceived earth-vertical (PEV) and perceived head-vertical (PHV) were both similar to the PHH orientation, except that saccades along the PEV deviated in the opposite direction relative to the head tilt. The magnitude of deviations along the PEV, PHH, and PHV corresponded to perceptual overestimations of roll tilt obtained from verbal reports. Both PEV gaze trajectories and perceptual estimates of tilt orientation differed following clockwise rather than counterclockwise tilt rotation; however, the PEH gaze trajectories were less affected by the direction of tilt rotation. Our results suggest that errors in gaze trajectories along the PEV and perceived head orientations increase during roll tilt in a similar way to perceptual errors of tilt orientation. Although PEH and PEV gaze trajectories became nonorthogonal during roll tilt, we conclude that the spatial coding of eye movements during roll tilt is overall more accurate for the perceived earth reference frame than for the perceived head reference frame.

19.
The abilities of human subjects to perform reach and grasp movements to remembered locations/orientations of a cylindrical object were studied under four conditions: (1) visual presentation of the object, reach with vision allowed; (2) visual presentation, reach while blindfolded; (3) kinesthetic presentation of the object, reach while blindfolded; and (4) kinesthetic presentation, reach with vision. The results showed that subjects were very accurate in locating the object in the purely kinesthetic condition and that directional errors were low in all four conditions, but predictable errors in reach distance occurred in conditions 1, 2, and 4. The pattern of these distance errors was similar to that identified in previous research using a pointing task to a small target (i.e., overshoots of close targets, undershoots of far targets). The observation that the pattern of distance errors in condition 4 was similar to that of conditions 1 and 2 suggests that subjects transform kinesthetically defined hand locations into a visual coordinate system when vision is available during upper-limb motion to a remembered kinesthetic target. The differences in orientation of the upper limb between target and reach positions in condition 3 were similar in magnitude to the errors associated with kinesthetic perceptions of arm and hand orientations in three-dimensional space reported in previous studies. However, fingertip location was specified with greater accuracy than the orientation of upper-limb segments. This was apparently accomplished by compensating for variations in shoulder (arm) angles with oppositely directed variations in elbow joint angles. Subjects were also able to transform visually perceived object orientation into an appropriate hand orientation for grasp, as indicated by the relation between hand roll angle and object orientation (elevation angle). The implications of these results for control of upper-limb motion to external targets are discussed.

20.
At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
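The gain-element account can be sketched in one dimension: target and hand are each re-expressed relative to gaze and scaled by separate gains before the difference vector is formed. The gains 1.1 and 0.9 are hypothetical; the point is only that unequal gains make the error depend on both gaze direction and initial hand position, as reported.

```python
def plan_reach_1d(target, hand, gaze, g_target=1.1, g_hand=0.9):
    """Eye-centered planning sketch: gains modulate the eye-centered
    target and hand signals before the difference vector is computed."""
    target_eye = g_target * (target - gaze)
    hand_eye = g_hand * (hand - gaze)
    return target_eye - hand_eye          # movement vector

target = 0.0
for gaze in (-10.0, 10.0):                # fixation directions
    for hand in (-5.0, 5.0):              # initial hand positions
        endpoint = hand + plan_reach_1d(target, hand, gaze)
        print(gaze, hand, round(endpoint - target, 2))  # pointing error
# Errors vary with both gaze and initial hand position, consistent with
# an eye- or hand-centered (not body-centered) origin of the errors.
```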
