Similar Articles
20 similar articles found.
2.
We have investigated pointing movements toward remembered targets after an intervening self-generated body movement. We tested to what extent visual information about the environment or finger position is used in updating target position relative to the body after a step, and whether gaze plays a role in the accuracy of the pointing movement. Subjects were tested in three visual conditions: complete darkness (DARK), complete darkness with visual feedback of the finger (FINGER), and with vision of a well-defined environment and with feedback of the finger (FRAME). Pointing accuracy was rather poor in the FINGER and DARK conditions, which did not provide vision of the environment. Constant pointing errors were mainly in the direction of the step and ranged from about 10 to 20 cm. Differences between binocular fixation and target position were often related to the step size and direction. At the beginning of the trial, when the target was visible, fixation was on target. After target extinction, fixation moved away from the target relative to the subject. The variability in the pointing positions appeared to be related to the variable errors in fixation, and the co-variance increased during the delay period after the step, reaching a highly significant value at the time of pointing. The significant co-variance between fixation position and pointing is not the result of a mutual dependence on the step, since we corrected for any direct contributions of the step in both signals. We conclude that the co-variance between fixation and pointing position reflects 1) a common command signal for gaze and arm movements and 2) an effect of fixation on pointing accuracy at the time of pointing.
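The analysis described above — correlating the variable errors of fixation and pointing only after removing each signal's direct dependence on the step — amounts to a partial correlation. The following is an illustrative sketch on synthetic data, not the authors' analysis code; the trial count, noise levels, and the least-squares residualization are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of trials

# Synthetic per-trial signals (cm): a step displacement, plus fixation and
# pointing errors that each contain a step-driven part and a shared drive.
step = rng.normal(0.0, 5.0, n)
common = rng.normal(0.0, 2.0, n)  # stands in for a common command signal
gaze_err = 0.8 * step + common + rng.normal(0.0, 0.5, n)
point_err = 0.6 * step + common + rng.normal(0.0, 0.5, n)

def residualize(y, x):
    """Remove the linear (least-squares) contribution of x from y."""
    return y - (np.dot(x, y) / np.dot(x, x)) * x

# Correlating the residuals ensures the co-variance is not a mere
# by-product of both signals depending on the step.
r_raw = np.corrcoef(gaze_err, point_err)[0, 1]
r_partial = np.corrcoef(residualize(gaze_err, step),
                        residualize(point_err, step))[0, 1]
```

Here `r_partial` remains high because the synthetic signals share a common drive; if they shared only the step, it would fall to near zero.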

3.
It has been suggested that the basal ganglia preferentially contribute to movements made to remembered targets, whereas the cerebellum preferentially contributes to movements based on visual cues. Thus, it is possible that eye-hand coordination may differ in these two types of movement. To examine this issue we compared the response characteristics of combined eye and hand movements made towards visual versus remembered targets. In addition, the influence of the eye movement on the hand movement was investigated by comparing the effects of visual fixation in each task. Our results demonstrated that hand movement amplitude was greater when the hand movements were produced in isolation versus in combination with an eye movement. This was true regardless of whether the movement was made to a visual or a remembered target. This suggests that the integration of eye position information into the manual motor response occurs at a common neural site for both tasks. By contrast, the timing between saccade and hand onsets and offsets differed in the two conditions. This is consistent with the idea that the timing inherent in eye-hand coordination is the result of separate processing within either the basal ganglia or cerebellar systems. Taken together, the results from this study demonstrate that certain processes underlying eye-hand coordination during movements to visual versus remembered targets share a common neural substrate whereas others function independently.

4.
Single-neuron responses in motor and premotor cortex were recorded during a movement-sequence delay task. On each trial the monkey viewed a randomly selected sequence of target lights arrayed in two-dimensional space, remembered the sequence during a delay period, and then generated a coordinated sequence of movements to the remembered targets. Of 307 neurons studied, 25% were tuned specifically for either the first or the second target, but not both. In particular, for neurons tuned during both target presentations, tuned activity related to a particular first target direction was maintained during the presentation of a second target in a different direction. During the delay period, 32% of the neurons were tuned for upcoming movement in a single direction. These delay period responses often reflected activity patterns that first developed during target presentations and may therefore act to maintain target period information during the delay. Neurons with tuned activity during both the delay and movement periods exhibited two patterns: the first exhibited tuned responses during the delay that were correlated with the tuning of first-movement responses, while the second showed delay-period tuning that was better correlated with tuned responses during second movements. This indicates that, before movement, distinct neural populations are correlated with specific movements in a sequence. About half the neurons studied were not directionally tuned during the initiation, target, or delay periods, but did show systematic changes in activity during task performance. Some (34%) were exclusively tuned during movement and appear to be involved in the direct control of movement. Others (17%) showed changes in firing rate from period to period within a trial but no preference for any particular direction of movement.
Population analyses of tuned activity during the target and delay periods indicated that accurate directional information about both first and second movements was available in the neuronal ensemble well before reaching began. These results extend the idea that both motor and premotor cortex play a role in reaching behavior other than the direct control of muscles. While some early neural responses resembled muscle activation patterns involved in maintaining fixed postures before movement, others probably relate to the sensory-to-motor transformations, information storage in short-term memory, and movement preparation required to generate accurate reaching to remembered locations in space.

5.
We examined the role of gaze in a task where subjects had to reproduce the position of a remembered visual target with the tip of the index finger, referred to as pointing. Subjects were tested in 3 visual feedback conditions: complete darkness (dark), complete darkness with visual feedback of the finger position (finger), and with vision of a well-defined environment and feedback of the finger position (frame). Pointing accuracy increases with feedback about the finger or visual environment. In the finger and frame conditions, the 95% confidence regions of the variable errors have an ellipsoidal distribution with the main axis oriented toward the subjects' head. During the 1-s period when the target is visible, gaze is almost on target. However, gaze drifts away from the target relative to the subject in the delay period after target disappearance. In the finger and frame conditions, gaze returns toward the remembered target during pointing. In all 3 feedback conditions, the correlations between the variable errors of gaze and pointing position increase during the delay period, reaching highly significant values at the time of pointing. Our results demonstrate that gaze affects the accuracy of pointing. We conclude that the covariance between gaze and pointing position reflects a common drive for gaze and arm movements and an effect of gaze on pointing accuracy at the time of pointing. Previous studies interpreted the orientation of variable errors as indicative of a frame of reference used for pointing. Our results suggest that the orientation of the error ellipses toward the head is at least partly the result of gaze drift in the delay period.
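The 95% confidence regions and their main-axis orientation can be obtained from an eigendecomposition of the endpoint covariance. A minimal 2-D sketch on synthetic scatter (the data, and the use of the chi-square quantile for a 95% region, are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D pointing endpoints (cm) with anisotropic scatter, standing
# in for the variable errors pooled for one target.
pts = rng.multivariate_normal([0.0, 0.0], [[9.0, 0.0], [0.0, 1.0]], size=500)

cov = np.cov(pts.T)
evals, evecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
chi2_95 = 5.991                       # chi-square quantile, 2 dof, p = 0.95
half_axes = np.sqrt(chi2_95 * evals)  # semi-axes of the 95% ellipse
main_axis = evecs[:, -1]              # orientation of the largest scatter
```

With this synthetic covariance the recovered main axis lies along x, the direction of largest variance; on real data its orientation relative to the head is what the analysis above examines.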

6.
The accuracy of visually guided pointing movements decreases with speed. We have shown that for movements to a visually defined remembered target, the variability of the final arm endpoint position does not depend on movement speed. We hypothesized that this is because movements directed at remembered targets are produced without ongoing corrections. In the present study, this hypothesis was tested for pointing movements in 3D space to kinesthetically defined remembered targets. Passive versus active acquisition of kinesthetic information was contrasted. Pointing errors, movement kinematics, and joint-angle coordination were analyzed. The movements were performed at a slow speed (average peak tangential velocity of about 1.2 m/s) and at a fast speed (2.7 m/s). No visual feedback was allowed during the target presentation or the movement. Variability in the final position of the arm endpoint did not increase with speed in either the active or the passive condition. Variability in the final values of the arm-orientation angles determining the position of the forearm and of the upper arm in space was also speed invariant. This invariance occurred despite the fact that angular velocities increased by a factor of two for all the angles involved. The speed-invariant variability supports the hypothesis that there is an absence of ongoing corrections for movements to remembered targets: in the case of a slower movement, where there is more time for movement correction, the final arm endpoint variability did not decrease. In contrast to variability in the final endpoint position, the variability in the peak tangential acceleration increased significantly with movement speed.
This may imply that the nervous system adopts one of two strategies: either the final endpoint position is not encoded in terms of muscle torques or there is a special on-line mechanism that adjusts movement deceleration according to the muscle-torque variability at the initial stage of the movement. The final endpoint position was on average farther from the shoulder than the target. Constant radial-distance errors were speed dependent in both the active and the passive conditions. In the fast speed conditions, the radial distance overshoots of the targets increased. This increase in radial-distance overshoot with movement speed can be explained by the hypothesis that the final arm position is not predetermined in these experimental conditions, but is defined during the movement by a feedforward or feedback mechanism with an internal delay.

7.
The strategies used by the macaque monkey brain in controlling the performance of a reaching movement to a visual target have been studied by the quantitative autoradiographic 14C-DG method. Experiments on visually intact monkeys reaching to a visual target indicate that V1 and V2 convey visuomotor information to the cortex of the superior temporal and parieto-occipital sulci, which may encode the position of the moving forelimb, and to the cortex in the ventral part and lateral bank of the intraparietal sulcus, which may encode the location of the visual target. The involvement of the medial bank of the intraparietal sulcus in proprioceptive guidance of movement is also suggested on the basis of the parallel metabolic effects estimated in this region and in the forelimb representations of the primary somatosensory and motor cortices. The network including the inferior postarcuate skeletomotor and prearcuate oculomotor cortical fields and the caudal periprincipal area 46 may participate in sensory-to-motor and oculomotor-to-skeletomotor transformations, in parallel with the medial and lateral intraparietal cortices. Experiments on split-brain monkeys reaching to visual targets revealed that reaching is always controlled by the hemisphere contralateral to the moving forelimb, whether it is visually intact or 'blind'. Two supplementary mechanisms compensate for the 'blindness' of the hemisphere controlling the moving forelimb. First, the information about the location of the target is derived from head and eye movements and is sent to the 'blind' hemisphere via inferior parietal cortical areas, while the information about the forelimb position is derived from proprioceptive mechanisms and is sent via the somatosensory and superior parietal cortices.
Second, the cerebellar hemispheric extensions of vermian lobules V, VI and VIII, ipsilateral to the moving forelimb, combine visual and oculomotor information about the target position, relayed by the 'seeing' cerebral hemisphere, with sensorimotor information concerning cortical intended and peripheral actual movements of the forelimb, and then send this integrated information back to the motor cortex of the 'blind' hemisphere, thus enabling it to guide the contralateral forelimb to the target.

8.
The ability to make accurate reaching movements toward proprioceptively defined target locations was studied in seven normal subjects who were trained to reach to five different targets in a horizontal plane, with no vision of hand or target. The task consisted of moving a handle from a fixed origin to each target location, fast and accurately. Target locations were learned in training sessions that utilized acoustic cuing. Most movements were rapid, with a bell-shaped velocity profile. The error in target reproduction, which constituted the difference between the position consciously identified as the correct target location and the real target location, was calculated in each trial. This was compared with the error in preprogrammed reaching, which constituted the difference between the point in space where the initial fast movement toward the target ended and the target location. The absence of significant differences between these two error types indicated that the transformation from an internal representation of target location into a motor program for reaching to it did not introduce an additional reaching error. Learning of target locations was done only with the right hand, yet reaching was tested with both hands. This allowed a comparison between the subjects' ability to utilize a transformed spatial code (reaching with the untrained hand) and their ability to use a direct sensory-motor code (reaching with the trained hand). While transformation of the spatial code was found to reduce its accuracy, utilization of this code in motor programming again did not appear to introduce an additional error.

10.
Errors in pointing to actual and remembered targets presented in three-dimensional (3D) space in a dark room were studied under various conditions of visual feedback. During their movements, subjects either had no vision of their arms or of the target, vision of the target but not of their arms, vision of a light-emitting diode (LED) on their moving index fingertip but not of the target, or vision of an LED on their moving index fingertip and of the target. Errors depended critically upon feedback condition. 3D errors were largest for movements to remembered targets without visual feedback, diminished with vision of the moving fingertip, and diminished further with vision of the target and with vision of the finger and the target. Moreover, the different conditions differentially influenced the radial distance, azimuth, and elevation errors, indicating that subjects control motion along all three axes relatively independently. The pattern of errors suggests that the neural systems that mediate processing of actual versus remembered targets may have different capacities for integrating visual and proprioceptive information in order to program spatially directed arm movements.
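Splitting a 3-D pointing error into radial-distance, azimuth, and elevation components, as described above, amounts to comparing target and endpoint in spherical coordinates about a common origin. A sketch under the assumption that the origin (e.g. a point between the eyes or at the shoulder) is known; the function name and coordinate conventions are illustrative:

```python
import math

def spherical_error(target, endpoint, origin=(0.0, 0.0, 0.0)):
    """Signed radial-distance, azimuth, and elevation errors of a 3-D
    pointing endpoint relative to a target, both expressed about a
    common origin (an assumption of this sketch)."""
    def to_spherical(p):
        x, y, z = (p[i] - origin[i] for i in range(3))
        r = math.sqrt(x * x + y * y + z * z)
        return r, math.atan2(y, x), math.asin(z / r)  # r, azimuth, elevation
    rt, azt, elt = to_spherical(target)
    re, aze, ele = to_spherical(endpoint)
    return re - rt, aze - azt, ele - elt

# A 3 cm overshoot along the line of sight is a pure radial-distance error.
dr, daz, dele = spherical_error((30.0, 0.0, 0.0), (33.0, 0.0, 0.0))
```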

11.
A vertical asymmetry in memory-guided saccadic eye movements has been previously demonstrated in humans and in rhesus monkeys. In the upright orientation, saccades generally land several degrees above the target. The origin of this asymmetry has remained unknown. In this study, we investigated whether the asymmetry in memory saccades depends on body orientation in space. Animals therefore performed memory saccades in four different body orientations: upright, left-side-down (LSD), right-side-down (RSD), and supine. Data in all three rhesus monkeys confirm previous observations of a significant upward vertical asymmetry. Saccade errors made from LSD and RSD postures were partitioned into components made along the axis of gravity and along the vertical body axis. The up/down asymmetry persisted in body coordinates but not in gravity coordinates. However, this asymmetry was generally reduced in tilted positions. Therefore the upward bias seen in memory saccades is egocentric, although orientation in space might play a modulatory role.
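Partitioning errors into components along the gravity axis and along the body axis, as done for the tilted postures above, is a plane rotation of the error vector by the body tilt angle. A sketch in which the axis and sign conventions are assumptions, not taken from the study:

```python
import math

def to_body_axes(err_horiz_space, err_vert_space, body_tilt_deg):
    """Re-express a 2-D saccade error measured in space (gravity)
    coordinates along the body axes of a subject tilted by
    body_tilt_deg (0 = upright, 90 = side-down; assumed convention)."""
    t = math.radians(body_tilt_deg)
    along_body_horiz = err_horiz_space * math.cos(t) + err_vert_space * math.sin(t)
    along_body_vert = -err_horiz_space * math.sin(t) + err_vert_space * math.cos(t)
    return along_body_horiz, along_body_vert

# An upward-in-space error made while lying 90 deg on one side lies along
# the body's horizontal axis, not its vertical axis.
h, v = to_body_axes(0.0, 2.0, 90.0)
```

This is why the two decompositions can dissociate: an error fixed in body coordinates rotates with the subject in space coordinates, and vice versa.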

13.
The directional accuracy of pointing arm movements to remembered targets in conditions of increasing memory load was investigated using a modified version of Sternberg's context-recall memory-scanning task. Series of 2, 3 or 4 targets (chosen randomly from a set of 16 targets around a central starting point in 2D space) were presented sequentially, followed by a cue target randomly selected from the series excluding the last one. The subject had to move to the location of the next target in the series. Correct movements were those that ended closer to the instructed target than any other target in the series, while all other movements were considered serial order errors. Increasing memory load resulted in a large decrease in directional accuracy or, equivalently, in the directional information transmitted by the motor system. The constant directional error varied with target direction in a systematic fashion, reproducing previous results and suggesting the same systematic distortion of the representation of direction in different memory delay tasks. The constant directional error was not altered by increasing memory load, contradicting our hypothesis that it might reflect a cognitive strategy for better remembering spatial locations in conditions of increasing uncertainty. Increasing memory load resulted in a linear increase of mean response time and variable directional error and a non-linear increase in the percentage of serial order errors. The percentage of serial order errors was also smaller for the last target presented in the series (a recency effect). The difference between serial order and directional spatial accuracy is supported by neurophysiological and functional anatomical evidence of working memory subsystems in the prefrontal cortex. This work was supported by internal funding from Aeginition University Hospital.
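The scoring rule stated above — a movement counts as correct only if its endpoint is closer to the instructed target than to any other target in the series — is simple to express directly. Everything else in this sketch (2-D coordinates, names) is an assumption:

```python
import math

def classify_response(endpoint, instructed, other_targets):
    """Return 'correct' if the endpoint is closer to the instructed
    target than to every other target in the series, otherwise
    'serial order error' (the rule stated in the abstract)."""
    d_instructed = math.dist(endpoint, instructed)
    if all(d_instructed < math.dist(endpoint, t) for t in other_targets):
        return "correct"
    return "serial order error"

# A near miss of the instructed target still counts as correct; an
# endpoint nearer to another series target is a serial order error.
near_hit = classify_response((1.0, 0.1), (1.0, 0.0), [(0.0, 1.0), (-1.0, 0.0)])
swap = classify_response((0.0, 0.9), (1.0, 0.0), [(0.0, 1.0)])
```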

14.
It has been noted that manual aiming error and variability when pointing to remembered targets increase as a function of target eccentricity. In the present study we evaluated which of three hypotheses (target localization, motor, or movement duration) best explains this 'distance effect'. In experiment 1, older and younger participants aimed with their unseen hand at the remembered location of targets distributed between 129 and 309 mm from the starting base. Target presentation time was either 50 or 500 ms, and aiming movements could be initiated following either a 100- or a 10,000-ms recall delay. Participants either had no constraints concerning movement time or were asked to reach the near target in a longer movement time than the farther targets. The results revealed a significant distance effect when no time constraints were imposed but a significantly reversed distance effect when the instructions were to reach the near targets in a longer movement time than the far targets. The same results were obtained regardless of target presentation time, recall delay, or age of the participants. These results supported a movement duration interpretation of the distance effect. In experiment 2, a distance effect was replicated when pointing with one's unseen hand toward a remembered target but did not take place when pointing to visible targets. Taken together these results suggest that prolonged movement execution interferes with the stored egocentric target representation.

15.
We previously reported that Parkinson's disease patients could point with their eyes closed as accurately as normal subjects to targets in three-dimensional space that were initially presented with full vision. We have now further restricted visual information in order to more closely examine the individual and combined influences of visual information, proprioceptive feedback, and spatial working memory on the accuracy of Parkinson's disease patients. All trials were performed in the dark. A robot arm presented a target illuminated by a light-emitting diode at one of five randomly selected points composing a pyramidal array. Subjects attempted to "touch" the target location with their right finger in one smooth movement in three conditions: "dark" (no illumination of the arm or target during the movement; movement was to the remembered target location after the robot arm retracted), "finger" (a light-emitting diode on the pointing fingertip was visible during the movement but the target was extinguished; again, movement was to the remembered target location), and "target" (the target light-emitting diode remained in place and visible throughout the trial but there was no vision of the arm). In the finger condition, there is no need to use visual-proprioceptive integration, since the continuously visualized fingertip position can be compared to the remembered location of the visual target. In the target condition, the subject must integrate the current visible target with arm proprioception, while in the dark condition, the subject must integrate current proprioception from the arm with the remembered visual target. Parkinson's disease patients were significantly less accurate than controls in both the dark and target conditions, but as accurate as controls in the finger condition.
Parkinson's disease patients, therefore, were selectively impaired in those conditions (target and dark) which required integration of visual and proprioceptive information in order to achieve accurate movements. In contrast, the patients' normal accuracy in the finger condition indicates that they had no substantial deficits in the relevant spatial working memory. Final arm configurations were significantly different in the two subject groups in all three conditions, even in the finger condition where mean movement endpoints were not significantly different. Variability of the movement endpoints was uniformly increased in Parkinson's disease patients across all three conditions. The current study supports an important role for the basal ganglia in the integration of proprioceptive signals with concurrent or remembered visual information that is needed to guide movements. This role can explain much of the patients' dependence on visual information for accuracy in targeted movements. It also underlines what may be an essential contribution of the basal ganglia to movement: the integration of afferent information that is initially processed through multiple, discrete modality-specific pathways, but which must be combined into a unified and continuously updated spatial model for effective, accurate movement.

16.
The dorsal and ventral streams model of action and perception suggests that reaching to grasp a tool for use involves integrated operation of the two streams. Few attempts have been made to test the limits of this integration in normal subjects. Twenty normal subjects reached for tools or geometric objects which were rotated rapidly during reaching or immediately beforehand. In a first experiment it was shown that reaching for an inverted tool was slower than reaching for objects which required hand inversion due to proximity to a physical barrier. Also, for the right hand, tool rotation during reaching provoked a higher incidence of hand rotation in the wrong direction than did rotation of objects. In a second similar experiment, hand inversion when grasping objects was induced by the need to plan a future action rather than by proximity of a physical barrier. Despite this balancing of the complexity of postural planning for tools and objects, hand rotation errors for both hands were more common for tools than objects. This is consistent with the two-stream model in suggesting a process that produced rapid online tracking of stimulus rotation, which had to be overcome by a slower process that dictated grasping in accordance with knowledge of tool use.

17.
Control of the spatial orientation of the hand is an important component of reaching and grasping movements. We studied the contribution of vision and proprioception to the perception and control of hand orientation in orientation-matching and letter-posting tasks. In the orientation-matching task, subjects aligned a "match" handle to a "target" handle that was fixed in different orientations. In letter-posting task 1, subjects simultaneously reached and rotated the right hand to insert a match handle into a target slot fixed in the same orientations. Similar sensory conditions produced different error patterns in the two tasks. Furthermore, without vision of the hand, final hand-orientation errors were smaller overall in letter-posting task 1 than in the orientation-matching task. In letter-posting task 2, subjects first aligned their hand to the angle of the target and then reached to it with the instruction not to change their initial hand orientation. Nevertheless, hand orientation changed during reaching in a way that reduced the initial orientation errors. This did not occur when there was no explicitly defined target toward which the subjects reached (letter-posting task 3). The reduction in hand-orientation errors during reach, even when told not to change it, suggests the engagement of an automatic error correction mechanism for hand orientation during reaching movements toward stationary targets. The correction mechanism was engaged when the task involved transitive actions directed at the target object. The on-line adjustments can occur without vision of the hand and even when target orientation is defined only by proprioceptive inputs.

18.
Gaze, the direction of the visual axis in space, is the sum of the eye position relative to the head (E) plus head position relative to space (H). The old explanation of how a rapid orienting gaze shift is controlled, which we call the oculocentric motor strategy, assumes that 1) a saccadic eye movement is programmed with an amplitude equal to the target's offset angle, 2) this eye movement is programmed without reference to whether a head movement is planned, 3) if the head turns simultaneously the saccade is reduced in size by an amount equal to the head's contribution, and 4) the saccade is attenuated by the vestibuloocular reflex (VOR) slow phase. Humans have an oculomotor range (OMR) of about +/- 55 degrees. The use of the oculocentric motor strategy to acquire targets lying beyond the OMR requires programming saccades that cannot be made physically. We have studied in normal human subjects rapid horizontal gaze shifts to visible and remembered targets situated within and beyond the OMR at offsets ranging from 30 to 160 degrees. Heads were attached to an apparatus that permitted short unexpected perturbations of the head trajectory. The acceleration and deceleration phases of the head perturbation could be timed to occur at different points in the eye movement. Single-step rapid gaze shifts of all sizes up to at least 160 degrees (the limit studied) could be accomplished with the classic single eye saccade and an accompanying saccadelike head movement. In gaze shifts less than approximately 45 degrees, when head motion was prevented totally by the brake, the eye attained the target. For larger target eccentricities the gaze shift was interrupted by the brake and the average eye saccade amplitude was approximately 45 degrees, well short of the OMR. Thus saccadic eye movement amplitude was neurally, not mechanically, limited.
When the head's motion was not perturbed by the brake, the eye saccade amplitude was a function of head velocity: for a given target offset, the faster the head the smaller the saccade. For gaze shifts to targets beyond the OMR and when head velocity was low, the eye frequently attained the 45 degrees position limit and remained there, immobile, until gaze attained the target.(ABSTRACT TRUNCATED AT 400 WORDS)
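The bookkeeping of the oculocentric motor strategy — gaze = eye-in-head + head-in-space, the saccade programmed at the full target offset, reduced by the head's contribution, and capped at the roughly 45-degree neural limit reported above — can be sketched as simple arithmetic. The function and its arguments are illustrative, not from the paper:

```python
def plan_gaze_shift(target_offset_deg, head_contribution_deg, eye_limit_deg=45.0):
    """Eye saccade amplitude under the oculocentric strategy: the full
    target offset minus the head's contribution, clipped at the ~45 deg
    neural limit (short of the ~55 deg mechanical oculomotor range)."""
    eye = target_offset_deg - head_contribution_deg
    eye = max(-eye_limit_deg, min(eye, eye_limit_deg))
    gaze = eye + head_contribution_deg  # G = E + H
    return eye, gaze

# A 30 deg gaze shift with a 10 deg head contribution needs a 20 deg saccade.
eye_small, gaze_small = plan_gaze_shift(30.0, 10.0)
# A 160 deg shift with a slow head (20 deg so far) pins the eye at 45 deg;
# the rest of the gaze shift must come from continued head motion.
eye_large, gaze_large = plan_gaze_shift(160.0, 20.0)
```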

19.
Visuospatial information regarding obstacles and other environmental constraints on limb movement is essential for the successful planning and execution of stepping movements. Visuospatial control strategies used during gait and volitional stepping have been studied extensively; however, the visuospatial strategies that are used when stepping rapidly to recover balance in response to sudden postural perturbation are not well established. To study this, rapid forward stepping reactions were evoked by unpredictable support-surface acceleration while subjects stood amid multiple obstacles that moved intermittently and unpredictably prior to perturbation onset (PO). To prevent predictive control, subjects performed only one trial (their very first exposure to the perturbation and environment). Visual scanning of the obstacles and surroundings occurred prior to PO in all subjects; however, gaze was never redirected at the obstacles, step foot or landing site in response to the perturbation. Surprisingly, the point of gaze at the time of foot contact was consistently and substantially anterior to the step-landing site. Despite the apparent absence of 'online' visual feedback related to the foot movement, the compensatory step avoided obstacle contact in 10 of 12 young adults and 9 of 10 older subjects. The results indicate that the balance-recovery reaction was typically modulated on the basis of visuospatial environmental information that was acquired and continually updated prior to perturbation, as opposed to a strategy based on 'online' visual control. The capacity to do this was not adversely affected by aging, despite a tendency for older subjects to look downward less frequently than young adults.

20.
Summary: The spatial and temporal organization of unrestricted limb movements directed to small visual targets was examined in two separate experiments. Videotape records of the subjects' performance allowed us to analyze the trajectory of the limb movement through 3-dimensional space. Horizontal eye movements during reaching were measured by infrared corneal reflection. In both experiments, the trajectories of the different reaches approximated straight-line paths, and the velocity profile revealed an initial rapid acceleration followed by a prolonged period of deceleration. In Experiment 1, in which the target light was presented to the right or left of a central fixation point at either 10° or 20° eccentricity, the most consistent differences were observed between reaches directed across the body axis to targets presented in the contralateral visual field and reaches directed at ipsilateral targets. Ipsilateral reaches were initiated more quickly, were completed more rapidly, and were more accurate than contralateral reaches. While these findings suggest that hemispherically organized neural systems are involved in the programming of visually guided limb movements, it was not clear whether the inefficiency of the contralateral movements was due to reaching across the body axis or reaching into the visual hemifield contralateral to the hand being used. Therefore, in Experiment 2, the position of the fixation point was varied such that the effects of visual field and body axis could be disembedded. In this experiment, the kinematics of the reaching movement were shown to be independent of the point of visual fixation and varied only as a function of the laterality of the target position relative to the body axis. This finding suggests that the kinematics of a reaching movement are determined by differences in the processing of neural systems associated with motor output, after the target has been localized in space.
The effect of target laterality on response latency and accuracy, however, could not be attributed to a single frame of reference, or to a simple additive effect of both. These findings illustrate the complex integration of visual spatial information which must take place in order to reach accurately to goal objects in extrapersonal space. Comparison of ocular and manual performance revealed a close relationship between movement latency for both motor systems. Thus, rightward-going eye movements to a given target were initiated more quickly when accompanied by reaches with the right hand than when they were accompanied by reaches with the left hand. The finding that the latency of eye movements in one direction was influenced by which hand was being used to reach suggests that reaching toward a target under visual control involves a common integration of both eye and hand movements. This study was supported by grant no. MA-7269 from the Medical Research Council of Canada to M. A. Goodale.
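Two of the kinematic observations above — near-straight paths, and a brief acceleration followed by prolonged deceleration — correspond to simple trajectory metrics: a straightness index (path length over straight-line distance) and the fraction of movement time before peak speed. A sketch on an assumed uniformly sampled trajectory; the metric names are illustrative:

```python
import math

def path_metrics(samples, dt):
    """Straightness index (1.0 = perfectly straight) and fraction of the
    movement spent accelerating, from uniformly sampled 3-D positions.
    An asymmetric profile with prolonged deceleration gives a fraction
    well below 0.5."""
    steps = [math.dist(samples[i], samples[i + 1]) for i in range(len(samples) - 1)]
    straightness = sum(steps) / math.dist(samples[0], samples[-1])
    speeds = [s / dt for s in steps]
    accel_fraction = speeds.index(max(speeds)) / (len(speeds) - 1)
    return straightness, accel_fraction

# A straight reach whose speed peaks a quarter of the way into the movement.
samples = [(x, 0.0, 0.0) for x in (0.0, 1.0, 3.0, 4.0, 4.5, 4.75)]
straightness, accel_fraction = path_metrics(samples, dt=0.1)
```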
