Similar articles
20 similar articles retrieved (search time: 62 ms)
1.
2.
The abilities of human subjects to perform reach and grasp movements to remembered locations/orientations of a cylindrical object were studied under four conditions: (1) visual presentation of the object — reach with vision allowed; (2) visual presentation — reach while blindfolded; (3) kinesthetic presentation of the object — reach while blindfolded; and (4) kinesthetic presentation — reach with vision. The results showed that subjects were very accurate in locating the object in the purely kinesthetic condition and that directional errors were low in all four conditions, but predictable errors in reach distance occurred in conditions 1, 2, and 4. The pattern of these distance errors was similar to that identified in previous research using a pointing task to a small target (i.e., overshoots of close targets, undershoots of far targets). The observation that the pattern of distance errors in condition 4 was similar to that of conditions 1 and 2 suggests that subjects transform kinesthetically defined hand locations into a visual coordinate system when vision is available during upper limb motion to a remembered kinesthetic target. The differences in orientation of the upper limb between target and reach positions in condition 3 were similar in magnitude to the errors associated with kinesthetic perceptions of arm and hand orientations in three-dimensional space reported in previous studies. However, fingertip location was specified with greater accuracy than the orientation of upper limb segments. This was apparently accomplished by compensation of variations in shoulder (arm) angles with oppositely directed variations in elbow joint angles. Subjects were also able to transform visually perceived object orientation into an appropriate hand orientation for grasp, as indicated by the relation between hand roll angle and object orientation (elevation angle). The implications of these results for control of upper limb motion to external targets are discussed.

3.
The goal of this study was to determine whether the sensory nature of a target influences the roles of vision and proprioception in the planning of movement distance. Two groups of subjects made rapid elbow extension movements, either toward a visual target or toward the index fingertip of the unseen opposite hand. Visual feedback of the reaching index fingertip was available only before movement onset. Using a virtual reality display, we randomly introduced a discrepancy between the actual and virtual (cursor) fingertip location. When subjects reached toward the visual target, movement distance varied with changes in visual information about initial hand position. For the proprioceptive target, movement distance varied mostly with changes in proprioceptive information about initial position. The effect of target modality was already present at the time of peak acceleration, indicating that this effect involves feedforward processes. Our results suggest that the relative contributions of vision and proprioception to motor planning can change, depending on the modality in which task-relevant information is represented.
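One way to read the key manipulation is as a weighting problem: how strongly does planned distance track the visual (cursor) estimate of the starting hand position versus the felt one? The sketch below is a rough illustration of that idea, not the authors' analysis; the cursor offsets, movement distances and the regression-slope measure of "visual weight" are all invented assumptions.

```python
import numpy as np

# Hypothetical per-trial data: imposed cursor offset (cm) and measured reach extent (cm).
cursor_offset_cm = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, -2.0, -1.0, 0.0, 1.0, 2.0])
movement_distance_cm = np.array([16.8, 16.1, 15.2, 14.6, 13.9,
                                 16.6, 15.8, 15.1, 14.4, 14.1])

# If planning relied only on vision, shifting the cursor by +1 cm toward the target
# should shorten the planned distance by ~1 cm (slope near -1); if it relied only on
# proprioception, the slope should be near 0.  The fitted slope is a crude visual weight.
slope, intercept = np.polyfit(cursor_offset_cm, movement_distance_cm, 1)
print(f"estimated visual weight for initial hand position: {-slope:.2f}")
```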

4.
We previously reported that Parkinson's disease patients could point with their eyes closed as accurately as normal subjects to targets in three-dimensional space that were initially presented with full vision. We have now further restricted visual information in order to examine more closely the individual and combined influences of visual information, proprioceptive feedback, and spatial working memory on the accuracy of Parkinson's disease patients. All trials were performed in the dark. A robot arm presented a target illuminated by a light-emitting diode at one of five randomly selected points composing a pyramidal array. Subjects attempted to "touch" the target location with their right finger in one smooth movement in three conditions: (1) dark: no illumination of the arm or target during the movement, so that the movement was made to the remembered target location after the robot arm retracted; (2) finger: a light-emitting diode on the pointing fingertip was visible during the movement but the target was extinguished, so that the movement was again to the remembered target location; and (3) target: the target light-emitting diode remained in place and visible throughout the trial, but there was no vision of the arm. In the finger condition, there is no need to use visual-proprioceptive integration, since the continuously visualized fingertip position can be compared to the remembered location of the visual target. In the target condition, the subject must integrate the current visible target with arm proprioception, while in the dark condition, the subject must integrate current proprioception from the arm with the remembered visual target. Parkinson's disease patients were significantly less accurate than controls in both the dark and target conditions, but as accurate as controls in the finger condition. Parkinson's disease patients, therefore, were selectively impaired in those conditions (target and dark) which required integration of visual and proprioceptive information in order to achieve accurate movements. In contrast, the patients' normal accuracy in the finger condition indicates that they had no substantial deficits in the relevant spatial working memory. Final arm configurations were significantly different in the two subject groups in all three conditions, even in the finger condition where mean movement endpoints were not significantly different. Variability of the movement endpoints was uniformly increased in Parkinson's disease patients across all three conditions. The current study supports an important role for the basal ganglia in the integration of proprioceptive signals with concurrent or remembered visual information that is needed to guide movements. This role can explain much of the patients' dependence on visual information for accuracy in targeted movements. It also underlines what may be an essential contribution of the basal ganglia to movement: the integration of afferent information that is initially processed through multiple, discrete, modality-specific pathways but which must be combined into a unified and continuously updated spatial model for effective, accurate movement.

5.
The role of the basal ganglia in the coordination of different body segments and utilization of motor synergies was investigated by analyzing reaching movements to remembered three-dimensional (3D) targets in patients with Parkinson's disease (PD). Arm movements were produced alone or in combination with a forward bending of the trunk, with or without visual feedback. Movements in PD patients were more temporally segmented, as evidenced by irregular changes in tangential velocity profiles. In addition, the relative timing of the onsets and offsets of fingertip and trunk motions was substantially different in PD patients than in control subjects. While the control subjects synchronized both onsets and offsets, the PD patients had large mean intervals between the onsets and offsets of the fingertip and trunk motions. Moreover, PD patients showed substantially larger trial-to-trial variability in these intervals. The degree of synchronization in PD patients gradually increased during the movement under the influence of visual feedback. The mean and variability of the intersegmental intervals decreased as the fingertip approached the target. This improvement in timing occurred even though the separate variability in the timing of arm and trunk motions was not reduced by vision. In combined movements, even without vision, the PD patients were able to achieve normal accuracy, suggesting they were able to use the same movement synergies as normals to control the multiple degrees of freedom involved in the movements and to compensate for the added trunk movement. However, they were unable to recruit these synergies in the stereotyped manner characteristic of healthy subjects. These results suggest that the basal ganglia are involved in the temporal coordination of movement of different body segments and that related timing abnormalities may be partly compensated by vision. Abnormal intersegmental timing may be a highly sensitive indicator of a deficient ability to assemble complex movements in patients with basal-ganglia dysfunction. This abnormality may be apparent even when the overall movement goal of reaching a target is preserved and normal movement synergies appear to be largely intact.

6.
We examined the role of gaze in a task where subjects had to reproduce the position of a remembered visual target with the tip of the index finger, referred to as pointing. Subjects were tested in 3 visual feedback conditions: complete darkness (dark), complete darkness with visual feedback of the finger position (finger), and with vision of a well-defined environment and feedback of the finger position (frame). Pointing accuracy increases with feedback about the finger or visual environment. In the finger and frame conditions, the 95% confidence regions of the variable errors have an ellipsoidal distribution with the main axis oriented toward the subjects' head. During the 1-s period when the target is visible, gaze is almost on target. However, gaze drifts away from the target relative to the subject in the delay period after target disappearance. In the finger and frame conditions, gaze returns toward the remembered target during pointing. In all 3 feedback conditions, the correlations between the variable errors of gaze and pointing position increase during the delay period, reaching highly significant values at the time of pointing. Our results demonstrate that gaze affects the accuracy of pointing. We conclude that the covariance between gaze and pointing position reflects a common drive for gaze and arm movements and an effect of gaze on pointing accuracy at the time of pointing. Previous studies interpreted the orientation of variable errors as indicative of a frame of reference used for pointing. Our results suggest that the orientation of the error ellipses toward the head is at least partly the result of gaze drift in the delay period.
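The "ellipsoidal distribution" of variable errors can be made concrete with a small computation: eigen-decompose the covariance of the endpoint scatter to obtain the axes of the 95% confidence ellipsoid. The sketch below uses simulated endpoints and the standard chi-square scaling; it only illustrates the measure and does not reproduce the study's numbers.

```python
import numpy as np

# Illustrative only: simulated 3D pointing endpoints (cm) with anisotropic scatter.
rng = np.random.default_rng(0)
endpoints = rng.multivariate_normal(mean=[0.0, 0.0, 0.0],
                                    cov=[[4.0, 1.0, 2.0],
                                         [1.0, 2.0, 0.5],
                                         [2.0, 0.5, 6.0]],
                                    size=200)

cov = np.cov(endpoints, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
chi2_95_3dof = 7.815                       # 95% quantile of chi-square with 3 dof
semi_axes_cm = np.sqrt(chi2_95_3dof * eigvals)
main_axis = eigvecs[:, -1]                 # direction of the largest variable errors
print("95% ellipsoid semi-axes (cm):", np.round(semi_axes_cm, 2))
print("main axis direction (unit vector):", np.round(main_axis, 2))
```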

7.
1. The accuracy with which subjects pointed to targets in extrapersonal space was assessed under a variety of experimental conditions. 2. When subjects pointed in the dark to remembered target locations, they made substantial errors. Errors in distance, measured from the shoulder to the target, were sometimes as much as 15 cm. Errors in direction, also measured from the shoulder, were smaller. 3. An analysis of the information transmitted by the location of the subject's finger about the location of the target showed that the information about the target's distance was consistently lower than the information about its direction. 4. The errors in distance persisted when subjects had their arm in view and pointed in the light to remembered target locations. 5. The errors were much smaller when subjects used a pointer to point to the target or when they were asked to reproduce the position of their finger after it had been passively moved to the target. 6. From these findings we conclude that subjects have a reasonably accurate visual representation of target location and are able to effectively use kinesthetically derived information about target location. We therefore suggest that errors in pointing result from errors in the sensorimotor transformation from the visual representation of the target location to the kinematic representation of the arm movement.
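The "information transmitted" measure in point 3 is a mutual-information estimate between target and response coordinates. A rough, histogram-based version is sketched below on made-up data whose distance errors are larger than the direction errors, as in the abstract; the bin count and noise levels are arbitrary assumptions, not values from the study.

```python
import numpy as np

def transmitted_information(target, response, bins=5):
    """Rough estimate (bits) of the information the response conveys about the target,
    computed from a joint histogram; a simplified stand-in for the analysis described."""
    joint, _, _ = np.histogram2d(target, response, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Invented example: finger distance is a noisier readout of target distance than
# finger direction is of target direction, so it transmits less information.
rng = np.random.default_rng(1)
target_dist = rng.uniform(20, 50, 500)                 # cm from the shoulder
target_dir = rng.uniform(-30, 30, 500)                 # deg from straight ahead
finger_dist = target_dist + rng.normal(0, 8, 500)      # large distance errors
finger_dir = target_dir + rng.normal(0, 3, 500)        # smaller direction errors
print("I(distance):", round(transmitted_information(target_dist, finger_dist), 2), "bits")
print("I(direction):", round(transmitted_information(target_dir, finger_dir), 2), "bits")
```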

8.
How visual feedback contributes to the on-line control of fast reaching movements is still a matter of considerable debate. Whether feedback is used continuously throughout movements or only in the "slow" end-phases of movements remains an open question. In order to resolve this question, we applied a perturbation technique to measure the influence of visual feedback from the hand at different times during reaching movements. Subjects reached to touch targets in a virtual 3D space, with visual feedback provided by a small virtual sphere that moved with a subject's fingertip. Small random perturbations were applied to the position of the virtual fingertip at two different points in the movement, either at 25% or 50% of the total movement extent. Despite the fact that subjects were unaware of the perturbations, their hand trajectories showed smooth and accurate corrections. Detectable responses were observed within an average of 160 ms after perturbations, and as early as 60% of the distance to the target. Response latencies were constant across different perturbation times and movement speed conditions, suggesting that a fixed sensori-motor delay is the limiting factor. The results provide direct evidence that the human brain uses visual feedback from the hand in a continuous fashion to guide fast reaching movements throughout their extent.
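One standard way to obtain such response latencies (a plausible reconstruction, not necessarily the procedure used in this study) is to compare mean trajectories on perturbed and unperturbed trials and take the first time their difference reliably exceeds baseline noise. A toy version on invented data:

```python
import numpy as np

def correction_latency(perturbed, unperturbed, t, perturb_time, n_sd=3.0):
    """Estimate when corrections begin: perturbed/unperturbed are (trials, samples)
    arrays of lateral fingertip position, t is the common time axis in seconds."""
    diff = perturbed.mean(axis=0) - unperturbed.mean(axis=0)
    baseline = diff[t < perturb_time]
    threshold = np.abs(baseline).mean() + n_sd * baseline.std()
    late = np.where((t >= perturb_time) & (np.abs(diff) > threshold))[0]
    return t[late[0]] - perturb_time if late.size else None

# Invented data: a correction emerging roughly 160 ms after a perturbation at t = 0.2 s.
t = np.arange(0.0, 0.6, 0.005)
rng = np.random.default_rng(2)
unperturbed = rng.normal(0.0, 0.3, (20, t.size))
perturbed = rng.normal(0.0, 0.3, (20, t.size))
perturbed[:, t > 0.36] += 3.0
print("estimated correction latency (s):",
      correction_latency(perturbed, unperturbed, t, perturb_time=0.2))
```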

9.
We have assessed the contribution made by retinal and extraretinal signals when subjects used their hand to track targets moving at constant velocities. Comparisons were made between responses produced under the following conditions: (1) with full vision of the hand and unrestricted movement of the eyes, (2) without vision of the hand or (3) while visually fixating a stationary LED. Target velocity was varied in a pseudo-random order across trials. In each condition response latency decreased as target velocity was increased. There was a 24 ms increase in latency when vision of the hand was removed or eye movements were restricted. Under normal conditions, subjects were able to accurately catch up to and match target velocity with their hand. When vision of the hand was removed, subjects lagged behind the target but were able to match target velocity. This deficit was eliminated when vision of the hand was made available for the beginning of the response. When subjects were required to visually fixate they could catch up to the target with their hand, but subsequently produced a steady state hand velocity that was greater than target velocity. When the LED was positioned such that the target started in the peripheral visual field, the overestimation of target velocity was evident from the beginning of the response: subjects produced initial accelerations with their hand that were significantly greater than in normal conditions. Finally, normal responses were produced when subjects were required to visually pursue a second target that moved at the same speed and in the same direction as the main target. When the velocities of these two targets differed, subjects produced hand movements that were initially more appropriate for the target being visually pursued. Together these results suggest that vision of the hand and how it is initially positioned relative to the target is necessary to catch up to the target; whereas the extraretinal signal concerned with eye velocity is required to produce an accurate steady state hand velocity.

10.
Binocular vision provides important advantages for controlling reach-to-grasp movements. We examined the possible source(s) of these advantages by comparing prehension proficiency under four different binocular viewing conditions, created by randomly placing a neutral lens (control), an eight dioptre prism (Base In or Base Out) or a low-power (2.00–3.75 dioptre) Plus lens over the eye opposite the moving limb. The Base In versus Base Out prisms were intended to selectively alter vergence-specified distance (VSD) information, such that the targets appeared beyond or closer than their actual physical position, respectively. The Plus lens was individually tailored to reduce each subject's disparity sensitivity (to 400–800 arc s), while minimizing effects on distance estimation. In pre-testing, subjects pointed (without visual feedback) to mid-line targets at different distances, and produced the systematic directional errors expected of uncorrected movements programmed under each of the perturbed conditions. For the prehension tasks, subjects reached and precision grasped (with visual feedback available) cylindrical objects (two sizes and three locations), either following a 3 s preview in which to plan their actions or immediately after the object became visible. Viewing condition markedly affected performance, but the planning time allowed did not. Participants made the most errors suggesting premature collision with the object (shortest 'braking' times after peak deceleration; fastest velocity and widest grip at initial contact) under Base In prism viewing, consistent with over-reaching movements programmed to transport the hand beyond the actual target due to its 'further' VSD. Conversely, they produced the longest terminal reaches and grip closure times, with multiple corrections just before and after object contact, under the Plus lens (reduced disparity) condition. Base Out prism performance was intermediate between these two, with significant increases in additional forward movements during the transport end-phase, indicative of initial under-reaching in response to the target's 'nearer' VSD. Our findings suggest dissociations between the role of vergence and binocular disparity in natural prehension movements, with vergence contributing mainly to reach planning and high-grade disparity cues providing particular advantages for grasp-point selection during grip programming and application.
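The direction of these effects follows from the geometry of vergence-specified distance. A prism of P dioptres deviates light by roughly atan(P/100); base-out adds to, and base-in subtracts from, the convergence needed to fixate the target, so the distance specified by vergence shrinks or grows accordingly. The sketch below assumes a 63-mm interpupillary distance and a 40-cm target, neither of which is taken from the study.

```python
import numpy as np

def vsd(target_distance_m, prism_dioptres=0.0, base='none', ipd_m=0.063):
    """Vergence-specified distance for a symmetrically placed target, with an optional
    prism over one eye.  A rough geometric sketch, not the authors' model."""
    vergence = 2.0 * np.arctan(ipd_m / (2.0 * target_distance_m))   # required vergence (rad)
    deviation = np.arctan(prism_dioptres / 100.0)                   # prism deviation (rad)
    if base == 'out':
        vergence += deviation     # more convergence demanded -> target specified nearer
    elif base == 'in':
        vergence -= deviation     # less convergence demanded -> target specified farther
    return ipd_m / (2.0 * np.tan(vergence / 2.0))

for base in ('none', 'in', 'out'):
    print(base, round(vsd(0.40, 8.0 if base != 'none' else 0.0, base), 3), "m")
```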

11.
We have investigated pointing movements toward remembered targets after an intervening self-generated body movement. We tested to what extent visual information about the environment or finger position is used in updating target position relative to the body after a step and whether gaze plays a role in the accuracy of the pointing movement. Subjects were tested in three visual conditions: complete darkness (DARK), complete darkness with visual feedback of the finger (FINGER), and with vision of a well-defined environment and with feedback of the finger (FRAME). Pointing accuracy was rather poor in the FINGER and DARK conditions, which did not provide vision of the environment. Constant pointing errors were mainly in the direction of the step and ranged from about 10 to 20 cm. Differences between binocular fixation and target position were often related to the step size and direction. At the beginning of the trial, when the target was visible, fixation was on target. After target extinction, fixation moved away from the target relative to the subject. The variability in the pointing positions appeared to be related to the variable errors in fixation, and the co-variance increased during the delay period after the step, reaching a highly significant value at the time of pointing. The significant co-variance between fixation position and pointing is not the result of a mutual dependence on the step, since we corrected for any direct contributions of the step in both signals. We conclude that the co-variance between fixation and pointing position reflects 1) a common command signal for gaze and arm movements and 2) an effect of fixation on pointing accuracy at the time of pointing.
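The correction for "direct contributions of the step" amounts to a partial correlation: regress the step out of both signals and correlate the residuals. A minimal illustration with invented numbers (step sizes, error magnitudes and the shared-drive term are all assumptions):

```python
import numpy as np

# If the correlation between fixation and pointing errors survives after removing the
# linear contribution of the step from both signals, it cannot be explained by a shared
# dependence on the step.
rng = np.random.default_rng(3)
step = rng.normal(0.30, 0.05, 100)                    # step size (m)
common_drive = rng.normal(0.0, 2.0, 100)              # hypothetical shared gaze/arm signal (cm)
fixation_err = 10.0 * step + common_drive + rng.normal(0, 1.0, 100)
pointing_err = 15.0 * step + common_drive + rng.normal(0, 1.0, 100)

def residualise(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

raw_r = np.corrcoef(fixation_err, pointing_err)[0, 1]
partial_r = np.corrcoef(residualise(fixation_err, step),
                        residualise(pointing_err, step))[0, 1]
print(f"raw correlation: {raw_r:.2f}, after removing the step contribution: {partial_r:.2f}")
```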

12.
Ren et al. (J Neurophysiol 96:1464–1477, 2006) found that saccades to visual targets became less accurate when somatosensory information about hand location was added, suggesting that saccades rely mainly on vision. We conducted two kinematic experiments to examine whether or not reaching movements would also show such strong reliance on vision. In Experiment 1, subjects used their dominant right hand to perform reaches, with or without a delay, to an external visual target or to their own left fingertip positioned either by the experimenter or by the participant. Unlike saccades, reaches became more accurate and precise when proprioceptive information was available. In Experiment 2, subjects reached toward external or bodily targets with differing amounts of visual information. Proprioception improved performance only when vision was limited. These results indicate that the reaching system has a better internal model for limb positions than does the saccade system.

13.
Subjects who are in an enclosed chamber rotating at constant velocity feel physically stationary but make errors when pointing to targets. Reaching paths and endpoints are deviated in the direction of the transient inertial Coriolis forces generated by their arm movements. By contrast, reaching movements made during natural, voluntary torso rotation seem to be accurate, and subjects are unaware of the Coriolis forces generated by their movements. This pattern suggests that the motor plan for reaching movements uses a representation of body motion to prepare compensations for impending self-generated accelerative loads on the arm. If so, stationary subjects who are experiencing illusory self-rotation should make reaching errors when pointing to a target. These errors should be in the direction opposite the Coriolis accelerations their arm movements would generate if they were actually rotating. To determine whether such compensations exist, we had subjects in four experiments make visually open-loop reaches to targets while they were experiencing compelling illusory self-rotation and displacement induced by rotation of a complex, natural visual scene. The paths and endpoints of their initial reaching movements were significantly displaced leftward during counterclockwise illusory rotary displacement and rightward during clockwise illusory self-displacement. Subjects reached in a curvilinear path to the wrong place. These reaching errors were opposite in direction to the Coriolis forces that would have been generated by their arm movements during actual torso rotation. The magnitude of path curvature and endpoint errors increased as the speed of illusory self-rotation increased. In successive reaches, movement paths became straighter and endpoints more accurate despite the absence of visual error feedback or tactile feedback about target location. When subjects were again presented a stationary scene, their initial reaches were indistinguishable from pre-exposure baseline, indicating a total absence of aftereffects. These experiments demonstrate that the nervous system automatically compensates in a context-specific fashion for the Coriolis forces associated with reaching movements.
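The Coriolis force at issue has a simple closed form: for a limb of effective mass m moving with velocity v in a frame rotating at angular velocity ω, F = -2 m (ω × v). The sketch below uses assumed values for rotation rate, arm mass and hand speed (none taken from the experiments) just to show the direction of the deflection.

```python
import numpy as np

def coriolis_force(mass_kg, omega_rad_s, velocity_m_s):
    """Coriolis force (N) on a point mass in a rotating frame: F = -2 m (omega x v)."""
    return -2.0 * mass_kg * np.cross(omega_rad_s, velocity_m_s)

omega = np.array([0.0, 0.0, 1.05])        # ~10 rpm counterclockwise about the vertical (z)
v_hand = np.array([1.0, 0.0, 0.0])        # 1 m/s forward reach along x
print("Coriolis force on a 2.2-kg arm (N):", coriolis_force(2.2, omega, v_hand))
# With x forward, y to the left and z up, the force points in -y (to the right); a
# compensation prepared for this expected force during illusory counterclockwise
# self-rotation would deviate the reach leftward, as reported.
```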

14.
1. The control of pointing arm movements in the absence of visual guidance was investigated in unpracticed human subjects. The right arm grasped a lever which restricted the movement of the right index fingertip to a horizontal arc, centered between the axes of eye rotation. A horizontal panel directly above the arm prevented visual feedback of the movement. Visual stimuli were presented in discrete positions just above the panel and fingertip. A flag provided visual feedback on fingertip position before each pointing movement (Exp. A and B), or before a movement sequence (Exp. C). 2. When subjects pointed from straight ahead to eccentric stimulus positions (Exp. A), systematic and variable pointing errors were observed; both kinds of errors increased with stimulus eccentricity. When subjects pointed from 30 deg left to stimuli located further right (Exp. B), errors increased with stimulus position to the right. Taken together, these findings suggest that pointing accuracy depends not primarily on stimulus position, but rather on required movement amplitude. 3. When subjects performed sequences of unidirectional movements (Exp. C), systematic and variable errors increased within the sequence. A quantitative analysis revealed that this increase can be best described as an accumulation of successive pointing errors. 4. We conclude that both findings, error increase with amplitude and accumulation of successive errors, when considered together strongly support the hypothesis that amplitude, rather than final position, is the controlled variable of the investigated movements.

15.
Do people perform a given motor task differently when it is easy than when it is difficult? To find out, we asked subjects to intercept moving virtual targets by tapping on them with their fingers. We examined how their behaviour depended on the required precision. Everything about the task was the same on all trials except the extent to which the fingertip and target had to overlap for the target to be considered hit. The target disappeared with a sound if it was hit and deflected away from the fingertip if it was missed. In separate sessions, the required precision was varied from being quite lenient about the required overlap to being very demanding. Requiring a higher precision obviously decreased the number of targets that were hit, but it did not reduce the variability in where the subjects tapped with respect to the target. Requiring a higher precision reduced the systematic deviations from landing at the target centre and the lag-one autocorrelation in such deviations, presumably because subjects received information about smaller deviations from hitting the target centre. We found no evidence for lasting effects of training with a certain required precision. All the results can be reproduced with a model in which the precision of individual movements is independent of the required precision, and in which feedback associated with missing the target is used to reduce systematic errors. We conclude that people do not approach this motor task differently when it is easy than when it is difficult.
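The lag-one autocorrelation referred to here is simply the correlation between the deviation from the target centre on one tap and the deviation on the next tap. A toy computation on invented data with a slowly drifting bias:

```python
import numpy as np

def lag_one_autocorrelation(deviations):
    """Correlation between consecutive deviations; positive values indicate that
    tap-by-tap errors are corrected only slowly."""
    return np.corrcoef(deviations[:-1], deviations[1:])[0, 1]

rng = np.random.default_rng(4)
drift = np.cumsum(rng.normal(0, 1.0, 200)) * 0.2      # slowly varying bias (mm, invented)
deviations = drift + rng.normal(0, 2.0, 200)          # tap-by-tap deviations (mm, invented)
print("lag-one autocorrelation:", round(lag_one_autocorrelation(deviations), 2))
```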

16.
Touch typing movements are typically too brief to use on-line feedback. Yet, previous studies have shown that blocking tactile feedback from the fingertips of typists leads to an increase in typing errors. To determine the contribution of tactile information to rapid fine motor skills, we analyzed kinematics of the right index finger during typing with and without tactile feedback. Twelve expert touch typists copy-typed sentences on a computer keyboard without vision of their hands or the computer screen. Following control trials, their right index fingertip was anesthetized, and the sentences were typed again. The movements of the finger were recorded with an instrumented glove and an electromagnetic position sensor. During anesthesia, typing errors made with that finger increased sevenfold. While the inter-keypress timing and average kinematics were unaffected, there was an increase in the variability of all measures. Regression analysis showed that endpoint variability was largely accounted for by start location variability. The results suggest that tactile cues provide information about the start location of the finger, which is necessary to perform typing movements accurately.
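The regression result can be pictured as a variance decomposition: regress the landing position of each keypress on its start position and ask how much endpoint variance is explained. The snippet below does this on invented numbers and is only meant to illustrate the measure, not the authors' analysis pipeline.

```python
import numpy as np

# Hypothetical per-keypress data (mm): start position of the downstroke and landing position.
rng = np.random.default_rng(5)
start = rng.normal(0.0, 3.0, 150)
endpoint = 0.8 * start + rng.normal(0.0, 1.0, 150)

slope, intercept = np.polyfit(start, endpoint, 1)
residuals = endpoint - (slope * start + intercept)
r_squared = 1.0 - residuals.var() / endpoint.var()
print(f"R^2 (endpoint variability explained by start variability): {r_squared:.2f}")
```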

17.
The accuracy of visually guided pointing movements decreases with speed. We have shown that, for movements to a visually defined remembered target, the variability of the final arm endpoint position does not depend on movement speed. We hypothesized that this observation can be explained by movements directed at remembered targets being produced without ongoing corrections. In the present study, this hypothesis was tested for pointing movements in 3D space to kinesthetically defined remembered targets. Passive versus active acquisition of kinesthetic information was contrasted. Pointing errors, movement kinematics, and joint-angle coordination were analyzed. The movements were performed at a slow speed (average peak tangential velocity of about 1.2 m/s) and at a fast speed (2.7 m/s). No visual feedback was allowed during the target presentation or the movement. Variability in the final position of the arm endpoint did not increase with speed in either the active or the passive condition. Variability in the final values of the arm-orientation angles determining the position of the forearm and of the upper arm in space was also speed invariant. This invariance occurred despite the fact that angular velocities increased by a factor of two for all the angles involved. The speed-invariant variability supports the hypothesis that ongoing corrections are absent for movements to remembered targets: in the case of a slower movement, where there is more time for movement correction, the final arm endpoint variability did not decrease. In contrast to the variability in the final endpoint position, the variability in the peak tangential acceleration increased significantly with movement speed. This may imply that the nervous system adopts one of two strategies: either the final endpoint position is not encoded in terms of muscle torques, or there is a special on-line mechanism that adjusts movement deceleration according to the muscle-torque variability at the initial stage of the movement. The final endpoint position was on average farther from the shoulder than the target. Constant radial-distance errors were speed dependent in both the active and the passive conditions. In the fast speed conditions, the radial-distance overshoots of the targets increased. This increase in radial-distance overshoot with movement speed can be explained by the hypothesis that the final arm position is not predetermined in these experimental conditions, but is defined during the movement by a feedforward or feedback mechanism with an internal delay.

18.
People naturally direct their gaze to visible hand movement goals. Doing so improves reach accuracy through use of signals related to gaze position and visual feedback of the hand. Here, we studied where people naturally look when acting on remembered target locations. Four targets were presented on a screen, in peripheral vision, while participants fixated a central cross (encoding phase). Four seconds later, participants used a pen to mark the remembered locations while free to look wherever they wished (recall phase). Visual references, including the screen and the cross, were present throughout. During recall, participants neither looked at the marked locations nor prevented eye movements. Instead, gaze behavior was erratic and consisted of gaze shifts loosely coupled in time and space with hand movements. To examine whether eye and hand movements during encoding affected gaze behavior during recall, in additional encoding conditions, participants marked the visible targets with either free gaze or with central cross fixation, or just looked at the targets. All encoding conditions yielded similar erratic gaze behavior during recall. Furthermore, encoding mode did not influence recall performance, suggesting that participants, during recall, did not exploit sensorimotor memories related to hand and gaze movements during encoding. Finally, we recorded a similar loose coupling between hand and eye movements during an object manipulation task performed in darkness after participants had viewed the task environment. We conclude that acting on remembered versus visible targets can engage fundamentally different control strategies, with gaze largely decoupled from movement goals during memory-guided actions.

19.
The purpose of this research was to determine the cortical circuit involved in encoding and controlling kinesthetically guided reaching movements. We used 15O-butanol positron emission tomography in ten blindfolded able-bodied volunteers in a factorial experiment in which the arm used to encode the target location and to reach back to the remembered location (left/right) and the hemispace of the target location (left/right side of the midsagittal plane) were varied systematically. During encoding of a target, the experimenter guided the hand to touch the index fingertip to an external target and then returned the hand to the start location. After a short delay the subject voluntarily moved the same hand back to the remembered target location. SPM99 analysis of the PET data contrasting left versus right hand reaching showed increased (P < 0.05, corrected) neural activity in the sensorimotor cortex, premotor cortex and posterior parietal lobule (PPL) contralateral to the moving hand. Additional neural activation was observed in prefrontal cortex and visual association areas of the occipital and parietal lobes contralateral and ipsilateral to the reaching hand. There was no statistically significant effect of target location in the left versus right hemispace, nor was there an interaction of hand and hemispace effects. Structural equation modeling showed that parietal lobe visual association areas contributed to kinesthetic processing by both hands, but occipital lobe visual areas contributed only during dominant hand kinesthetic processing. This visual processing may involve visualization of the kinesthetically guided target location and use, when reaching to kinesthetic targets, of the same network employed to guide reaches to visual targets. The present work clearly demonstrates a network for kinesthetic processing that includes higher visual processing areas in the PPL for both upper limbs and processing in occipital lobe visual areas for the dominant limb.

20.
Manipulation of objects around the head requires an accurate and stable internal representation of their locations in space, even during movements of the eyes or head. For far space, the representation of visual stimuli for goal-directed arm movements relies on retinal updating when eye movements are involved. Recent neurophysiological studies led us to infer that a transformation of visual space from a retinocentric to a head-centric representation may be involved for visual objects in close proximity to the head. The first aim of this study was to investigate whether there is indeed such a representation for remembered visual targets of goal-directed arm movements. Participants had to point toward an initially foveated central target after an intervening saccade. Participants made errors that reflect a bias in the visuomotor transformation that depends on eye displacement rather than on any head-centred variable. The second issue addressed was whether pointing toward the centre of a wide-field expanding motion pattern involves a retinal updating mechanism or a transformation to a head-centric map, and whether that process is distance dependent. The same pattern of pointing errors in relation to gaze displacement was found independent of depth. We conclude that for goal-directed arm movements, the representation of remembered visual targets is updated in a retinal frame, a mechanism that is actively used regardless of target distance, stimulus characteristics or the requirements of the task.
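The updating scheme the authors argue for can be caricatured in a few lines: the remembered retinal target vector is shifted by (a gain times) the eye displacement, so any imperfection in that gain produces pointing errors that scale with the saccade, which is the signature reported here. The gain value and saccade sizes below are illustrative assumptions, not estimates from the study.

```python
import numpy as np

def update_retinal_target(retinal_target_deg, saccade_deg, gain=1.0):
    """Shift the remembered retinal target by the (possibly under-scaled) eye displacement."""
    return retinal_target_deg - gain * saccade_deg

# An initially foveated target (0, 0) and rightward saccades of increasing amplitude;
# a hypothetical updating gain of 0.9 leaves a bias that grows with eye displacement.
for amplitude in (6.0, 12.0, 18.0):
    saccade = np.array([amplitude, 0.0])
    biased = update_retinal_target(np.zeros(2), saccade, gain=0.9)
    ideal = update_retinal_target(np.zeros(2), saccade, gain=1.0)
    print(f"saccade {amplitude:4.1f} deg -> pointing bias {biased[0] - ideal[0]:.1f} deg")
```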

