Similar articles
Found 20 similar articles (search time: 46 ms)
1.
Perception of forearm angles in 3-dimensional space   (Total citations: 1; self-citations: 1; citations by others: 0)
Summary The purpose of this study was to determine a preferred coordinate system for representation of forearm orientation in 3-dimensional space. In one experiment, the ability of human subjects to perceive angles of the forearm in 3-dimensional space (forearm elevation and yaw — extrinsic coordinate system) was compared to their ability to perceive elbow joint angle (intrinsic coordinate system). While blindfolded, subjects performed an angle reproduction task in which the experimenter first positioned the upper limb in a reference trial. This was followed, after movement of the subject's entire upper limb to a different position, by an attempt to reproduce or match a criterion angle of the reference trial by motion of the forearm in elbow flexion or extension only. Note that matching of the criterion forearm angle in the new upper limb position could not be accomplished by reproducing the entire reference upper limb position, but only by angular motion at the elbow. Matching of all 3 criterion angles was accomplished with about equal accuracy in terms of absolute constant errors and variable errors. Correlation analysis of the perceptual errors showed that forearm elevation and elbow angle perception errors were not biased, but that forearm yaw angle matching showed a bias toward elbow angle matching in 7 of 9 subjects. That is, errors in forearm yaw perception were attributed to a tendency toward a preferred intrinsic coordinate system for perception of forearm orientation. These results show that subjects can accurately perceive angles in both extrinsic and intrinsic coordinate systems in 3-dimensional space. Thus, these data conflict with previous reports of highly inaccurate perception of elbow joint angles in comparison to perception of forearm elevation.
In an attempt to resolve this conflict with previous results, a second experiment was carried out in which perception of forearm elevation and elbow joint angles was examined with forearm motion constrained to a vertical plane. Results of this experiment showed that during a two-limb elbow angle matching task, four of five subjects exhibited a clear bias toward forearm elevation angle. During a one-limb angle reproduction task, only two of five subjects exhibited such a bias. Perception of elevation angles showed little bias toward elbow angle matching. These results indicate that use of tasks in which the limb is supported against gravity and motion is constrained to a vertical plane causes subjects to make perceptual errors during elbow angle matching, such that the slopes of the forearms in a vertical plane (elevation angles) are more easily matched. It is concluded that human subjects can use both extrinsic and intrinsic coordinate systems in planning movements. Kinematic aspects may be planned in terms of an extrinsic coordinate system because of the use of vision in specifying the location of external targets, but kinetic aspects of movement planning probably require use of both forearm elevation angles and elbow joint angles to accurately specify forces and torques for muscles spanning the elbow.
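The two coordinate systems compared above can be made concrete in code: extrinsic angles describe a segment vector relative to external axes, while the intrinsic elbow angle is the included angle between upper-arm and forearm vectors. A minimal numpy sketch (the axis conventions, x forward, y left, z up, are an assumption for illustration, not taken from the study):

```python
import numpy as np

def segment_angles(v):
    """Extrinsic angles of a limb-segment vector (assumed axes: x forward,
    y left, z up): elevation above the horizontal plane and yaw (azimuth)."""
    v = np.asarray(v, float)
    v = v / np.linalg.norm(v)
    elevation = np.degrees(np.arcsin(v[2]))
    yaw = np.degrees(np.arctan2(v[1], v[0]))
    return elevation, yaw

def elbow_angle(upper_arm, forearm):
    """Intrinsic angle: the included angle between the two segment vectors
    (0 deg = fully extended in this convention)."""
    u = np.asarray(upper_arm, float)
    f = np.asarray(forearm, float)
    c = np.dot(u, f) / (np.linalg.norm(u) * np.linalg.norm(f))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# a horizontal, forward-pointing forearm: 0 deg elevation, 0 deg yaw
elev, yaw = segment_angles([1.0, 0.0, 0.0])
# collinear upper arm and forearm: 0 deg included angle
ang = elbow_angle([1.0, 0.0, 0.0], [2.0, 0.0, 0.0])
```

Note that the same elbow angle is compatible with many extrinsic forearm orientations, which is why accurate matching in one system does not trivially produce accurate matching in the other.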

2.
The purpose of this investigation was to determine the preferred coordinate system for perception of arm (humerus) orientation in three-dimensional space. Perception of arm orientation relative to trunk-fixed versus earth-fixed axes was compared in seven human subjects. The experimenter first moved the subject's trunk and arm into a target configuration (in which the arm's orientation relative to the trunk and/or earth was perceived and memorized by the subject) and then moved the trunk and arm to a new configuration. The blindfolded subject then attempted to reproduce the target orientation of the arm relative to either the trunk (i.e., reproduce shoulder angles — intrinsic kinesthetic coordinate system) or earth-fixed axes (extrinsic kinesthetic coordinate system). Perceptual errors were similar for both shoulder (arm relative to trunk) and extrinsic (arm relative to earth) angles. However, elevation angles were perceived with greater accuracy than yaw angles in the two coordinate systems. Also, perceptual errors for arm yaw angles in the extrinsic kinesthetic coordinate system task were better predicted from changes in trunk orientation than the errors for other angles. Furthermore, four subjects matched arm yaw angle relative to the trunk-fixed axis more accurately than to the earth-fixed axis in the extrinsic coordinate system task. These results suggest a bias toward perception of yaw angles relative to trunk-fixed axes (i.e., in an intrinsic coordinate system). These data suggest that the preferred coordinate system for kinesthetic perception of arm orientation is probably fixed in the trunk. Sensory receptors in soft tissues surrounding the shoulder joint can provide sensations related directly to intrinsic (shoulder) angles, but not to angles of the arm in relation to external axes. However, elevation angles of the arm are perceived with about equal accuracy in relation to the trunk and the gravitational axis.
Accurate perception of the angle of the arm with respect to gravity may be important for computing the shoulder joint torques needed when producing upper limb movements.
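The trunk-fixed versus earth-fixed comparison above is just a change of basis. A minimal sketch (trunk motion reduced to yaw about the vertical axis, an assumption for illustration): re-expressing an earth-fixed arm vector in trunk axes is the inverse (transpose) trunk rotation, and because yaw leaves the vertical component untouched, elevation is the same in both frames, consistent with the equal elevation accuracy reported above.

```python
import numpy as np

def yaw_rotation(deg):
    """Rotation about the vertical (earth z) axis by deg degrees."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def arm_in_trunk_frame(arm_earth, trunk_yaw_deg):
    """Re-express an earth-fixed arm vector in trunk-fixed axes, given the
    trunk's yaw relative to the earth frame (inverse = transpose rotation)."""
    return yaw_rotation(trunk_yaw_deg).T @ np.asarray(arm_earth, float)

# arm pointing straight ahead in earth axes while the trunk is turned
# 90 deg to the left: in trunk axes the arm points 90 deg to the right
v = arm_in_trunk_frame([1.0, 0.0, 0.0], 90.0)
# the vertical component is unchanged by trunk yaw, so elevation relative
# to the trunk and relative to gravity coincide
w = arm_in_trunk_frame([0.0, 0.0, 1.0], 37.0)
```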

3.
The aim of this study was to further understand how the brain represents spatial information for shaping aiming movements to targets. Both behavioral and neurophysiological studies have shown that the brain represents spatial memory for reaching targets in an eye-fixed frame. To date, these studies have only shown how the brain stores and updates target locations for generating a single arm movement. But once a target's location has been computed relative to the hand to program a pointing movement, is that information reused for subsequent movements to the same location? Or is the remembered target location reconverted from eye to motor coordinates each time a pointing movement is made? To test between these two possibilities, we had subjects point twice to the remembered location of a previously foveated target after shifting their gaze to the opposite side of the target site before each pointing movement. When we compared the direction of pointing errors for the second movement to those of the first, we found that errors for each movement varied as a function of current gaze so that pointing endpoints fell on opposite sides of the remembered target site in the same trial. Our results suggest that when shaping multiple pointing movements to the same location the brain does not use information from the previous arm movement such as an arm-fixed representation of the target but instead mainly uses the updated eye-fixed representation of the target to recalculate its location into the appropriate motor frame.  相似文献   
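The signature result above, endpoints landing on opposite sides of the target when gaze is on opposite sides, can be reproduced by a toy model in which the eye-centered target representation (target minus gaze) is read out with a gain slightly above one. The gain value here is an invented illustration, not a fitted parameter from the study:

```python
import numpy as np

def pointing_endpoint(target, gaze, gain=1.1):
    """Toy model of gaze-dependent pointing error: the eye-centered target
    vector (target - gaze) is scaled by a gain slightly above 1 (the gain
    is an assumption for illustration), so endpoints land on the far side
    of the target from the current gaze direction."""
    target = np.asarray(target, float)
    gaze = np.asarray(gaze, float)
    return gaze + gain * (target - gaze)

target = np.array([0.0, 0.0])
# gaze shifted to opposite sides of the target site before each movement:
# the two endpoints fall on opposite sides of the remembered target,
# as expected if errors follow current gaze rather than the previous reach
first = pointing_endpoint(target, gaze=np.array([-10.0, 0.0]))
second = pointing_endpoint(target, gaze=np.array([10.0, 0.0]))
```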

4.
This study addressed the question of how the three-dimensional (3-D) control strategy for the upper arm depends on what the forearm is doing. Subjects were instructed to point a laser — attached in line with the upper arm — toward various visual targets, such that two-dimensional (2-D) pointing directions of the upper arm were held constant across different tasks. For each such task, subjects maintained one of several static upper arm-forearm configurations, i.e., each with a set elbow angle and forearm orientation. Upper arm, forearm, and eye orientations were measured with the use of 3-D search coils. The results confirmed that Donders' law (a behavioral restriction of 3-D orientation vectors to a 2-D "surface") does not hold across all pointing tasks, i.e., for a given pointing target, upper arm torsion varied widely. However, for any one static elbow configuration, torsional variance was considerably reduced and was independent of previous arm position, resulting in a thin, Donders-like surface of orientation vectors. More importantly, the shape of this surface (which describes upper arm torsion as a function of its 2-D pointing direction) depended on both elbow angle and forearm orientation. For pointing with the arm fully extended or with the elbow flexed in the horizontal plane, a Listing's-law-like strategy was observed, minimizing shoulder rotations to and from center at the cost of position-dependent tilts in the forearm. In contrast, when the arm was bent in the vertical plane, the surface of best fit showed a Fick-like twist that increased continuously as a function of static elbow flexion, thereby reducing position-dependent tilts of the forearm with respect to gravity. In each case, the torsional variance from these surfaces remained constant, suggesting that Donders' law was obeyed equally well for each task condition.
Further experiments established that these kinematic rules were independent of gaze direction and eye orientation, suggesting that Donders' law of the arm does not coordinate with Listing's law for the eye. These results revive the idea that Donders' law is an important governing principle for the control of arm movements but also suggest that its various forms may only be limited manifestations of a more general set of context-dependent kinematic rules. We propose that these rules are implemented by neural velocity commands arising as a function of initial arm orientation and desired pointing direction, calculated such that the torsional orientation of the upper arm is implicitly coordinated with desired forearm posture.
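The Donders-surface analysis described above amounts to regressing upper-arm torsion on a quadratic function of the 2-D pointing direction and reading the residual scatter as the surface's "thickness". A sketch on synthetic data (the coefficients and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic pointing data: 2-D pointing directions (deg) and a torsion that
# lies on a quadratic, Donders-like surface plus measurement noise
yaw = rng.uniform(-30.0, 30.0, 200)
pitch = rng.uniform(-30.0, 30.0, 200)
true_coef = np.array([0.5, 0.1, -0.2, 0.01, 0.02, -0.01])
X = np.column_stack([np.ones_like(yaw), yaw, pitch,
                     yaw ** 2, yaw * pitch, pitch ** 2])
torsion = X @ true_coef + rng.normal(0.0, 0.1, 200)

# least-squares fit of the quadratic surface, as in Donders'-law analyses;
# the residual standard deviation is the "thickness" of the fitted surface
coef, *_ = np.linalg.lstsq(X, torsion, rcond=None)
residual_sd = np.std(torsion - X @ coef)
```

Comparing `residual_sd` across task conditions is the kind of test that supports the statement above that Donders' law was obeyed equally well in each condition even though the surface's shape changed.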

5.
The principal goal of our study is to gain insight into the representation of peripersonal space. Two different experiments were conducted in this study. In the first experiment, subjects were asked to represent the principal anatomical reference planes by drawing ellipses in the sagittal, frontal and horizontal planes. The three-dimensional hand-drawing movements, which were performed with and without visual guidance, were considered as the expression of a cognitive process per se: the peripersonal space representation for action. We measured errors in the spatial orientation of the ellipses with regard to the requested reference planes. For ellipses drawn without visual guidance, with eyes open and eyes closed, orientation errors were related to the reference planes: errors were minimal for the sagittal plane and maximal for the horizontal plane. These disparities in errors were considerably reduced when subjects drew using a visual guide. These findings imply that different planes are centrally represented and are characterized by different errors when subjects use a body-centered frame for performing the movement, and suggest that the representation of peripersonal space may be anisotropic. However, this representation can be modified when subjects use an environment-centered reference frame to produce the movement. In the second experiment, subjects were instructed to represent, with eyes open and eyes closed, the sagittal, frontal and horizontal planes by pointing to virtual targets located in these planes. Disparities in orientation errors measured for pointing were similar to those found for drawing, implying that the sensorimotor representation of reference planes was not constrained by the type of motor task. Moreover, arm postures measured at pointing endpoints and at comparable spatial locations in drawing are strongly correlated.
These results suggest that similar patterns of errors and arm posture correlation, for drawing and pointing, can be the consequence of using a common space representation and reference frame. These findings are consistent with the assumption of an anisotropic action-related representation of peripersonal space when the movement is performed in a body-centered frame.
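The orientation errors reported above for drawn ellipses can be quantified by fitting a plane to the 3-D trace (least squares via SVD) and measuring the angle between its normal and the requested reference plane's normal. A sketch with a synthetic ellipse (the axis convention is an assumption: y is the left-right axis, so the sagittal plane has normal (0, 1, 0)):

```python
import numpy as np

def plane_normal(points):
    """Least-squares plane normal of a 3-D point cloud, via SVD: the
    singular vector with the smallest singular value spans the axis of
    least variance, i.e., the fitted plane's normal."""
    P = np.asarray(points, float)
    P = P - P.mean(axis=0)
    return np.linalg.svd(P)[2][-1]

def orientation_error_deg(points, reference_normal):
    """Angle between the fitted plane and a requested reference plane."""
    n = plane_normal(points)
    r = np.asarray(reference_normal, float)
    c = abs(n @ r) / (np.linalg.norm(n) * np.linalg.norm(r))
    return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

# an ellipse traced exactly in the sagittal (x-z) plane has zero error
t = np.linspace(0.0, 2.0 * np.pi, 50)
ellipse = np.column_stack([np.cos(t), np.zeros_like(t), 2.0 * np.sin(t)])
err = orientation_error_deg(ellipse, [0.0, 1.0, 0.0])
```

A trace tilted out of the requested plane by some angle yields exactly that angle as its orientation error, which is the quantity compared across the sagittal, frontal and horizontal conditions.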

6.
Summary The purpose of this experiment was to determine the preferred coordinate system for representation of hand orientation in 3-dimensional space. The ability of human subjects to perceive angles of the hand in 3-dimensional space (elevation, yaw and roll angles — extrinsic coordinate system) was compared to their ability to perceive hand angles relative to the proximal upper limb segments (wrist joint angles and forearm pronation — intrinsic coordinate system). With eyes closed, subjects performed a matching task in which the experimenter positioned the left arm, forearm and hand and the right arm and forearm. Subjects were then told to match an angle in one of the two coordinate systems by moving only the right hand at the wrist, or the forearm, as in pronation or roll matching. Absolute constant error (ACE), variable error (VE) and normalized variable error (NVE — normalized to the tested range of motion) of matching were quantified for each subject for each of the six angles matched. It was hypothesized that matching angles in a preferred coordinate system would be associated with lower ACE, VE and NVE. Overall, ACE and VE were lower for matching hand angles in the intrinsic coordinate system. This suggests that the preferred coordinate system involved specification of hand angles relative to forearm and arm angles (joint angles) rather than hand angles relative to axes external to the upper limb. However, matching of pronation angles was associated with larger VE and NVE than roll angle matching. There were no significant differences in ACE between pronation and roll matching. In a second experiment, subjects with their forearms constrained at different elevations matched hand elevation and wrist flexion angles. Thus, errors in matching the angles in the non-preferred coordinate system would be predictable if the subjects were biased toward matching angles in the preferred coordinate system.
Trends in the data suggested that subjects preferred matching hand elevation angles, but these trends were not consistent within or between subjects. Thus, a preferred intrinsic coordinate system for wrist flexion matching was not observed in this experiment. We suggest that matching angles when proximal limb segments are constrained is a simpler task for the subjects (VE was lower than in the first experiment) and may bias the matching toward the extrinsic coordinate system. Thus, hand orientation in 3-dimensional space may be perceived as follows: wrist flexion and abduction angles, together with forearm elevation and yaw, are used to specify hand elevation and yaw; these, together with hand roll angle, completely specify the hand angle in 3-dimensional space.
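The three error measures used above (ACE, VE, NVE) are straightforward to compute from a series of matching trials. A short sketch; the trial values and the tested range of motion below are invented for illustration:

```python
import numpy as np

def matching_errors(produced, target, range_of_motion):
    """Absolute constant error (ACE), variable error (VE) and normalized
    variable error (NVE, i.e., VE divided by the tested range of motion)."""
    err = np.asarray(produced, float) - float(target)
    ace = abs(err.mean())   # magnitude of the systematic bias
    ve = err.std()          # trial-to-trial variability about that bias
    return ace, ve, ve / range_of_motion

# five matches of a 30-deg target over an assumed 60-deg tested range
ace, ve, nve = matching_errors([32.0, 28.0, 31.0, 33.0, 29.0],
                               target=30.0, range_of_motion=60.0)
```

Normalizing VE by the range of motion, as in NVE, is what makes variability comparable across angles with very different tested ranges (e.g., pronation versus roll).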

7.
The preceding study demonstrated that normal subjects compensate for the additional interaction torques generated when a reaching movement is made during voluntary trunk rotation. The present paper assesses the influence of trunk rotation on finger trajectories and on interjoint coordination and determines whether simultaneous turn-and-reach movements are most simply described relative to a trunk-based or an external reference frame. Subjects reached to targets requiring different extents of arm joint and trunk rotation at a natural pace and quickly in normal lighting and in total darkness. We first examined whether the larger interaction torques generated during rapid turn-and-reach movements perturb finger trajectories and interjoint coordination and whether visual feedback plays a role in compensating for these torques. These issues were addressed using generalized Procrustes analysis (GPA), which attempts to overlap a group of configurations (e.g., joint trajectories) through translations and rotations in multi-dimensional space. We first used GPA to identify the mean intrinsic patterns of finger and joint trajectories (i.e., their average shape irrespective of location and orientation variability in the external and joint workspaces) from turn-and-reach movements performed in each experimental condition and then calculated their curvatures. We then quantified the discrepancy between each finger or joint trajectory and the intrinsic pattern both after GPA was applied individually to trajectories from a pair of experimental conditions and after GPA was applied to the same trajectories pooled together. For several subjects, joint trajectories but not finger trajectories were more curved in fast than slow movements. The curvature of both joint and finger trajectories of turn-and-reach movements was relatively unaffected by the vision conditions. 
Pooling across speed conditions significantly increased the discrepancy between joint but not finger trajectories for most subjects, indicating that subjects used different patterns of interjoint coordination in slow and fast movements while nevertheless preserving the shape of their finger trajectory. Higher movement speeds did not disrupt the arm joint rotations despite the larger interaction torques generated. Rather, subjects used the redundant degrees of freedom of the arm/trunk system to achieve similar finger trajectories with differing joint configurations. We examined finger movement patterns and velocity profiles to determine the frame of reference in which turn-and-reach movements could be most simply described. Finger trajectories of turn-and-reach movements had much larger curvatures and their velocity profiles were less smooth and less bell-like in trunk-based coordinates than in external coordinates. Taken together, these results support the conclusion that turn-and-reach movements are controlled in an external frame of reference.
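The GPA procedure described above builds on ordinary Procrustes superimposition: each trajectory is translated and rotated to best overlap a reference (in full GPA, iteratively against the evolving group mean), and the residual after alignment measures how much the intrinsic shapes differ. A minimal two-trajectory sketch:

```python
import numpy as np

def procrustes_align(A, B):
    """Superimpose trajectory B onto A with the optimal rotation and
    translation (ordinary Procrustes; note U @ Vt may include a reflection,
    which full implementations usually constrain away)."""
    A = np.asarray(A, float)
    B = np.asarray(B, float)
    Ac, Bc = A - A.mean(axis=0), B - B.mean(axis=0)
    U, _, Vt = np.linalg.svd(Bc.T @ Ac)
    return Bc @ (U @ Vt) + A.mean(axis=0)

def shape_discrepancy(A, B):
    """Residual distance after alignment: how much the intrinsic shapes of
    two trajectories differ, irrespective of location and orientation."""
    return np.linalg.norm(np.asarray(A, float) - procrustes_align(A, B))

# a rotated and shifted copy of a trajectory has (near-)zero discrepancy
t = np.linspace(0.0, 1.0, 20)
traj = np.column_stack([t, t ** 2])
th = np.radians(40.0)
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
moved_copy = traj @ R.T + np.array([3.0, -1.0])
d = shape_discrepancy(traj, moved_copy)
```

Pooling trajectories from two conditions before alignment, as in the analysis above, inflates this discrepancy only if the conditions differ in intrinsic shape, not merely in location or orientation.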

8.
9.
Movements of different body segments may be combined in different ways to achieve the same motor goal. How this is accomplished by the nervous system was investigated by having subjects make fast pointing movements with the arm in combination with a forward bending of the trunk that was unexpectedly blocked in some trials. Subjects moved their hand above the surface of a table without vision from an initial position near the midline of the chest to remembered targets placed within the reach of the arm in either the ipsi- or contralateral workspace. In experiment 1, subjects were instructed to make fast arm movements to the target without corrections whether or not the trunk was arrested. Only minor changes were found in the hand trajectory and velocity profile in response to the trunk arrest, and these changes were seen only late in the movement. In contrast, the patterns of the interjoint coordination substantially changed in response to the trunk arrest, suggesting the presence of compensatory arm-trunk coordination minimizing the deflections from the hand trajectory regardless of whether the trunk is recruited or mechanically blocked. Changes in the arm interjoint coordination in response to the trunk arrest could be detected kinematically at a minimal latency of 50 ms. This finding suggests a rapid reflex compensatory mechanism driven by vestibular and/or proprioceptive afferent signals. In experiment 2, subjects were required, as soon as they perceived the trunk arrest, to change the hand motion to the same direction as that of the trunk. Under this instruction, subjects were able to initiate corrections only after the hand approached or reached the final position. Thus, centrally mediated compensatory corrections triggered in response to the trunk arrest were likely to occur too late to maintain the observed invariant hand trajectory in experiment 1. 
In experiment 3, subjects produced similar pointing movements, but to a target that moved together with the trunk. In these body-oriented pointing movements, the hand trajectories from trials in which the trunk was moving or arrested were substantially different. The same trajectories represented in a relative frame of reference moving with the trunk were virtually identical. We conclude that hand trajectory invariance can be produced in an external spatial (experiment 1) or an internal trunk-centered (experiment 3) frame of reference. The invariance in the external frame of reference is accomplished by active compensatory changes in the arm joint angles nullifying the influence of the trunk motion on the hand trajectory. We suggest that to make a transition to the internal frame of reference, control systems suppress this compensation. One hypothesis open to further experimental testing is that the integration of additional (trunk) degrees of freedom into movement is based on afferent (proprioceptive, vestibular) signals stemming from the trunk motion and transmitted to the arm muscles.
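The comparison above between external and trunk-centered descriptions amounts to re-expressing each hand sample in a frame that rotates with the trunk. A 2-D sketch (trunk motion reduced to yaw about the vertical axis, an assumption for illustration): a hand that simply rides along with the trunk traces an arc in external space but is stationary in the trunk frame, which is exactly the kind of trajectory that looks complex in one frame and trivial in the other.

```python
import numpy as np

def hand_in_trunk_frame(hand_xy, trunk_yaw_deg):
    """Re-express external-frame hand samples in a frame rotating with the
    trunk, by applying the inverse (transpose) trunk rotation per sample."""
    out = []
    for (x, y), deg in zip(hand_xy, trunk_yaw_deg):
        a = np.radians(deg)
        out.append([np.cos(a) * x + np.sin(a) * y,
                    -np.sin(a) * x + np.cos(a) * y])
    return np.array(out)

# a hand riding along with a 0 -> 90 deg trunk rotation traces an arc in
# external space but stays fixed at (1, 0) in trunk coordinates
yaws = np.linspace(0.0, 90.0, 10)
arc = np.column_stack([np.cos(np.radians(yaws)), np.sin(np.radians(yaws))])
in_trunk = hand_in_trunk_frame(arc, yaws)
```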

10.
Goal-directed movements require mapping of target information to patterns of muscular activation. While visually acquired information about targets is initially encoded in extrinsic, object-centered coordinates, muscular activation patterns are encoded in intrinsic, body-related coordinates. Intermanual transfer of movements previously learned with one hand is accomplished by the recall of unmodified extrinsic coordinates if the task is performed in original orientation. Intrinsic coordinates are retrieved in case of mirror-reversed orientation. In contrast, learned extrinsic coordinates are modified during the mirror movement and intrinsic coordinates during the originally oriented task. To investigate the neural processes of recall and modification, electroencephalogram (EEG) recording was employed during the performance of a figure drawing task previously trained with the right hand in humans. The figure was reproduced with the right hand (Learned-task) and with the left hand in original (Normal-task) and mirror orientations (Mirror-task). Prior to movement onset, beta-power and alpha- and beta-coherence decreased during the Normal-task as compared with the Learned-task. Negative amplitudes over fronto-central sites during the Normal-task exceeded amplitudes manifested during the Learned-task. In comparison to the Learned-task, coherences between fronto-parietal sites increased during the Mirror-task. Results indicate that intrinsic coordinates are processed during the pre-movement period. During the Normal-task, modification of intrinsic coordinates was revealed by cerebral activation. Decreased coherences appeared to reflect suppressed inter-regional information flow associated with utilization of intrinsic coordinates. During the Mirror-task, modification of extrinsic coordinates induced activation of cortical networks.

11.
To produce accurate goal-directed arm movements, subjects must determine the precise location of the target object. The position of extracorporeal objects can be determined using: (a) an egocentric frame of reference, in which the target is localized in relation to the position of the body; and/or (b) an allocentric system, in which target position is determined in relation to stable visual landmarks surrounding the target (Bridgeman 1989; Paillard 1991). The present experiment was based on the premise that (a) the presence of a structured visual environment enables the use of an allocentric frame of reference, and (b) the sole presence of a visual target within a homogeneous background forces the registration of the target location by an egocentric system. Normal subjects and a deafferented patient (i.e., with an impaired egocentric system) pointed to visual targets presented in both visual environments to evaluate the efficiency of the two reference systems. For normal subjects, the visual environment conditions did not affect pointing accuracy. However, kinematic parameters were affected by the presence or absence of a structured visual surrounding. For the deafferented patient, the presence of a structured visual environment permitted a decrease in spatial errors when compared with the unstructured surrounding condition (for movements with or without visual feedback of the trajectory). Overall, results support the existence of an egocentric and an allocentric reference system capable of organizing extracorporeal space during arm movements directed toward visual targets.

12.
Path constraints on point-to-point arm movements in three-dimensional space   (Total citations: 2; self-citations: 0; citations by others: 2)
In this paper data are presented concerning the kinematic and dynamic characteristics of point-to-point arm movements which are inwardly or outwardly directed in three-dimensional space. Elbow and wrist position as well as elbow angle of extension were measured. From these data, other angles were computed trigonometrically, and elbow and shoulder torques were calculated. Some of the angles describing arm and forearm motion were found to be linearly related for any given movement. Changes in shoulder and elbow torque were found to be similar to those described for movements restricted to one degree of freedom. Shoulder and elbow motions were not affected when it was required that the orientation of the hand in space remain constant. These observations were taken to indicate that shoulder and elbow motions are tightly coupled for movements in three-dimensional space and that wrist motion has no influence on this coupling. Linear relations between angles express such coupling. They are taken to result from functional constraints and may facilitate the mapping between extrinsic and intrinsic coordinate systems. Some of the observations pertaining to the torques led to the hypothesis of a further constraint limiting the number of possible trajectories in a point-to-point movement.
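The linear angle coupling reported above is the kind of relation a straight-line fit makes explicit. A sketch on synthetic angle samples (the slope, intercept and noise level are invented for illustration, not measured values):

```python
import numpy as np

# synthetic samples with the kind of linear angle coupling reported above
rng = np.random.default_rng(1)
arm_angle = rng.uniform(10.0, 70.0, 100)
forearm_angle = 0.8 * arm_angle + 5.0 + rng.normal(0.0, 1.0, 100)

# a straight-line fit captures the coupling; a correlation near 1 expresses
# the tight shoulder-elbow coordination described in the text
slope, intercept = np.polyfit(arm_angle, forearm_angle, 1)
r = np.corrcoef(arm_angle, forearm_angle)[0, 1]
```

A coupling of this form reduces the effective degrees of freedom of the arm, which is what would simplify the mapping between extrinsic and intrinsic coordinate systems.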

13.
At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
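The gain account discussed above can be sketched as a difference-vector computation in eye-centered coordinates: with unity gains the planned vector is exact and independent of gaze, while a gain slightly off unity produces the gaze-dependent errors described. The gain values below are illustrative, not fitted parameters:

```python
import numpy as np

def movement_vector(target, hand, gaze, target_gain=1.0, hand_gain=1.0):
    """Difference-vector model: target and hand position are expressed in
    eye-centered coordinates (relative to gaze) before subtraction; the
    gain elements modulate those eye-centered signals."""
    target = np.asarray(target, float)
    hand = np.asarray(hand, float)
    gaze = np.asarray(gaze, float)
    return target_gain * (target - gaze) - hand_gain * (hand - gaze)

# with unity gains the planned vector is exact and gaze-independent
v1 = movement_vector([10.0, 5.0], [0.0, 0.0], gaze=[-20.0, 0.0])
v2 = movement_vector([10.0, 5.0], [0.0, 0.0], gaze=[30.0, 10.0])
# a target gain above 1 produces gaze-dependent pointing errors
v3 = movement_vector([10.0, 5.0], [0.0, 0.0], gaze=[-20.0, 0.0],
                     target_gain=1.1)
```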

14.
The central nervous system uses stereotypical combinations of the three wrist/forearm joint angles to point in a given (2D) direction in space. In this paper, we first confirm and analyze this Donders’ law for the wrist as well as the distributions of the joint angles. We find that the quadratic surfaces fitting the experimental wrist configurations during pointing tasks are characterized by a subject-specific Koenderink shape index and by a bias due to the prono-supination angle distribution. We then introduce a simple postural model using only four parameters to explain these characteristics in a pointing task. The model specifies the redundancy of the pointing task by determining the one-dimensional task-equivalent manifold (TEM), parameterized via wrist torsion. For every pointing direction, the torsion is obtained by the concurrent minimization of an extrinsic cost, which guarantees minimal angle rotations (similar to Listing’s law for eye movements) and of an intrinsic cost, which penalizes wrist configurations away from comfortable postures. This allows simulating the sequence of wrist orientations to point at eight peripheral targets, from a central one, passing through intermediate points. The simulation first shows that in contrast to eye movements, which can be predicted by only considering the extrinsic cost (i.e., Listing’s law), both costs are necessary to account for the wrist/forearm experimental data. Second, fitting the synthetic Donders’ law from the simulated task with a quadratic surface yields similar fitting errors compared to experimental data.
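The model's choice of torsion along the task-equivalent manifold can be sketched as a one-dimensional minimization of the summed extrinsic and intrinsic costs. All weights and postures below are illustrative stand-ins, not the paper's fitted parameters:

```python
import numpy as np

def optimal_torsion(w_extrinsic=1.0, w_intrinsic=1.0,
                    listing_torsion=0.0, comfort_torsion=20.0):
    """Choose wrist torsion along the task-equivalent manifold by minimizing
    a weighted sum of an extrinsic cost (rotation away from a Listing-like
    zero-torsion solution) and an intrinsic cost (distance from a
    comfortable posture). Quadratic costs on a dense grid, for clarity."""
    torsion = np.linspace(-45.0, 45.0, 901)   # candidate points on the TEM
    cost = (w_extrinsic * (torsion - listing_torsion) ** 2
            + w_intrinsic * (torsion - comfort_torsion) ** 2)
    return torsion[np.argmin(cost)]

# equal weights place the chosen torsion midway between the two attractors;
# dropping the intrinsic cost recovers a pure Listing-like solution
balanced = optimal_torsion()
listing_only = optimal_torsion(w_intrinsic=0.0)
```

This mirrors the paper's central contrast: eye movements are captured by the extrinsic cost alone, whereas the wrist data require both terms.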

15.
Summary In this study we investigated pointing movements made with an extended arm. Despite the large number of mechanical degrees of freedom, limb orientation adopted during pointing could be described by rotation axes contained on a two-dimensional curved surface. As a result of the curvature, the orientation of a linear plane approximating a small region of the curved surface was dependent on the location of the movements within the full workspace. These results account for earlier suggestions that limb orientation could be described by coplanar rotation vectors and that the orientation of the plane moved with the workspace. Despite the additional complexity, our results indicate that the number of degrees of freedom used to position the extended forearm is reduced from four to two for normal pointing movements. Contributions to orientation of the wrist and hand by supination/pronation of the forearm were minor for changes in shoulder yaw angle. However, supination/pronation added significantly to orientation of the hand for changes in shoulder pitch angle.

16.
The abilities of human subjects to perform reach and grasp movements to remembered locations/orientations of a cylindrical object were studied under four conditions: (1) visual presentation of the object — reach with vision allowed; (2) visual presentation — reach while blindfolded; (3) kinesthetic presentation of the object — reach while blindfolded; and (4) kinesthetic presentation — reach with vision. The results showed that subjects were very accurate in locating the object in the purely kinesthetic condition and that directional errors were low in all four conditions; but, predictable errors in reach distance occurred in conditions 1, 2, and 4. The pattern of these distance errors was similar to that identified in previous research using a pointing task to a small target (i.e., overshoots of close targets, undershoots of far targets). The observation that the pattern of distance errors in condition 4 was similar to that of conditions 1 and 2 suggests that subjects transform kinesthetically defined hand locations into a visual coordinate system when vision is available during upper limb motion to a remembered kinesthetic target. The differences in orientation of the upper limb between target and reach positions in condition 3 were similar in magnitude to the errors associated with kinesthetic perceptions of arm and hand orientations in three-dimensional space reported in previous studies. However, fingertip location was specified with greater accuracy than the orientation of upper limb segments. This was apparently accomplished by compensation of variations in shoulder (arm) angles with oppositely directed variations in elbow joint angles. Subjects were also able to transform visually perceived object orientation into an appropriate hand orientation for grasp, as indicated by the relation between hand roll angle and object orientation (elevation angle). The implications of these results for control of upper limb motion to external targets are discussed.
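The overshoot/undershoot pattern in reach distance described above is the classic range effect: responses contract toward the middle of the tested range. A toy linear model (the workspace center and contraction gain are invented for illustration):

```python
def reach_distance(target_distance, center=35.0, gain=0.8):
    """Range-effect model of the reported distance errors: reaches contract
    toward the middle of the workspace (center and gain are invented
    values), overshooting near targets and undershooting far ones."""
    return center + gain * (target_distance - center)

near = reach_distance(20.0)   # close target: the reach lands beyond it
far = reach_distance(50.0)    # far target: the reach falls short
```

That the same pattern appeared even for kinesthetically presented targets reached with vision (condition 4) is what suggests a conversion into visual coordinates before the reach.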

17.
Accurate performance of reaching movements depends on adaptable neural circuitry that learns to predict forces and compensate for limb dynamics. In earlier experiments, we quantified generalization from training at one arm position to another position. The generalization patterns suggested that neural elements learning to predict forces coded a limb's state in an intrinsic, muscle-like coordinate system. Here, we test the sensitivity of these elements to the other arm by quantifying inter-arm generalization. We considered two possible coordinate systems: an intrinsic (joint) representation should generalize with mirror symmetry reflecting the joint's symmetry, and an extrinsic representation should preserve the task's structure in extrinsic coordinates. Both coordinate systems of generalization were compared with a naïve control group. We tested transfer in right-handed subjects both from dominant to nondominant arm (D→ND) and vice versa (ND→D). This led to a 2 × 3 experimental design matrix: transfer direction (D→ND/ND→D) by coordinate system (extrinsic, intrinsic, control). Generalization occurred only from dominant to nondominant arm and only in extrinsic coordinates. To assess the dependence of generalization on callosal inter-hemispheric communication, we tested commissurotomy patient JW. JW showed generalization from dominant to nondominant arm in extrinsic coordinates. The results suggest that when the dominant right arm is used in learning dynamics, the information could be represented in the left hemisphere with neural elements tuned to both the right arm and the left arm. In contrast, learning with the nondominant arm seems to rely on the elements in the nondominant hemisphere tuned only to movements of that arm.
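The two candidate transfer mappings above are easy to state explicitly: an extrinsic transfer reproduces the trained trajectory unchanged in external space, while an intrinsic (joint-based) transfer mirrors it about the body midline. A sketch, with the midline taken as the x = 0 plane (an assumption for illustration):

```python
import numpy as np

def extrinsic_transfer(trained_path):
    """Extrinsic recall: the untrained arm reproduces the same path in
    external space, unchanged."""
    return np.asarray(trained_path, float).copy()

def intrinsic_transfer(trained_path):
    """Intrinsic (joint-based) recall: homologous joint excursions on the
    other arm mirror the path about the body midline (assumed x = 0)."""
    p = np.asarray(trained_path, float).copy()
    p[:, 0] *= -1.0
    return p

# a simple right-arm path and its two candidate left-arm versions
path = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
same_in_space = extrinsic_transfer(path)
mirrored = intrinsic_transfer(path)
```

Testing subjects in both versions is what lets the experiment decide which representation actually generalizes; the result above favored the extrinsic mapping, and only in the D→ND direction.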

18.
We examined the role of gaze in a task where subjects had to reproduce the position of a remembered visual target with the tip of the index finger, referred to as pointing. Subjects were tested in 3 visual feedback conditions: complete darkness (dark), complete darkness with visual feedback of the finger position (finger), and with vision of a well-defined environment and feedback of the finger position (frame). Pointing accuracy increases with feedback about the finger or visual environment. In the finger and frame conditions, the 95% confidence regions of the variable errors have an ellipsoidal distribution with the main axis oriented toward the subjects' head. During the 1-s period when the target is visible, gaze is almost on target. However, gaze drifts away from the target relative to the subject in the delay period after target disappearance. In the finger and frame conditions, gaze returns toward the remembered target during pointing. In all 3 feedback conditions, the correlations between the variable errors of gaze and pointing position increase during the delay period, reaching highly significant values at the time of pointing. Our results demonstrate that gaze affects the accuracy of pointing. We conclude that the covariance between gaze and pointing position reflects a common drive for gaze and arm movements and an effect of gaze on pointing accuracy at the time of pointing. Previous studies interpreted the orientation of variable errors as indicative of a frame of reference used for pointing. Our results suggest that the orientation of the error ellipses toward the head is at least partly the result of gaze drift in the delay period.
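The 95% confidence regions of the variable errors mentioned above come from the endpoint covariance: its eigenvectors give the ellipse orientation, and the chi-square quantile for 2 degrees of freedom (5.991) scales the axes to 95% coverage. A sketch on synthetic endpoints:

```python
import numpy as np

def error_ellipse(endpoints):
    """Principal axes of the variable-error distribution: eigendecomposition
    of the endpoint covariance gives the ellipse orientation, and the 95%
    chi-square quantile for 2 dof (5.991) scales the full axis lengths."""
    P = np.asarray(endpoints, float)
    cov = np.cov(P, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    major_axis = evecs[:, -1]             # direction of largest variance
    lengths = 2.0 * np.sqrt(5.991 * evals)
    return major_axis, lengths

# endpoints scattered mainly along x: the major axis should lie near (1, 0)
rng = np.random.default_rng(2)
pts = rng.normal(0.0, [3.0, 0.5], size=(500, 2))
axis, lengths = error_ellipse(pts)
```

Comparing this fitted major-axis direction with the gaze-drift direction is the kind of analysis behind the conclusion that the head-oriented ellipses partly reflect delay-period gaze drift.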

19.
20.
J. F. Soechting, B. Ross. Neuroscience 1984, 13(2): 595-604
The coordinate representation of the sense of limb orientation was investigated psychophysically by asking subjects to match the orientation of the arm or of the forearm in several different coordinate representations. Movement of all degrees of freedom of one arm was permitted while movement of the other limb was restricted to the degree of freedom investigated in that particular experiment. Performance on the tasks was assessed by calculating the standard deviation of the difference in the angles of the two limbs. According to this criterion, we suggest that limb orientation is represented by the angular elevation of the limb and by the yaw angle, referred to a spatial reference frame.
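The performance criterion used in this study is simply the across-trial standard deviation of the between-limb angle difference. A short sketch with invented trial values:

```python
import numpy as np

def matching_sd(angles_limb1, angles_limb2):
    """Performance criterion used above: the standard deviation, across
    trials, of the angle difference between the two limbs. A lower value
    indicates a coordinate representation matched more consistently."""
    diff = np.asarray(angles_limb1, float) - np.asarray(angles_limb2, float)
    return diff.std()

# illustrative trials: one limb's presented angles and the other's matches
sd = matching_sd([30.0, 45.0, 60.0, 75.0], [32.0, 44.0, 63.0, 74.0])
```

Computing this separately for each candidate angle (elevation, yaw, joint angles) and comparing the values is what supports the conclusion that elevation and yaw in a spatial frame are the represented quantities.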


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号