Similar Articles
20 similar articles found (search time: 31 ms)
1.
Saccadic eye movements made to remembered locations in the dark show a distinct upward shift in the macaque monkey and a slight upward bias in humans (Gnadt et al. 1991). This upward bias, created in the visual-spatial mapping of a saccade, may be transmitted downstream to hand/touch movements. Comparing the two errors could reveal (a) the frames of reference used in each scenario and (b) the sources of the error within the brain: shared errors would suggest a common early planning stage, whereas distinct errors would suggest a later, effector-specific stage. Methods: Eight human subjects made touch responses on a high-resolution touch-screen monitor to both visual and remembered target locations, with a bite bar and chin rest restricting head movement during responses. All target locations were 20° vectors from the central starting position in horizontal, vertical and oblique planes of motion. Results: Subjects were accurate to both visual and remembered target locations, with little variance. Subject means showed no significant differences between control and memory trials; however, a distinct asymmetry was observed between cardinal and oblique planes during memory trials. Subjects consistently made errors to oblique locations when touching remembered locations, a pattern not evident in control conditions. This error pattern revealed a strong hypermetric tendency in oblique planes for touches made to a remembered location.

2.
In previous studies we observed a pattern of systematic directional errors when humans pointed to memorized visual target locations in two-dimensional (2-D) space. This directional error was also observed in the initial direction of slow movements toward visual targets or movements to kinesthetically defined targets in 2-D space. In this study we used a perceptual experiment where subjects decide whether an arrow points in the direction of a visual target in 2-D space and observed a systematic distortion in direction discrimination known as the "oblique effect." More specifically, direction discrimination was better for cardinal directions than for oblique. We then used an equivalent measure of direction discrimination in a task where subjects pointed to memorized visual target locations and showed the presence of a motor oblique effect. We finally modeled the oblique effect in the perceptual and motor task using a quadratic function. The model successfully predicted the observed direction discrimination differences in both tasks and, furthermore, the parameter of the model that was related to the shape of the function was not different between the motor and the perceptual tasks. We conclude that a similarly distorted representation of target direction is present for memorized pointing movements and perceptual direction discrimination.
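The quadratic model mentioned in the abstract above is not spelled out; the following is a minimal sketch of one plausible form, assuming the discrimination threshold grows quadratically with angular distance from the nearest cardinal axis. The function name and the `baseline`/`shape` parameters are illustrative, not the study's fitted values.

```python
import numpy as np

def oblique_threshold(direction_deg, baseline=2.0, shape=0.002):
    """Hypothetical quadratic model of direction discrimination.

    Thresholds are lowest at the cardinal directions (0, 90, 180, 270 deg)
    and grow quadratically with the angular distance from the nearest
    cardinal axis, peaking at the obliques (45, 135, ... deg).
    """
    dist = np.asarray(direction_deg) % 90.0   # offset past the last cardinal axis
    dist = np.minimum(dist, 90.0 - dist)      # distance to the *nearest* cardinal
    return baseline + shape * dist ** 2

# Discrimination is better (lower threshold) at cardinal than oblique directions:
print(float(oblique_threshold(0)))    # 2.0 (cardinal)
print(float(oblique_threshold(45)))   # larger at the oblique
```

A single shape parameter shared between the perceptual and motor fits is what lets the model test whether the two distortions have the same form.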

3.
The aim of this study was to further understand how the brain represents spatial information for shaping aiming movements to targets. Both behavioral and neurophysiological studies have shown that the brain represents spatial memory for reaching targets in an eye-fixed frame. To date, these studies have only shown how the brain stores and updates target locations for generating a single arm movement. But once a target's location has been computed relative to the hand to program a pointing movement, is that information reused for subsequent movements to the same location? Or is the remembered target location reconverted from eye to motor coordinates each time a pointing movement is made? To test between these two possibilities, we had subjects point twice to the remembered location of a previously foveated target after shifting their gaze to the opposite side of the target site before each pointing movement. When we compared the direction of pointing errors for the second movement to those of the first, we found that errors for each movement varied as a function of current gaze so that pointing endpoints fell on opposite sides of the remembered target site in the same trial. Our results suggest that when shaping multiple pointing movements to the same location the brain does not use information from the previous arm movement such as an arm-fixed representation of the target but instead mainly uses the updated eye-fixed representation of the target to recalculate its location into the appropriate motor frame.

4.
5.
Invariant patterns in the distribution of the endpoints of reaching movements have been used to suggest that two important movement parameters, direction and extent, are planned by two independent processing channels. This study examined this hypothesis by testing the effect of task conditions on variable errors of direction and extent of reaching movements. Subjects made reaching movements to 25 target locations in a horizontal workspace, in two main task conditions. In task 1, subjects looked directly at the target location on the horizontal workspace before closing their eyes and pointing to it. In task 2, arm movements were made to the same target locations in the same horizontal workspace, but target location was displayed on a vertical screen in front of the subjects. For both tasks, variable errors of movement extent (on-axis errors) were greater than those of movement direction (off-axis errors). As a result, the spatial distributions of endpoints about a given target usually formed an ellipse, with the principal axis oriented in the mean movement direction. Also, both on- and off-axis errors increased with movement amplitude. However, the magnitude of errors, especially on-axis errors, scaled differently with movement amplitude in the two task conditions. This suggests that variable errors of direction and extent can be modified independently by changing the nature of the sensorimotor transformations required to plan the movements. This finding is further evidence that the direction and extent of reaching movements are controlled independently by the motor system. Received: 8 October 1996 / Accepted: 14 January 1997

6.
In previous studies, we provided evidence for a directional distortion of the endpoints of movements to memorized target locations. This distortion was similar to a perceptual distortion in direction discrimination known as the oblique effect, so we named it the “motor oblique effect”. In this report we analyzed the directional errors during the evolution of the movement trajectory in memory-guided and visually guided pointing movements and compared them with directional errors in a perceptual experiment of arrow pointing. We observed that the motor oblique effect was present in the evolving trajectory of both memory-guided and visually guided reaching movements. In memory-guided pointing the motor oblique effect did not disappear during trajectory evolution, while in visually guided pointing the motor oblique effect disappeared with decreasing distance from the target and was smaller in magnitude than the perceptual oblique effect and the memory motor oblique effect early after movement initiation. The motor oblique effect in visually guided pointing increased when reaction time was small and disappeared with larger reaction times. The results are best explained by the hypothesis that a low-level oblique effect is present for visually guided pointing movements and is corrected by a mechanism that does not depend on visual feedback from the evolving trajectory and might even be completed during movement planning. A second, cognitive oblique effect is added in the perceptual estimation of direction and affects memory-guided pointing movements. It is finally argued that the motor oblique effect can be a useful probe for the study of perception–action interaction.

7.
When reaching to remembered target locations after an intervening eye movement, a systematic pattern of error is found, indicating eye-centred updating of visuospatial memory. Here we investigated whether implicit targets, defined only by allocentric visual cues, are also updated in an eye-centred reference frame, as explicit targets are. Participants viewed vertical bars separated by varying distances, and horizontal lines of equivalently varying lengths, implying a “target” location at the midpoint of the stimulus. After determining the implied “target” location from the allocentric stimuli alone, participants saccaded to an eccentric location and then reached to the remembered “target” location. Irrespective of the type of stimulus, reaching errors to these implicit targets were gaze-dependent and did not differ from those found when reaching to remembered explicit targets. Implicit target locations are thus coded and updated as a function of relative gaze direction with respect to the implied locations, just as explicit targets are, even though no target is specifically represented.

8.
Subjects who are in an enclosed chamber rotating at constant velocity feel physically stationary but make errors when pointing to targets. Reaching paths and endpoints are deviated in the direction of the transient inertial Coriolis forces generated by their arm movements. By contrast, reaching movements made during natural, voluntary torso rotation seem to be accurate, and subjects are unaware of the Coriolis forces generated by their movements. This pattern suggests that the motor plan for reaching movements uses a representation of body motion to prepare compensations for impending self-generated accelerative loads on the arm. If so, stationary subjects who are experiencing illusory self-rotation should make reaching errors when pointing to a target. These errors should be in the direction opposite the Coriolis accelerations their arm movements would generate if they were actually rotating. To determine whether such compensations exist, we had subjects in four experiments make visually open-loop reaches to targets while they were experiencing compelling illusory self-rotation and displacement induced by rotation of a complex, natural visual scene. The paths and endpoints of their initial reaching movements were significantly displaced leftward during counterclockwise illusory rotary displacement and rightward during clockwise illusory self-displacement. Subjects reached in a curvilinear path to the wrong place. These reaching errors were opposite in direction to the Coriolis forces that would have been generated by their arm movements during actual torso rotation. The magnitude of path curvature and endpoint errors increased as the speed of illusory self-rotation increased. In successive reaches, movement paths became straighter and endpoints more accurate despite the absence of visual error feedback or tactile feedback about target location. When subjects were again presented with a stationary scene, their initial reaches were indistinguishable from pre-exposure baseline, indicating a total absence of aftereffects. These experiments demonstrate that the nervous system automatically compensates in a context-specific fashion for the Coriolis forces associated with reaching movements.
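The force the abstract describes is the Coriolis term of rotating-frame dynamics, a_cor = −2 ω × v. A minimal sketch (axes and numerical values are illustrative): real counterclockwise torso rotation would push a forward reach to the right, so subjects compensating for a rotation that was only illusory erred in the opposite, leftward direction.

```python
import numpy as np

def coriolis_acceleration(omega, velocity):
    """Coriolis acceleration (per unit mass) on a limb moving with
    `velocity` (m/s) in a frame rotating at angular velocity `omega`
    (rad/s), both 3-vectors: a_cor = -2 * (omega x v)."""
    return -2.0 * np.cross(omega, velocity)

omega = np.array([0.0, 0.0, 1.0])  # counterclockwise rotation, seen from above
v = np.array([0.0, 0.5, 0.0])      # arm reaching straight ahead (+y)
a = coriolis_acceleration(omega, v)
print(a[0])  # positive x component: the reach is deflected rightward
```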

9.
Forty-seven normal subjects performed two-dimensional arm movements on a digitizer board using a mouse device. The movements were projected on a computer monitor. Subjects were instructed to move the mouse using the whole arm from a center position to a peripheral target so that the projected movement would pass over the target without stopping on it. A large number of targets (360) were used to cover the entire directional continuum. The direction of the arm movement was the parameter of interest; it was measured at an initial position, at one third of the distance towards the target, and in the vicinity of the target. Four conditions of delay between target presentation and movement execution were used (0, 2, 4, 6 s). A systematic directional error was observed at the initial portion of the trajectory. This error resulted from a clustering of movement directions on an axis that was perpendicular to the axis of the resting forearm before movement onset. This pattern of errors can be explained by the initial inertial anisotropy of the arm. As the trajectory evolved, a different directional error emerged, resulting from a clustering of movement directions in two orthogonal axes. This pattern of directional error increased in amplitude as the delay increased, in contrast to the error at the initial portion of the trajectory, which remained invariant with increasing delay. Finally, the information transmitted by the movement direction was shown to increase with the evolution of the trajectory. The increase in delay resulted in a decrease in directional-information transmission. It is proposed that the directional bias towards the end of the movement trajectory might reflect the action of "movement primitives", that is, patterns of muscle activation resulting from spinal interneuronal activation. It is further proposed that the directional bias observed in the vicinity of the target might reflect a loss of cortical directional information with increasing delay between target presentation and movement onset.

10.
At some stage in the process of a sensorimotor transformation for a reaching movement, information about the current position of the hand and information about the location of the target must be encoded in the same frame of reference to compute the hand-to-target difference vector. Two main hypotheses have been proposed regarding this reference frame: an eye-centered and a body-centered frame. Here we evaluated these hypotheses using the pointing errors that subjects made when planning and executing arm movements to memorized targets starting from various initial hand positions while keeping gaze fixed in various directions. One group of subjects (n = 10) was tested without visual information about hand position during movement planning (unseen-hand condition); another group (n = 8) was tested with hand and target position simultaneously visible before movement onset (seen-hand condition). We found that both initial hand position and gaze fixation direction had a significant effect on the magnitude and direction of the pointing error. Errors were significantly smaller in the seen-hand condition. For both conditions, though, a reference frame analysis showed that the errors arose at an eye- or hand-centered stage or both, but not at a body-centered stage. As a common reference frame is required to specify a movement vector, these results suggest that an eye-centered mechanism is involved in integrating target and hand position in programming reaching movements. We discuss how simple gain elements modulating the eye-centered target and hand-position signals can account for these results.
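The gain-element account in the final sentences of the abstract above can be sketched in one dimension. All names and gain values here are hypothetical; the point is only that re-expressing target and hand relative to gaze and scaling each signal by a slightly different gain yields gaze-dependent errors like those reported.

```python
def planned_vector(target, hand, gaze, g_target=1.0, g_hand=1.0):
    """Toy 1-D version of the eye-centered difference-vector scheme.

    Target and hand positions (body coordinates, deg) are re-expressed
    relative to gaze direction, scaled by hypothetical gain elements,
    and subtracted to yield the hand-to-target movement vector.
    """
    target_eye = g_target * (target - gaze)   # eye-centered target signal
    hand_eye = g_hand * (hand - gaze)         # eye-centered hand signal
    return target_eye - hand_eye

# With unit gains, the planned vector is exact and independent of gaze:
print(planned_vector(10.0, -5.0, gaze=0.0))   # 15.0
print(planned_vector(10.0, -5.0, gaze=20.0))  # 15.0
# A small gain imbalance makes the plan vary with gaze, as observed:
print(planned_vector(10.0, -5.0, gaze=20.0, g_target=0.9))  # larger error
```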

11.
The directional accuracy of pointing arm movements to remembered targets under increasing memory load was investigated using a modified version of Sternberg's context-recall memory-scanning task. Series of 2, 3 or 4 targets (chosen randomly from a set of 16 targets around a central starting point in 2D space) were presented sequentially, followed by a cue target randomly selected from the series excluding the last one. The subject had to move to the location of the next target in the series. Correct movements were those that ended closer to the instructed target than to any other target in the series; all other movements were considered serial order errors. Increasing memory load resulted in a large decrease in directional accuracy or, equivalently, in the directional information transmitted by the motor system. The constant directional error varied with target direction in a systematic fashion, reproducing previous results and suggesting the same systematic distortion of the representation of direction in different memory delay tasks. The constant directional error was not altered by increasing memory load, contradicting our hypothesis that it might reflect a cognitive strategy for better remembering spatial locations in conditions of increasing uncertainty. Increasing memory load resulted in a linear increase of mean response time and variable directional error and a non-linear increase in the percentage of serial order errors. Also, the percentage of serial order errors for the last presented target in the series was smaller (recency effect). The difference between serial order and directional spatial accuracy is supported by neurophysiological and functional anatomical evidence of working memory subsystems in the prefrontal cortex. This work was supported by internal funding from Aeginition University Hospital.

12.
13.
14.
We investigated the accuracy with which, in the absence of vision, one can return to a 2D target location that had previously been identified by a guided movement. A robotic arm guided the participant's hand to a target (locating motion) and away from it (homing motion). Then, the participant pointed freely toward the remembered target position. Two experiments manipulated separately the kinematics of the locating and homing motions. Some robot motions followed a straight path with the bell-shaped velocity profile that is typical of natural movements. Other motions followed curved paths, or had strong acceleration and deceleration peaks. Current motor theories of perception suggest that pointing should be more accurate when the locating and homing motions mimic natural movements. This expectation was not borne out by the results, because amplitude and direction errors were almost independent of the kinematics of the locating and homing phases. In both experiments, participants tended to overshoot the target positions along the lateral directions. In addition, pointing movements towards oblique targets were attracted by the closest diagonal (oblique effect). This error pattern was robust not only with respect to the manner in which participants located the target position (perceptual equivalence), but also with respect to the manner in which they executed the pointing movements (motor equivalence). Because of the similarity of these results with those of previous studies on visual pointing, it is argued that the observed error pattern is basically determined by the idiosyncratic properties of the mechanisms whereby space is represented internally.

15.
We tested between two coding mechanisms that the brain may use to retain distance information about a target for a reaching movement across vergence eye movements. If the brain were to encode a retinal disparity representation (retinal model), i.e., target depth relative to the plane of fixation, each vergence eye movement would require an active update of this representation to preserve depth constancy. Alternatively, if the brain were to store an egocentric distance representation of the target by integrating retinal disparity and vergence signals at the moment of target presentation, this representation should remain stable across subsequent vergence shifts (nonretinal model). We tested between these schemes by measuring errors of human reaching movements (n = 14 subjects) to remembered targets, briefly presented before a vergence eye movement. For comparison, we also tested their directional accuracy across version eye movements. With intervening vergence shifts, the memory-guided reaches showed an error pattern that was based on the new eye position and on the depth of the remembered target relative to that position. This suggests that target depth is recomputed after the gaze shift, supporting the retinal model. Our results also confirm earlier literature showing retinal updating of target direction. Furthermore, regression analyses revealed updating gains close to one for both target depth and direction, suggesting that the errors arise after the updating stage, during the subsequent reference frame transformations that are involved in reaching.
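The two storage schemes being contrasted can be illustrated with a toy depth calculation (function names and numbers are ours; note that the study's retinal model includes active updating, which this deliberately stale version omits in order to show why updating is needed at all).

```python
def stale_retinal_reach(target_depth, fix1, fix2):
    """Retinal-disparity code WITHOUT updating: depth is stored relative
    to the first fixation plane, then wrongly re-anchored to the second
    fixation plane after the vergence shift."""
    return fix2 + (target_depth - fix1)

def egocentric_reach(target_depth, fix1, fix2):
    """Egocentric (nonretinal) code: absolute distance integrated from
    disparity and vergence at presentation, immune to later shifts."""
    return target_depth

# A target at 0.40 m viewed while fixating 0.40 m; vergence then shifts to 0.60 m:
print(stale_retinal_reach(0.40, 0.40, 0.60))  # 0.6 -> error equal to the shift
print(egocentric_reach(0.40, 0.40, 0.60))     # 0.4 -> unaffected
```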

16.
This study investigates coordinative constraints when participants execute discrete bimanual tool use actions. Participants moved two levers to targets that were presented either near the proximal parts of the levers or near the distal tips of the levers. In the first case, the tool transformation (i.e. the relationship between hand movement direction and target direction) was compatible, whereas in the second case, it was incompatible. We hypothesized that an egocentric constraint (i.e. a preference for moving the hands and tools in a mirror-symmetrical fashion) would be dominant when targets are presented near the proximal parts of the levers because in this situation, movements can be coded in terms of body-related coordinates. Furthermore, an allocentric constraint (i.e. a preference to move the hands in the same (parallel) direction in extrinsic space) was expected to be dominant when one or both of the targets are presented near the distal parts of the levers because in this condition, movements have to be coded in an external reference frame. The results show that when both targets are presented near the proximal parts of the levers, participants are faster and produce fewer errors with mirror-symmetrical than with parallel movements. Furthermore, the RT mirror-symmetry advantage is eliminated when both targets are presented near the distal parts of the levers, and reversed when the target for one lever is presented near its distal part and the target for the other lever near its proximal part. These results show that the dominance of egocentric and allocentric coordinative constraints in bimanual tool use depends on whether movements are coded in terms of body-related coordinates or in an external reference frame.

17.
To investigate how the sensorimotor systems of eye and hand use position, velocity, and timing information of moving targets, we conducted a series of three experiments. Subjects performed combined eye-hand catch-up movements toward visual targets that moved with step-ramp-like velocity profiles. Visual feedback of the hand was prevented by blanking the target at the onset of the hand movement. A multiple regression was used to determine the effects of position, velocity, and timing accessed before each movement on the movement amplitudes of eye and hand. The following results were obtained: 1. The predictive strategy of eye movements could be modeled by a linear regression on the basis of the position error and the target velocity. This was not the case for hand movements, for which there was a significant partial correlation between the movement amplitude and the product of target velocity and movement duration. This correlation was not observed for eye movements, suggesting that the predictive strategy of hand movements takes movement duration into account, in contrast to the strategy used in eye movements. 2. To determine whether the movement amplitudes of eye and hand depend on a categorical classification between a discrete number of movement types, we compared an experiment in which target position and velocity were distributed continuously with an experiment using only four different combinations of target position and velocity. No systematic differences between these experiments were observed. This shows that the system output is a function of continuous, interval-scaled variables rather than a function of discrete categorical variables. 3. We also analyzed the component of the movement amplitudes not explained by the regression, i.e., the residual error. The residual errors between subsequent trials were correlated more strongly for eye than for hand movements, suggesting that short-term temporal fluctuations of the predictive strategy were stronger for the eye than for the hand.
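The regression logic in result 1 can be illustrated on synthetic data (the data-generating model, all numbers, and the seed are ours, not the study's): if the hand's amplitude incorporates the distance the target travels during the movement, then in a multiple regression the weight falls on the velocity-by-duration product rather than on velocity alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
pos_err = rng.uniform(2, 10, n)        # initial position error (deg)
vel = rng.uniform(5, 25, n)            # target velocity (deg/s)
duration = rng.uniform(0.2, 0.5, n)    # movement duration (s)

# Assumed hand strategy: amplitude covers the position error plus the
# distance the target travels during the movement, plus motor noise.
amplitude = pos_err + vel * duration + rng.normal(0, 0.1, n)

# Regress amplitude on an intercept, position error, velocity, and the
# velocity-by-duration product (as in the abstract's partial-correlation test).
X = np.column_stack([np.ones(n), pos_err, vel, vel * duration])
beta, *_ = np.linalg.lstsq(X, amplitude, rcond=None)
print(np.round(beta, 2))  # the weight lands on vel*duration, not on vel alone
```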

18.
The principal goal of our study is to gain insight into the representation of peripersonal space. Two different experiments were conducted. In the first experiment, subjects were asked to represent the principal anatomical reference planes by drawing ellipses in the sagittal, frontal and horizontal planes. The three-dimensional hand-drawing movements, which were performed with and without visual guidance, were considered the expression of a cognitive process per se: the peripersonal space representation for action. We measured errors in the spatial orientation of ellipses with regard to the requested reference planes. For ellipses drawn without visual guidance, with eyes open and eyes closed, orientation errors were related to the reference planes: errors were minimal for the sagittal plane and maximal for the horizontal plane. These disparities in errors were considerably reduced when subjects drew using a visual guide. These findings imply that different planes are centrally represented, and are characterized by different errors, when subjects use a body-centered frame for performing the movement, and suggest that the representation of peripersonal space may be anisotropic. However, this representation can be modified when subjects use an environment-centered reference frame to produce the movement. In the second experiment, subjects were instructed to represent, with eyes open and eyes closed, the sagittal, frontal and horizontal planes by pointing to virtual targets located in these planes. Disparities in orientation errors measured for pointing were similar to those found for drawing, implying that the sensorimotor representation of reference planes was not constrained by the type of motor task. Moreover, arm postures measured at pointing endpoints and at comparable spatial locations in drawing were strongly correlated. These results suggest that the similar patterns of errors and the arm posture correlation for drawing and pointing may be the consequence of using a common space representation and reference frame. These findings are consistent with the assumption of an anisotropic action-related representation of peripersonal space when the movement is performed in a body-centered frame.

19.
Errors in pointing are due to approximations in sensorimotor transformations
1. We define an extrinsic frame of reference to represent the location of a point in extrapersonal space relative to a human subject's shoulder, and we define an intrinsic frame of reference to represent the orientation of the arm and forearm. 2. We examined the relations between coordinates in the extrinsic and intrinsic frames of reference under two experimental conditions: when subjects made inaccurate movements by pointing to virtual targets in the dark and when they made accurate movements by pointing to actual targets in the light. 3. When subjects made inaccurate movements, there was a close-to-linear relationship between the orientation angles of the arm (intrinsic coordinates) at its final position and the extrinsic coordinates of the target. When they made accurate movements, these relationships were more nonlinear. 4. Specifically, arm and forearm elevations depended principally on target distance and elevation, whereas the two yaw angles depended mainly on the target's azimuth. 5. We propose that errors in pointing occur because subjects implement a linear approximation to the transformation from extrinsic to intrinsic coordinates and that this transformation is one step in the process of transforming a visually derived representation of target location into an appropriate pattern of muscle activity.
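Point 5 can be illustrated with a toy planar two-joint arm (the link lengths, workspace, and affine fit are our assumptions, not the authors' model): fitting a linear (affine) approximation to the exact extrinsic-to-intrinsic mapping, i.e. the inverse kinematics, and pointing with it produces small but systematic endpoint errors.

```python
import numpy as np

L_UPPER, L_FORE = 0.30, 0.35  # upper-arm and forearm lengths (m), illustrative

def inverse_kinematics(x, y):
    """Exact extrinsic-to-intrinsic transformation: target (x, y) in
    shoulder-centered coordinates -> (shoulder, elbow) joint angles."""
    cos_elbow = (x**2 + y**2 - L_UPPER**2 - L_FORE**2) / (2 * L_UPPER * L_FORE)
    elbow = np.arccos(np.clip(cos_elbow, -1.0, 1.0))
    shoulder = np.arctan2(y, x) - np.arctan2(L_FORE * np.sin(elbow),
                                             L_UPPER + L_FORE * np.cos(elbow))
    return np.array([shoulder, elbow])

def forward_kinematics(shoulder, elbow):
    """Where the fingertip actually lands for a given arm posture."""
    return np.array([L_UPPER * np.cos(shoulder) + L_FORE * np.cos(shoulder + elbow),
                     L_UPPER * np.sin(shoulder) + L_FORE * np.sin(shoulder + elbow)])

# Fit an affine (linear) approximation to the inverse kinematics over a
# grid of reachable targets, mimicking the proposed internal shortcut.
targets = np.array([(x, y) for x in np.linspace(0.20, 0.45, 8)
                    for y in np.linspace(0.00, 0.30, 8)])
angles = np.array([inverse_kinematics(x, y) for x, y in targets])
A = np.column_stack([targets, np.ones(len(targets))])
coef, *_ = np.linalg.lstsq(A, angles, rcond=None)

# Pointing with the affine shortcut misses the target by a systematic amount.
tx, ty = 0.42, 0.27
approx_angles = np.array([tx, ty, 1.0]) @ coef
error_m = np.linalg.norm(forward_kinematics(*approx_angles) - np.array([tx, ty]))
print(error_m > 0)  # True: the linear approximation is not exact
```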

20.
Previous research has shown that reach endpoints vary with the starting position of the reaching hand and the location of the reach target in space. We examined the effect of the movement direction of a proprioceptive target-hand, immediately preceding a reach, on reach endpoints to that target. Participants reached to visual, proprioceptive (left target-hand), or visual-proprioceptive targets (left target-hand illuminated for 1 s prior to reach onset) with their right hand. Six sites served as starting and final target locations (35 target movement directions in total). Reach endpoints did not vary with the movement direction of the proprioceptive target, but instead appeared to be anchored to some other reference (e.g., the body). We also compared reach endpoints across the single and dual modality conditions. Overall, the pattern of reaches for visual-proprioceptive targets resembled that for proprioceptive targets, while reach precision resembled that for visual targets. We did not, however, find evidence for integration of vision and proprioception based on a maximum-likelihood estimator in these tasks.
