Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
The present study compared the contribution of visual information of hand and target position to the online control of goal-directed arm movements. Their respective contributions were assessed by examining how human subjects reacted to a change of the position of either their seen hand or the visual target near the onset of the reaching movement. Subjects, seated head-fixed in a dark room, were instructed to look at and reach with a pointer towards visual targets located in the fronto-parallel plane at different distances to the right of the starting position. LEDs mounted on the tip of the pointer were used to provide true or erroneous visual feedback about hand position. In some trials, either the target or the pointer LED that signalled the actual hand position was shifted 4.5 cm to the left or to the right during the ocular saccade towards the target. Because of saccadic suppression, subjects did not perceive these displacements, which occurred near arm movement onset. The results showed that modifications of arm movement amplitude appeared, on average, 150 ms earlier and reached a greater extent (mean difference=2.7 cm) when there was a change of target position than when a change of the seen hand position occurred. These findings highlight the weight of target position information in the online control of arm movements. Visual information relative to hand position may be less contributive because proprioception also provides information about limb position.

2.
Summary The spatial and temporal organization of unrestricted limb movements directed to small visual targets was examined in two separate experiments. Videotape records of the subjects' performance allowed us to analyze the trajectory of the limb movement through 3-dimensional space. Horizontal eye movements during reaching were measured by infrared corneal reflection. In both experiments, the trajectories of the different reaches approximated straight line paths and the velocity profile revealed an initial rapid acceleration followed by a prolonged period of deceleration. In Experiment 1, in which the target light was presented to the right or left of a central fixation point at either 10° or 20° eccentricity, the most consistent differences were observed between reaches directed across the body axis to targets presented in the contralateral visual field and reaches directed at ipsilateral targets. Ipsilateral reaches were initiated more quickly, were completed more rapidly, and were more accurate than contralateral reaches. While these findings suggest that hemispherically organized neural systems are involved in the programming of visually guided limb movements, it was not clear whether the inefficiency of the contralateral movements was due to reaching across the body axis or reaching into the visual hemifield contralateral to the hand being used. Therefore, in Experiment 2, the position of the fixation point was varied such that the effects of visual field and body axis could be disentangled. In this experiment, the kinematics of the reaching movement were shown to be independent of the point of visual fixation and varied only as a function of the laterality of the target position relative to the body axis. This finding suggests that the kinematics of a reaching movement are determined by differences in the processing of neural systems associated with motor output, after the target has been localized in space.
The effect of target laterality on response latency and accuracy, however, could not be attributed to a single frame of reference, or to a simple additive effect of both. These findings illustrate the complex integration of visual spatial information which must take place in order to reach accurately to goal objects in extrapersonal space. Comparison of ocular and manual performance revealed a close relationship between movement latency for both motor systems. Thus, rightward-going eye movements to a given target were initiated more quickly when accompanied by reaches with the right hand than when they were accompanied by reaches with the left hand. The finding that the latency of eye movements in one direction was influenced by which hand was being used to reach suggests that reaching toward a target under visual control involves a common integration of both eye and hand movements. This study was supported by grant no. MA-7269 from the Medical Research Council of Canada to M. A. Goodale.

3.
Neurons in the parietal reach region (PRR) have been implicated in the sensory-to-motor transformation required for reaching toward visually defined targets. The neurons in each cortical hemisphere might be specifically involved in planning movements of just one limb, or the PRR might code reach endpoints generically, independent of which limb will actually move. Previous work has shown that the preferred directions of PRR neurons are similar for right and left limb movements but that the amplitude of modulation may vary greatly. We now test the hypothesis that frames of reference and eye and hand gain field modulations will, like preferred directions, be independent of which hand moves. This was not the case. Many neurons show clear differences in both the frame of reference as well as in direction and strength of gain field modulations, depending on which hand is used to reach. The results suggest that the information that is conveyed from the PRR to areas closer to the motor output (the readout from the PRR) is different for each limb and that individual PRR neurons contribute either to controlling the contralateral limb or to bimanual limb control.

4.
Posterior parietal cortex (PPC) has been implicated in the integration of visual and proprioceptive information for the planning of action. We previously reported that single-pulse transcranial magnetic stimulation (TMS) over dorsal–lateral PPC perturbs the early stages of spatial processing for memory-guided reaching. However, our data did not distinguish whether TMS disrupted the reach goal or the internal estimate of initial hand position needed to calculate the reach vector. To test between these hypotheses, we investigated reaching in six healthy humans during left and right parietal TMS while varying visual feedback of the movement. We reasoned that if TMS were disrupting the internal representation of hand position, visual feedback from the hand might still recalibrate this signal. We tested four viewing conditions: 1) final vision of hand position; 2) full vision of hand position; 3) initial and final vision of hand position; and 4) middle and final vision of hand position. During the final vision condition, left parietal stimulation significantly increased endpoint variability, whereas right parietal stimulation produced a significant leftward shift in both visual fields. However, these errors significantly decreased with visual feedback of the hand during both planning and control stages of the reach movement. These new findings demonstrate that 1) visual feedback of hand position during the planning and early execution of the reach can recalibrate the perturbed signal and, importantly, 2) TMS over dorsal–lateral PPC does not disrupt the internal representation of the visual goal, but rather the reach vector, or more likely the sense of initial hand position that is used to calculate this vector.

5.
Two experiments examined the integration of visual and proprioceptive information concerning the location of an unseen hand, using a mirror positioned along the midsagittal plane. In experiment 1, participants tapped the fingers of both hands in synchrony, while viewing the mirror-reflection of their left hand. After 6 s, participants made reaching movements to a target with their unseen right hand behind the mirror. Reaches were accurate when visually and proprioceptively specified hand positions were congruent prior to the reach, but significantly biased by vision when the visual location conflicted with the real location. This effect was independent of the target location and depended strongly upon the relative position of the mirror-reflected hand. In experiment 2, participants made reaching movements following 4, 8, or 12 s active visuomotor or passive visual exposure to the mirror, or following passive exposure without the mirror. Reaching was biased more by the visual location following active visuomotor compared to passive visual exposure, and this bias increased with the duration of visual exposure. These results suggest that the felt position of the hand depends upon an integrated, weighted sum of visual and proprioceptive information. Visual information is weighted more strongly under active visuomotor than passive visual exposure, and with increasing exposure duration to the mirror-reflected hand.

6.
Many studies have shown that reaching movements to visual targets can rapidly adapt to altered visual feedback of hand motion (i.e., visuomotor rotation) and generalize to new target directions. This generalization is thought to reflect the acquisition of a neural representation of the novel visuomotor environment that is localized to the particular trained direction. In these studies, participants perform movements to a small number of target locations repeatedly. However, it is unclear whether adaptation and generalization are comparable when target locations are constantly varied and participants reach to visual targets one time only. Here, we compared performance under a 30° counter-clockwise visuomotor rotation for reaches to four targets spaced 90° apart, each performed 18 times across four areas of the workspace (repeated practice, RP), with one-time-only reaching movements to 72 targets spaced 5° apart (varied practice, VP). For both training groups, participants performed 18 reaches to radial targets (either at the repeated or varied location) in a specific area of the workspace (i.e., one of four quadrants) before reaching in the adjacent workspace. We found that the RP group adapted more completely compared to the VP group. Conversely, the VP group generalized to new target directions more completely when reaching without cursor feedback compared to the RP group. This suggests that RP and VP follow a mainly common pattern of adaptation and generalization represented in the brain, with benefits of faster adaptation with RP and more complete generalization with VP.

7.
This research examined the electromyographic (EMG) activity of shoulder and elbow muscles during reaching movements of the upper limb. Subjects performed goal-directed arm movements in the horizontal plane. Movements which varied in amplitude, speed, and direction were performed in different sections of the workspace. EMG activity was recorded from the pectoralis major, posterior deltoid, biceps brachii short head, brachioradialis, triceps brachii long head, and triceps brachii lateral head; motion recordings were obtained with an optoelectric system. The analysis focused on the magnitude and timing of opposing muscle groups at the shoulder and elbow joints. For hand movements within any given direction of the workspace, kinematic manipulations changed agonist and antagonist EMG magnitude and intermuscle timing in a manner consistent with previous single-joint findings. To produce reaching movements in different directions and areas of the workspace, shoulder and elbow agonist EMG magnitude increased for those hand motions which required higher angular velocities, while the timing between opposing muscle groups at each joint was invariant. Received: 11 January 1996 / Accepted: 24 February 1997

8.
In order to study prehension in a reproducible manner, we trained monkeys to perform a task in which rectangular, spherical, and cylindrical objects were grasped, lifted, held, and lowered in response to visual cues. The animal’s hand movements were monitored using digital video, together with simultaneously recorded spike trains of neurons in primary somatosensory cortex (S-I) and posterior parietal cortex (PPC). Statistically significant task-related modulation of activity occurred in 78% of neurons tested in the hand area; twice as many cells were facilitated during object acquisition as were depressed. Cortical neurons receiving inputs from tactile receptors in glabrous skin of the fingers and palm, hairy skin of the hand dorsum, or deep receptors in muscles and joints of the hand modulated their firing rates during prehension in consistent and reproducible patterns. Spike trains of individual neurons differed in duration and amplitude of firing, the particular hand behavior(s) monitored, and their sensitivity to the shape of the grasped object. Neurons were classified by statistical analysis into groups whose spike trains were tuned to single task stages, spanned two successive stages, or were multiaction. The classes were not uniformly distributed in specific cytoarchitectonic fields, nor among particular somatosensory modalities. Sequential deformation of parts of the hand as the task progressed was reflected in successive responses of different members of this population. The earliest activity occurred in PPC, where 28% of neurons increased firing prior to hand contact with objects; such neurons may participate in anticipatory motor control programs. Activity shifted rostrally to S-I as the hand contacted the object and manipulated it. The shape of the grasped object had the strongest influence on PPC cells. 
The results suggest that parietal neurons monitor hand actions during prehension, as well as the physical properties of the grasped object, by shifting activity between populations responsive to hand shaping, grasping, and manipulatory behaviors. Received: 1 October 1998 / Accepted: 4 May 1999

9.
How visual feedback contributes to the on-line control of fast reaching movements is still a matter of considerable debate. Whether feedback is used continuously throughout movements or only in the "slow" end-phases of movements remains an open question. In order to resolve this question, we applied a perturbation technique to measure the influence of visual feedback from the hand at different times during reaching movements. Subjects reached to touch targets in a virtual 3D space, with visual feedback provided by a small virtual sphere that moved with a subject's fingertip. Small random perturbations were applied to the position of the virtual fingertip at two different points in the movement, either at 25% or 50% of the total movement extent. Despite the fact that subjects were unaware of the perturbations, their hand trajectories showed smooth and accurate corrections. Detectable responses were observed within an average of 160 ms after perturbations, and as early as 60% of the distance to the target. Response latencies were constant across different perturbation times and movement speed conditions, suggesting that a fixed sensori-motor delay is the limiting factor. The results provide direct evidence that the human brain uses visual feedback from the hand in a continuous fashion to guide fast reaching movements throughout their extent.
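The continuous-guidance claim in the abstract above can be illustrated with a toy discrete-time controller. This is a sketch under stated assumptions, not the study's model: the 160 ms delay comes from the abstract, but the time step, correction gain, and movement parameters are hypothetical.

```python
DT = 0.01        # simulation time step, s (assumption)
DELAY = 0.16     # fixed visuomotor latency, ~160 ms as reported above
GAIN = 8.0       # hypothetical proportional correction gain, 1/s

def simulate(perturb=0.0, duration=0.6, perturb_time=0.15, target=0.3):
    """Hand position over time under delayed continuous visual feedback."""
    steps = int(duration / DT)
    delay_steps = int(DELAY / DT)
    seen_offset = []
    hand = 0.0
    trajectory = []
    for i in range(steps):
        t = i * DT
        # cursor (seen fingertip) perturbation applied partway through
        seen_offset.append(perturb if t >= perturb_time else 0.0)
        # the controller only "sees" feedback that is DELAY seconds old
        delayed = seen_offset[i - delay_steps] if i >= delay_steps else 0.0
        # smooth proportional correction toward the target, counteracting
        # the delayed, visually sensed offset
        hand += DT * GAIN * (target - hand - delayed)
        trajectory.append(hand)
    return trajectory

unperturbed = simulate(perturb=0.0)
perturbed = simulate(perturb=0.01)
# the trajectories are identical until perturb_time + DELAY has elapsed
first_divergence = next(i for i, (a, b) in
                        enumerate(zip(unperturbed, perturbed)) if a != b)
```

Because the correction term uses feedback exactly `DELAY` seconds old, the perturbed trajectory first deviates at `perturb_time + DELAY` (index 31 here), after which the correction unfolds smoothly, mirroring the fixed-latency, continuous corrections the abstract describes.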

10.
There is considerable evidence that targets for action are represented in a dynamic gaze-centered frame of reference, such that each gaze shift requires an internal updating of the target. Here, we investigated the effect of eye movements on the spatial representation of targets used for position judgements. Participants had their hand passively placed to a location, and then judged whether this location was left or right of a remembered visual or remembered proprioceptive target, while gaze direction was varied. Estimates of position of the remembered targets relative to the unseen position of the hand were assessed with an adaptive psychophysical procedure. These positional judgements significantly varied relative to gaze for both remembered visual and remembered proprioceptive targets. Our results suggest that relative target positions may also be represented in eye-centered coordinates. This implies similar spatial reference frames for action control and space perception when positions are coded relative to the hand.

11.
We previously reported that the kinematics of reaching movements reflect the superimposition of two separate control mechanisms specifying the hand's spatial trajectory and its final equilibrium position. We now asked whether the brain maintains separate representations of the spatial goals for planning hand trajectory and final position. One group of subjects learned a 30° visuomotor rotation about the hand's starting point while performing a movement reversal task ("slicing") in which they reversed direction at one target and terminated movement at another. This task required accuracy in acquiring a target mid-movement. A second group adapted while moving to, and stabilizing at, a single target ("reaching"). This task required accuracy in specifying an intended final position. We examined how learning in the two tasks generalized both to movements made from untrained initial positions and to movements directed toward untrained targets. Shifting initial hand position had differential effects on the location of reversals and final positions: Trajectory directions remained unchanged and reversal locations were displaced in slicing, whereas final positions of both reaches and slices were relatively unchanged. Generalization across directions in slicing was consistent with a hand-centered representation of the desired reversal point, as demonstrated previously for this task, whereas the distributions of final positions were consistent with an eye-centered representation, as found previously in studies of pointing in three-dimensional space. Our findings indicate that the intended trajectory and final position are represented in different coordinate frames, reconciling previous conflicting claims of hand-centered (vectorial) and eye-centered representations in reach planning.

12.
Reaching to grasp is of fundamental importance to primate motor behavior and requires coordinating hand preshaping with limb transport and grasping. We aimed to clarify the role of cerebellar output via the magnocellular red nucleus (RNm) in the control of reaching to grasp. Rubrospinal fibers originating from RNm constitute one pathway by which cerebellar output influences spinal circuitry directly. We recorded discharge from individual forelimb RNm neurons while monkeys performed a reach-to-grasp task and two tasks that were similar to the reach-to-grasp task in trajectory, amplitude, and direction but did not include a grasp. One of these, the device task, elicited reaches while holding a handle, and the other, the free-reach task, elicited reaches that did not require any specific hand use for task performance. The results demonstrate that coordinated whole-limb reaching movements are associated with large discharge modulations of RNm neurons predominantly when hand use is included. Therefore RNm neurons can at best only make a minor contribution to the control of reaching movements that lack hand use. We evaluated relations between the discharge of individual RNm neurons and electromyographic (EMG) activity of forelimb muscles during the reach-to-grasp task by comparing times of peak RNm discharge to times of peak EMG activity. The results are consistent with the view that RNm discharge may contribute to EMG activity of both distal and proximal muscles during reaching to grasp, especially digit extensor and limb elevation muscles. Relations between the discharge of individual RNm neurons and movements of the metacarpophalangeal (MCP), wrist, elbow, and shoulder joints during individual trials of task performance were quantified by parametric correlation analyses on a subset of neurons studied during the reach-to-grasp and free-reach tasks.
The results indicate that MCP extensions were consistently preceded by bursts of RNm discharge, and strong correlations were observed between parameters of discharge and the duration, velocity, and amplitude of corresponding MCP extensions. In contrast, relations between discharge and movements of proximal joints were poorly represented, and RNm discharge was not related to the speed of limb transport. Based on our data and those of others, we hypothesize that cerebellar output via RNm is specialized for controlling hand use and conclude that RNm may contribute to the control of hand preshaping during reaching to grasp by activating muscle synergies that produce the appropriate MCP extension at the appropriate phase of limb transport.

13.
To investigate the nature of the visuomotor transformation, previous studies have used pointing tasks and examined how adaptation to a spatially localized mismatch between vision and proprioception generalizes across the workspace. Whereas some studies found extensive spatial generalization of single-point remapping, consistent with the hypothesis of a global realignment of visual and proprioceptive spaces, other studies reported limited transfer associated with variations in initial limb posture. Here, we investigated the effects of spatially localized remapping in the context of a visuomanual tracking task. Subjects tracked a visual target tracing a simple two-dimensional geometrical form without visual feedback except at a single point, where the visual display of the hand was shifted relative to its actual position. After adaptation, hand paths exhibited distortions relative to the visual templates that were inconsistent with the idea of a global realignment of visual and proprioceptive spaces. Results of a visuoproprioceptive matching task showed that these distortions were not limited to active movements but also affected perception of passive limb movements.

14.
When reaches are performed toward target objects, the presence of other non-target objects influences kinematic parameters of the reach. A typical observation has been that non-targets positioned ipsilaterally to the acting limb interfere more with the trajectory of the hand than contralateral non-targets. Here, we investigate whether this effect is mediated by motor lateralization or by the relative positioning of the objects with reference to the acting limb. Participants were asked to perform reaches toward physical target objects with their preferred or non-preferred hands while physical non-targets were present in different possible positions in the workspace. We tested both left-handers and right-handers. Our results show that a participant’s handedness does not influence reaching behavior in an obstacle avoidance paradigm. Furthermore, no statistically significant differences between the use of the preferred and non-preferred hand were observed on the kinematic parameters of the reaches. We found evidence that non-targets positioned on the outside of the reaching limb influenced the reaching behavior more strongly than non-targets on the inside. Moreover, the type of movement also appeared to play a role, as reaches that crossed the workspace had a stronger effect on avoidance behavior than reaches that were ‘uncrossed.’ We interpret these results as support for the hypothesis that the avoidance response is determined by keeping a preferred distance between the acting limb, in all stages of its reach toward the target, and the non-target position. This process is not biased by hand dominance or the hand preference of the actor.

15.
Human arm movements towards visual targets are remarkably reproducible in several tasks and conditions. Various authors have reported that trajectories of unconstrained point-to-point movements are slightly curved, smooth and have bell-shaped velocity profiles. The hand paths of such movements show small, but significant, curvatures throughout the workspace. The cause of these curvatures is still obscure. Traditionally this curvature is explained as the result of an optimisation process or is ascribed to mechanical or dynamic properties of the effector system. Recently, however, it has been suggested that these curvatures are due, at least partly, to the visual misperception of straight lines. To evaluate the latter hypothesis, we compared unconstrained, self-paced point-to-point movements that subjects made with their right and left hand. We assume that the visual misperception may depend on the position in the workspace, subject, etc. but not on the hand used to make the movement. Therefore we argue that if curvature is caused by a visual misperception of straight lines, curvatures should be the same for movements made with the left and right hand. Our experiments cast strong doubt on the hypothesis that curvatures are the result of a visual distortion, because curvatures of the left hand trajectories, mirrored in the mid-sagittal plane, are found to be accurately described by trajectories of the right hand. Estimates of the effect of visual distortion on movement curvature show that, if present, this effect is very small compared with other sources that contribute to movement curvature. We found that curvatures depend strongly on the subject and on the direction and distance of the movement. Curvatures do not seem to be caused purely by the dynamic properties of the arm, since curvatures do not change significantly with increasing movement velocity. Therefore, we conclude that curvatures reflect an inherent property of the control of multi-joint arm movements.
Received: 29 October 1996 / Accepted: 1 October 1997
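The mirrored-trajectory comparison described in the abstract above can be sketched numerically. This is an illustration, not the study's analysis: the sample paths are invented, and the curvature index used (maximum perpendicular deviation from the start-end chord, divided by chord length) is an assumption that may differ from the paper's measure.

```python
def curvature_index(path):
    """Max perpendicular deviation from the start-end chord / chord length."""
    (x0, y0), (xn, yn) = path[0], path[-1]
    dx, dy = xn - x0, yn - y0
    length = (dx * dx + dy * dy) ** 0.5
    # perpendicular distance of each point from the chord
    devs = [abs(dy * (x - x0) - dx * (y - y0)) / length for x, y in path]
    return max(devs) / length

def mirror_midsagittal(path):
    """Reflect x -> -x (mid-sagittal plane at x = 0)."""
    return [(-x, y) for x, y in path]

left = [(-0.0, 0.0), (-0.12, 0.25), (-0.20, 0.5)]   # slightly curved left-hand reach
right = [(0.0, 0.0), (0.12, 0.25), (0.20, 0.5)]     # its mirror image

# Reflection preserves distances, so mirroring a left-hand trajectory into
# the right workspace leaves its curvature index unchanged, which is the
# comparison underlying the paper's left/right-hand test.
assert curvature_index(mirror_midsagittal(left)) == curvature_index(right)
```

If visual misperception of straight lines caused the curvature, left- and right-hand paths to the same target would share one curvature; the mirror-symmetry found instead points to the effector system.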

16.
Goal-directed movements are executed under the permanent supervision of the central nervous system, which continuously processes sensory afferents and triggers on-line corrections if movement accuracy seems to be compromised. For arm reaching movements, visual information about the hand plays an important role in this supervision, notably improving reaching accuracy. Here, we tested whether visual feedback of the hand affects the latency of on-line responses to an external perturbation when reaching for a visual target. Two types of perturbation were used: visual perturbation consisted in changing the spatial location of the target and kinesthetic perturbation in applying a force step to the reaching arm. For both types of perturbation, the hand trajectory and the electromyographic (EMG) activity of shoulder muscles were analysed to assess whether visual feedback of the hand speeds up on-line corrections. Without visual feedback of the hand, on-line responses to visual perturbation exhibited the longest latency. This latency was reduced by about 10% when visual feedback of the hand was provided. On the other hand, the latency of on-line responses to kinesthetic perturbation was independent of the availability of visual feedback of the hand. In a control experiment, we tested the effect of visual feedback of the hand on visual and kinesthetic two-choice reaction times – for which coordinate transformation is not critical. Two-choice reaction times were never facilitated by visual feedback of the hand. Taken together, our results suggest that visual feedback of the hand speeds up on-line corrections when the position of the visual target with respect to the body must be re-computed during movement execution. This facilitation probably results from the possibility of mapping hand- and target-related information in a common visual reference frame.

17.
Summary We studied the reaction times and initial directions of hand movements and saccades of human subjects who fixated and pointed as quickly as possible at eccentric targets which were presented unexpectedly. The targets were positioned on a horizontal bar which was placed in front of the subject. Different stimulus conditions were used in the experiments. Knowledge of the target position or the presence of an auditory co-stimulus slightly affected the reaction times of saccades in response to visual stimuli. Auditory co-stimuli reduced the reaction times considerably when the targets were presented after a delay of 200 ms after extinction of the central fixation point. Similar reductions were observed in the reaction times of the hand movements. However, these reductions were seen in hand responses to undelayed as well as delayed target presentations. The saccades were always made in the correct direction when the target was presented without delay. When the target was delayed about 50% of the saccades were made in the wrong direction. Even for undelayed targets the hand sometimes made mistakes. The number of mistakes increased to 35% when the target presentation was accompanied by the sound pulse. For delayed targets the proportion of wrong hand movements was about 50%. For such targets saccades and hand movements were practically always made in the same direction. If visual information is available, saccades and hand movements are generated independently of each other. However, if visual information is not present at the appropriate time and the target position has to be guessed, saccades and hand movements are generated on the basis of shared information. We suggest that saccades can be generated by two different mechanisms. One mechanism uses only visual information while the other one uses visual as well as cognitive information. 
The first mechanism is exclusively used for the generation of saccades, while the second one has a more general purpose and is used for the generation of saccades as well as hand movements.

18.
Eye–hand coordination is a crucial element of goal-directed movements. However, few studies have looked at the extent to which unconstrained movements of the eyes and hand made to targets influence each other. We studied human participants who moved either their eyes or both their eyes and hand to one of three static or flashed targets presented in 3D space. The eyes were directed to, and the hand was located at, a common start position on either the right or left side of the body. We found that the velocity and scatter of memory-guided saccades (flashed targets) differed significantly when produced in combination with a reaching movement compared to when produced alone. Specifically, when accompanied by a reach, peak saccadic velocities were lower than when the eye moved alone. Peak saccade velocities, as well as latencies, were also highly correlated with those for reaching movements, especially for the briefly flashed targets compared to the continuously visible target. The scatter of saccade endpoints was greater when the saccades were produced with the reaching movement than when produced without, and the size of the scatter for both saccades and reaches was weakly correlated. These findings suggest that the saccades and reaches made to 3D targets are weakly to moderately coupled both temporally and spatially and that this is partly the result of the arm movement influencing the eye movement. Taken together, this study provides further evidence that the oculomotor and arm motor systems interact above and beyond any common target representations shared by the two motor systems.

19.
The parietal mechanisms of eye-hand coordination during reaching were studied by recording neural activity in area PEc while monkeys performed different tasks, aimed at assessing the influence of retinal, hand-, and eye-related signals on neural activity. The tasks used consisted of 1) reaching to foveated and 2) to extra-foveal targets, with constant eye position; and 3) saccadic eye movement toward, and holding of eye position on, peripheral targets, the same as those of the reaching tasks. In all tasks, hand and/or eye movements were made from a central position to eight peripheral targets. A conventional visual fixation paradigm was used as a control task, to assess the location and extent of the visual receptive field of neurons. A large proportion of cells in area PEc displayed significant relationships to hand movement direction and position. Many of them were also related to the eye's position. Relationships to saccadic eye movements were found for a smaller proportion of cells. Most neurons were tuned to different combinations of hand- and eye-related signals; some of them were also influenced by visual information. This combination of signals can be an expression of the early stages of the composition of motor commands for different forms of visuomotor coordination that depend on the integration of hand- and eye-related information. These results assign to area PEc, classically considered a somatosensory association cortex, a new visuomotor role.

20.
The exact role of posterior parietal cortex (PPC) in visually directed reaching is unknown. We propose that, by building an internal representation of instantaneous hand location, PPC computes a dynamic motor error used by motor centers to correct the ongoing trajectory. With unseen right hands, five subjects pointed to visual targets that either remained stationary or moved during saccadic eye movements. Transcranial magnetic stimulation (TMS) was applied over the left PPC during target presentation. Stimulation disrupted path corrections that normally occur in response to target jumps, but had no effect on those directed at stationary targets. Furthermore, left-hand movement corrections were not blocked, ruling out visual or oculomotor effects of stimulation.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)