Similar Articles
 20 similar articles found (search time: 401 ms)
1.
We examined the role of gaze in a task where subjects had to reproduce the position of a remembered visual target with the tip of the index finger, referred to as pointing. Subjects were tested in 3 visual feedback conditions: complete darkness (dark), complete darkness with visual feedback of the finger position (finger), and with vision of a well-defined environment and feedback of the finger position (frame). Pointing accuracy increases with feedback about the finger or visual environment. In the finger and frame conditions, the 95% confidence regions of the variable errors have an ellipsoidal distribution with the main axis oriented toward the subjects' head. During the 1-s period when the target is visible, gaze is almost on target. However, gaze drifts away from the target relative to the subject in the delay period after target disappearance. In the finger and frame conditions, gaze returns toward the remembered target during pointing. In all 3 feedback conditions, the correlations between the variable errors of gaze and pointing position increase during the delay period, reaching highly significant values at the time of pointing. Our results demonstrate that gaze affects the accuracy of pointing. We conclude that the covariance between gaze and pointing position reflects a common drive for gaze and arm movements and an effect of gaze on pointing accuracy at the time of pointing. Previous studies interpreted the orientation of variable errors as indicative of a frame of reference used for pointing. Our results suggest that the orientation of the error ellipses toward the head is at least partly the result of gaze drift in the delay period.
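The 95% confidence ellipsoids of variable error described in this abstract can be estimated from pointing endpoints via an eigendecomposition of their covariance matrix. A minimal sketch on synthetic data (the sample size, variances, and alignment of the long axis with x are assumptions for illustration, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-D pointing endpoints with the largest scatter along x,
# standing in for the head-target axis (assumed, illustrative values).
endpoints = rng.normal(0.0, [12.0, 4.0, 3.0], size=(500, 3))

# Variable error: dispersion of endpoints about their mean.
centered = endpoints - endpoints.mean(axis=0)
cov = np.cov(centered, rowvar=False)

# Principal axes of the error ellipsoid, sorted by decreasing variance.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Semi-axis lengths of the 95% confidence ellipsoid: each eigenvalue
# scaled by the chi-square 0.95 quantile for 3 degrees of freedom.
CHI2_95_3DOF = 7.815  # scipy.stats.chi2.ppf(0.95, df=3)
semi_axes = np.sqrt(CHI2_95_3DOF * eigvals)

main_axis = eigvecs[:, 0]  # direction of the largest variable error
```

The orientation of `main_axis` relative to the head-target line is what the abstract uses to argue for a gaze-drift contribution to the error distribution.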

2.
Accurate information about gaze direction is required to direct the hand towards visual objects in the environment. In the present experiments, we tested whether retinal inputs affect the accuracy with which healthy subjects indicate their gaze direction with the unseen index finger after voluntary saccadic eye movements. In experiment 1, subjects produced a series of back and forth saccades (about eight) of self-selected magnitudes before positioning the eyes in a self-chosen direction to the right. The saccades were produced while facing one of four possible visual scenes: (1) complete darkness, (2) a scene composed of a single light-emitting diode (LED) located at 18 degrees to the right, (3) a visually enriched scene made up of three LEDs located at 0 degrees, 18 degrees and 36 degrees to the right, or (4) a normally illuminated scene where the lights in the experimental room were turned on. Subjects were then asked to indicate their gaze direction with their unseen index finger. In the conditions where the visual scenes were composed of LEDs, subjects were instructed to foveate or not foveate one of the LEDs with their last saccade. It was therefore possible to compare subjects' accuracy when pointing in the direction of their gaze in conditions with and without foveal stimulation. The results showed that the accuracy of the pointing movements decreased when subjects produced their saccades in a dark environment or in the presence of a single LED compared to when the saccades were generated in richer visual environments. Visual stimulation of the fovea did not increase subjects' accuracy when pointing in the direction of their gaze compared to conditions where there was only stimulation of the peripheral retina. Experiment 2 tested how the retinal signals could contribute to the coding of eye position after saccadic eye movements. 
More specifically, we tested whether the shift in the retinal image of the environment during the saccades provided information about the reached position of the eyes. Subjects produced their series of saccades while facing a visual environment made up of three LEDs. In some trials, the whole visual scene was displaced either 4.5 degrees to the left or 3 degrees to the right during the primary saccade. These displacements created mismatches between the shift of the retinal image of the environment and the extent of gaze deviation. The displacements of the visual scene were not perceived by the subjects because they occurred near the peak velocity of the saccade (saccadic suppression phenomenon). Pointing accuracy was not affected by the unperceived shifts of the visual scene. The results of these experiments suggest that the arm motor system receives more precise information about gaze direction when there is retinal stimulation than when there is none. They also suggest that the most relevant factor in defining gaze direction is not the retinal locus of the visual stimulation (that is, peripheral or foveal) but rather the amount of visual information. Finally, the results suggest an enhanced egocentric encoding of gaze direction by the retinal inputs and do not support a retinotopic model for encoding gaze direction.

3.
Hay L, Redon C. Neuroscience Letters, 2006, 408(3): 194-198
Pointing movements decrease in accuracy when target information is removed before movement onset. This time effect was analyzed in relation to the spatial representation of the target location, which can be egocentric (i.e. in relation to the body) or exocentric (i.e. in relation to the external world) depending on the visual environment of the target. The accuracy of pointing movements performed without visual feedback was measured in two delay conditions: 0 and 5-s delay between target removal and movement onset. In each delay condition, targets were presented either in the darkness (egocentric localization) or within a structured visual background (exocentric localization). The results show that pointing was more accurate when targets were presented within a visual background than in the darkness. The time-related decrease in accuracy was observed in the darkness condition, whereas no delay effect was found in the presence of a visual background. Therefore, contextual factors applied to a simple pointing action might induce different spatial representations: a short-lived sensorimotor egocentric representation used in immediate action control, or a long-lived perceptual exocentric representation which drives perception and delayed action.

4.
The purposes of this study were to determine whether gaze direction provides a control signal for movement direction in a pointing task requiring a step, and to gain insight into why previously reported discrepancies in endpoint accuracy with eccentrically directed gaze exist in the literature. Straight-arm pointing movements were performed to real and remembered target locations, either toward or 30° eccentric to gaze direction. Pointing occurred in normal room lighting or darkness while subjects sat, stood still or side-stepped left or right. Trunk rotation contributed 22–65% to gaze orientations when it was not constrained. Error differences for different target locations explained discrepancies among previous experiments. Variable pointing errors were influenced by gaze direction, while mean systematic pointing errors and trunk orientations were influenced by step direction. These data support the use of a control strategy that relies on gaze direction and equilibrium inputs for whole-body goal-directed movements.

5.
People naturally direct their gaze to visible hand movement goals. Doing so improves reach accuracy through use of signals related to gaze position and visual feedback of the hand. Here, we studied where people naturally look when acting on remembered target locations. Four targets were presented on a screen, in peripheral vision, while participants fixated a central cross (encoding phase). Four seconds later, participants used a pen to mark the remembered locations while free to look wherever they wished (recall phase). Visual references, including the screen and the cross, were present throughout. During recall, participants neither looked at the marked locations nor prevented eye movements. Instead, gaze behavior was erratic and comprised gaze shifts loosely coupled in time and space with hand movements. To examine whether eye and hand movements during encoding affected gaze behavior during recall, in additional encoding conditions, participants marked the visible targets with either free gaze or with central cross fixation or just looked at the targets. All encoding conditions yielded similar erratic gaze behavior during recall. Furthermore, encoding mode did not influence recall performance, suggesting that participants, during recall, did not exploit sensorimotor memories related to hand and gaze movements during encoding. Finally, we recorded a similarly loose coupling between hand and eye movements during an object manipulation task performed in darkness after participants had viewed the task environment. We conclude that acting on remembered versus visible targets can engage fundamentally different control strategies, with gaze largely decoupled from movement goals during memory-guided actions.

6.
This study investigated whether the execution of an accurate pointing response depends on a prior saccade orientation towards the target, independent of the vision of the limb. A comparison was made between the accuracy of sequential responses (in which the starting position of the hand is known and the eye centred on the target prior to the onset of the hand pointing movement) and synergetic responses (where both hand and gaze motions are simultaneously initiated on the basis of unique peripheral retinal information). The experiments were conducted in visual closed-loop conditions (hand visible during the pointing movement) and in visual open-loop conditions (vision of hand interrupted as the hand started to move). The latter condition eliminated the possibility of a direct visual evaluation of the error between hand and target during pointing. Three main observations were derived from the present work: (a) the timing of coordinated eye-head-hand pointing at visual targets can be modified, depending on the executed task, without a deterioration in the accuracy of hand pointing; (b) mechanical constraints or instructions such as preventing eye, head or trunk motion, which limit the redundancy of degrees of freedom, lead to a decrease in accuracy; (c) the synergetic movement of eye, head and hand for pointing at a visible target is not trivially the superposition of eye and head shifts added to hand pointing. Indeed, the strategy of such a coordinated action can modify the kinematics of the head in order to make the movements of both head and hand terminate at approximately the same time. The main conclusion is that eye-head coordination is carried out optimally by parallel processing in which both gaze and hand motor responses are initiated on the basis of a poorly defined retinal signal. The accuracy in hand pointing is not conditioned by head movement per se and does not depend on the relative timing of eye, head and hand movements (synergetic vs sequential responses).
However, a decrease in the accuracy of hand pointing was observed in the synergetic condition, when target fixation was not stabilised before the target was extinguished. This suggests that when the orienting saccade reaches the target before hand movement onset, visual updating of the hand motor control signal may occur. A rapid processing of this final input allows a sharper redefinition of the hand landing point.

7.
Visual information is vital for fast and accurate hand movements. It has been demonstrated that allowing free eye movements results in greater accuracy than when the eyes remain centrally fixated. Three explanations as to why free gaze improves accuracy are: shifting gaze to a target allows visual feedback in guiding the hand to the target (feedback loop), shifting gaze generates ocular-proprioception which can be used to update a movement (feedback–feedforward), or efference copy could be used to direct hand movements (feedforward). In this experiment we used a double-step task and manipulated the utility of ocular-proprioceptive feedback from eye to head position by removing the second target during the saccade. We confirm the advantage of free gaze for sequential movements with a double-step pointing task and document eye–hand lead times of approximately 200 ms for both initial movements and secondary movements. The observation that participants move gaze well ahead of the current hand target dismisses foveal feedback as a major contribution. We argue for a feedforward model based on eye movement efference as the major factor in enabling accurate hand movements. The results with the double-step target task also suggest the need for some buffering of efference and ocular-proprioceptive signals to cope with the situation where the eye has moved to a location ahead of the current target for the hand movement. We estimate that this buffer period may range between 120 and 200 ms without significant impact on hand movement accuracy.

8.
This study examined two-segment pointing movements with various accuracy constraints to test whether there is segment interdependency in saccadic eye movements that accompany manual actions. The other purpose was to examine how planning of movement accuracy and amplitude for the second pointing influences the timing of gaze shift to the second target at the transition between two segments. Participants performed a rapid two-segment pointing task, in which the first segment had two target sizes, and the second segment had two target sizes and two movement distances. The results showed that duration and peak velocity of the initial pointing were influenced by altered kinematic characteristics of the second pointing due to task manipulations of the second segment, revealing segment interdependency in hand movements. In contrast, saccade duration and velocity did not show such segment interdependency. Thus, unlike hand movements, saccades are planned and organized independently for each segment during sequential manual actions. In terms of the timing of gaze shift to the second target, this was delayed when the initial pointing was made to the smaller first target, indicating that gaze anchoring to the initial target is used to verify the pointing termination. Importantly, the gaze shift was delayed when the second pointing was made to the smaller or farther second target. This suggests that visual information of the hand position at the initial target is important for the planning of movement distance and accuracy of the next pointing. Furthermore, timings of gaze shift and pointing initiation to the second target were highly correlated. Thus, at the transition between two segments, gaze and hand movements are highly coupled in time, which allows the sensorimotor system to process visual and proprioceptive information for the verification of pointing termination and planning of the next pointing.

9.
In human subjects, we investigated the accuracy of goal-directed arm movements performed without sight of the arm; errors of target localization and of motor control thus remained uncorrected by visual feedback, and became manifest as pointing errors. Target position was provided either as retinal eccentricity or as eye position. By comparing the results to those obtained previously with combined retinal plus extraretinal position cues, the relative contribution of the two signals towards visual localization could be studied. When target position was provided by retinal signals, pointing responses revealed an over-estimation of retinal eccentricity which was of similar size for all eccentricities tested, and was independent of gaze direction. These findings were interpreted as a magnification effect of perifoveal retinal areas. When target position was provided as eye position, pointing was characterized by a substantial inter- and intra-subject variability, suggesting that the accuracy of localization by extraretinal signals is rather limited. In light of these two qualitatively different deficits, possible mechanisms are discussed for how the two signals may interact towards a more veridical visual localization.

10.
Encoding of visual target location in extrapersonal space requires convergence of at least three types of information: retinal signals, information about orbital eye positions, and the position of the head on the body. Since the position of gaze is the sum of the head position and the eye position, inaccuracy of spatial localization of the target may result from the sum of the corresponding three levels of errors: retinal, ocular and head. In order to evaluate the possible errors evoked at each level, accuracy of target encoding was assessed through a motor response requiring subjects to point with the hand towards a target seen under foveal vision, eliminating the retinal source of error. Subjects had first to orient their head to one of three positions to the right (0, 40, 80°) and maintain this head position while orienting gaze and pointing to one of five target positions (0, 20, 40, 60, 80°). This resulted in 11 combinations of static head and eye positions, and corresponded to five different gaze eccentricities. The accuracy of target pointing was tested without vision of the moving hand. Six subjects were tested. No systematic bias in finger pointing was observed for eye positions ranging from 0 to 40° to the right or left within the orbit. However, the variability (as measured by a surface error) given by the scatter of hand pointing increased quadratically with eye eccentricity. A similar observation was made with the eye centred and the head position ranging from 0 to 80°, although the surface error increased less steeply with eccentricity. Some interaction between eye and head eccentricity also contributed to the pointing error. These results suggest that pointing should be most accurate with a head displacement corresponding to 90% of the gaze eccentricity.
These results explain the systematic hypometry of head orienting towards targets observed under natural conditions: thus the respective contribution of head and eye to gaze orientation might be determined in order to optimize accuracy of target encoding.
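The quadratic growth of pointing scatter ("surface error") with eye eccentricity reported above can be captured with an ordinary least-squares quadratic fit. A sketch with made-up illustrative numbers (the eccentricities match the abstract's target positions, but the error values are invented, not the study's data):

```python
import numpy as np

# Gaze eccentricities tested (deg) and hypothetical surface errors (cm^2),
# fabricated here only to illustrate the reported quadratic trend.
eccentricity = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
surface_error = np.array([2.1, 3.0, 5.9, 10.8, 17.5])

# Least-squares fit: surface_error ~ a*ecc^2 + b*ecc + c
a, b, c = np.polyfit(eccentricity, surface_error, deg=2)
predicted = np.polyval([a, b, c], eccentricity)
```

A positive leading coefficient `a` is the signature of the quadratic increase; fitting the same model to eye-only versus head-only eccentricity data would let one compare how steeply each factor inflates the scatter.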

11.
The aim of this investigation was to gain further insight into control strategies used for whole body reaching tasks. Subjects were requested to step and reach to remembered target locations in normal room lighting (LIGHT) and complete darkness (DARK) with their gaze directed toward or eccentric to the remembered target location. Targets were located centrally at three different heights. Eccentric anchors for gaze direction were located at target height and initial target distance, either 30° to the right or 20° to the left of target location. Control trials, where targets remained in place, and remembered target trials were randomly presented. We recorded movements of the hand, eye and head, while subjects stepped and reached to real or remembered target locations. Lateral, vertical and anterior–posterior (AP) hand errors and eye location, and gaze direction deviations were determined relative to control trials. Final hand location errors varied by target height, lighting condition and gaze eccentricity. Lower reaches in the DARK compared to the LIGHT condition were common, and when matched with a tendency to reach above the low target, help explain more accurate reaches for this target in darkness. Anchoring the gaze eccentrically reduced hand errors in the AP direction and increased errors in the lateral direction. These results could be explained by deviations in eye locations and gaze directions, which were deemed significant predictors of final reach errors, accounting for 17–47% of final hand error variance. Results also confirmed a link between gaze deviations and hand and head displacements, suggesting that gaze direction is used as a common input for movement of the hand and body. Additional links between constant and variable eye deviations and hand errors were common for the AP direction but not for lateral or vertical directions.
When combined with data regarding hand error predictions, we found that subjects' alterations in body movement in the AP direction were associated with AP adjustments in their reach, but final hand position adjustments were associated with gaze direction alterations for movements in the vertical and horizontal directions. These results support the hypothesis that gaze direction provides a control signal for hand and body movement and that this control signal is used for movement direction and not amplitude.

12.
Errors in pointing to actual and remembered targets presented in three-dimensional (3D) space in a dark room were studied under various conditions of visual feedback. During their movements, subjects either had no vision of their arms or of the target, vision of the target but not of their arms, vision of a light-emitting diode (LED) on their moving index fingertip but not of the target, or vision of an LED on their moving index fingertip and of the target. Errors depended critically upon feedback condition. 3D errors were largest for movements to remembered targets without visual feedback, diminished with vision of the moving fingertip, and diminished further with vision of the target and vision of the finger and the target. Moreover, the different conditions differentially influenced the radial distance, azimuth, and elevation errors, indicating that subjects control motion along all three axes relatively independently. The pattern of errors suggests that the neural systems that mediate processing of actual versus remembered targets may have different capacities for integrating visual and proprioceptive information in order to program spatially directed arm movements.
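Decomposing a 3D pointing error into radial-distance, azimuth, and elevation components, as in the abstract above, amounts to comparing target and endpoint in spherical coordinates. A minimal sketch (the coordinate convention, with the origin at the observer and x forward, is an assumption; the study's own frame is not specified here):

```python
import numpy as np

def spherical_error(target_xyz, endpoint_xyz):
    """Return (radial, azimuth, elevation) error components.

    Distances are in the input units; angles are in degrees.
    Assumes an egocentric frame: origin at the observer,
    x forward, y left, z up (illustrative convention).
    """
    def to_spherical(v):
        x, y, z = v
        r = np.sqrt(x * x + y * y + z * z)
        azimuth = np.degrees(np.arctan2(y, x))
        elevation = np.degrees(np.arcsin(z / r))
        return r, azimuth, elevation

    rt, at, et = to_spherical(np.asarray(target_xyz, dtype=float))
    re, ae, ee = to_spherical(np.asarray(endpoint_xyz, dtype=float))
    return re - rt, ae - at, ee - et
```

Analyzing the three components separately, as the study did, is what allows the claim that feedback conditions influence the distance, azimuth, and elevation errors differentially.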

13.
14.
A well-coordinated pattern of eye and hand movements can be observed during goal-directed arm movements. Typically, a saccadic eye movement precedes the arm movement, and its occurrence is temporally correlated with the start of the arm movement. Furthermore, the coupling of gaze and aiming movements is also observable after pointing initiation. It has recently been observed that saccades cannot be directed to new target stimuli, away from a pointing target stimulus. Saccades directed to targets presented during the final phase of a pointing movement were delayed until after pointing movement offset ("gaze anchoring"). The present study investigated whether ocular gaze is anchored to a pointing target during the entire pointing movement. In experiment 1, new targets were presented at various times during the duration of a pointing movement, triggered by kinematic events of the arm movement itself (movement onset, peak acceleration/velocity/deceleration, and offset). Subjects had to make a saccade to the new target as fast as possible while maintaining the pointing movement to the initial target. Saccadic latencies were increased by an amount of time that approximately equaled the remaining pointing time after saccadic target presentation, with the majority of saccades executed after pointing movement offset. The nature of the signal driving gaze stabilization during pointing was investigated in experiment 2. In previous experiments where ocular gaze was anchored to a pointing target, subjects could always see their moving arm, thus it was unknown whether a visual image of the moving arm, an afferent (proprioceptive) signal or an efferent (motor control related) signal produced gaze anchoring. In experiment 2 subjects had to point with or without vision of the moving arm to test whether a visual signal is used to anchor gaze to a pointing target. Results indicate that gaze anchoring was also observed without vision of the moving arm.
The findings support the existence of a mechanism enforcing ocular gaze anchoring during the entire duration of a pointing movement. Moreover, such a mechanism uses an internally generated, or proprioceptive, nonvisual signal. Possible neural substrates underlying these processes are discussed, as well as the role of selective attention.

15.
We previously reported that Parkinson's disease patients could point with their eyes closed as accurately as normal subjects to targets in three-dimensional space that were initially presented with full vision. We have now further restricted visual information in order to more closely examine the individual and combined influences of visual information, proprioceptive feedback, and spatial working memory on the accuracy of Parkinson's disease patients. All trials were performed in the dark. A robot arm presented a target illuminated by a light-emitting diode at one of five randomly selected points composing a pyramidal array. Subjects attempted to "touch" the target location with their right finger in one smooth movement in three conditions: (1) dark, with no illumination of arm or target during the movement, so that movement was to the remembered target location after the robot arm retracted; (2) finger, in which a light-emitting diode on the pointing fingertip was visible during the movement but the target was extinguished, so that again movement was to the remembered target location; and (3) target, in which the target light-emitting diode remained in place and visible throughout the trial but there was no vision of the arm. In the finger condition, there is no need to use visual-proprioceptive integration, since the continuously visualized fingertip position can be compared to the remembered location of the visual target. In the target condition, the subject must integrate the current visible target with arm proprioception, while in the dark condition, the subject must integrate current proprioception from the arm with the remembered visual target. Parkinson's disease patients were significantly less accurate than controls in both the dark and target conditions, but as accurate as controls in the finger condition.
Parkinson's disease patients, therefore, were selectively impaired in those conditions (target and dark) which required integration of visual and proprioceptive information in order to achieve accurate movements. In contrast, the patients' normal accuracy in the finger condition indicates that they had no substantial deficits in their relevant spatial working memory. Final arm configurations were significantly different in the two subject groups in all three conditions, even in the finger condition where mean movement endpoints were not significantly different. Variability of the movement endpoints was uniformly increased in Parkinson's disease patients across all three conditions. The current study supports an important role for the basal ganglia in the integration of proprioceptive signals with concurrent or remembered visual information that is needed to guide movements. This role can explain much of the patients' dependence on visual information for accuracy in targeted movements. It also underlines what may be an essential contribution of the basal ganglia to movement: the integration of afferent information that is initially processed through multiple, discrete modality-specific pathways, but which must be combined into a unified and continuously updated spatial model for effective, accurate movement.

16.
Predictive remapping of visual features precedes saccadic eye movements
The frequent occurrence of saccadic eye movements raises the question of how information is combined across separate glances into a stable, continuous percept. Here I show that visual form processing is altered at both the current fixation position and the location of the saccadic target before the saccade. When human observers prepared to follow a displacement of the stimulus with the eyes, visual form adaptation was transferred from current fixation to the future gaze position. This transfer of adaptation also influenced the perception of test stimuli shown at an intermediate position between fixation and saccadic target. Additionally, I found a presaccadic transfer of adaptation when observers prepared to move their eyes toward a stationary adapting stimulus in peripheral vision. The remapping of visual processing, demonstrated here with form adaptation, may help to explain our impression of a smooth transition, with no temporal delay, of visual perception across glances.

17.
Eye-hand coordination requires the brain to integrate visual information with the continuous changes in eye, head, and arm positions. This is a geometrically complex process because the eyes, head, and shoulder have different centers of rotation. As a result, head rotation causes the eye to translate with respect to the shoulder. The present study examines the consequences of this geometry for planning accurate arm movements in a pointing task with the head at different orientations. When asked to point at an object, subjects oriented their arm to position the fingertip on the line running from the target to the viewing eye. But this eye-target line shifts when the eyes translate with each new head orientation, thereby requiring a new arm pointing direction. We confirmed that subjects do realign their fingertip with the eye-target line during closed-loop pointing across various horizontal head orientations when gaze is on target. More importantly, subjects also showed this head-position-dependent pattern of pointing responses for the same paradigm performed in complete darkness. However, when gaze was not on target, compensation for these translations in the rotational centers partially broke down. As a result, subjects tended to overshoot the target direction relative to current gaze; perhaps explaining previously reported errors in aiming the arm to retinally peripheral targets. These results suggest that knowledge of head position signals and the resulting relative displacements in the centers of rotation of the eye and shoulder are incorporated using open-loop mechanisms for eye-hand coordination, but these translations are best calibrated for foveated, gaze-on-target movements.

18.
We investigated whether and how adaptive changes in saccadic amplitudes (short-term saccadic adaptation) modify hand movements when subjects are involved in a pointing task to visual targets without vision of the hand. An experiment consisted of the pre-adaptation test of hand pointing (placing the fingertip on a LED position), a period of adaptation, and a post-adaptation test of hand pointing. In a basic task (transfer paradigm A), the pre- and post-adaptation trials were performed without accompanying eye and head movements; in the double-step gaze adaptation task, subjects had to fixate a single, suddenly displaced visual target by moving eyes and head in a natural way. Two experimental sessions were run with the visual target jumping during the saccades, either backwards (from 30 to 20°, gaze saccade shortening) or onwards (30 to 40°, gaze saccade lengthening). Following gaze-shortening adaptation (level of adaptation 79±10%, mean and s.d.), we found a statistically significant shift (t-test, error level P<0.05) in the final hand-movement points, possibly due to adaptation transfer, representing 15.2% of the respective gaze adaptation. After gaze-lengthening adaptation (level of adaptation 92±17%), a non-significant shift occurred in the opposite direction to that expected from adaptation transfer. The applied computations were also performed on some data of an earlier transfer paradigm (B, three target displacements at a time) with gain shortening. They revealed a significant transfer relative to the amount of adaptation of 18.5±17.5% (P<0.05). In the coupling paradigm (C), we studied the influence of gaze saccade adaptation on hand-pointing movements with concomitant orienting gaze shifts. The adaptation levels achieved were 59±20% (shortening) and 61±27% (lengthening).
Shifts in the final fingertip positions were congruent with internal coupling between gaze and hand, representing 53% of the respective gaze-amplitude changes in the shortening session and 6% in the lengthening session. With an adaptation transfer of less than 20% (paradigm A and B), we concluded that saccadic adaptation does not ”automatically” produce a functionally meaningful change in the skeleto-motor system controlling hand-pointing movements. In tasks with concomitant gaze saccades (coupling paradigm C), the modification of hand pointing by the adapted gaze comes out more clearly, but only in the shortening session. Received: 9 February 1998 / Accepted: 18 August 1998  相似文献   

19.
The present study examined the effect of timing constraints and advance knowledge on eye–hand coordination strategy in a sequential pointing task. Participants were required to point at two successively appearing targets on a screen while the inter-stimulus interval (ISI) and the trial order were manipulated, such that timing constraints were high (ISI = 300 ms) or low (ISI = 450 ms) and advance knowledge of the target location was present (fixed order) or absent (random order). Analysis of eye and finger onset and completion times per segment of the sequence indicated that oculo-manual behaviour was in general characterized by eye movements preceding the finger, as well as by 'gaze anchoring' (i.e. eye fixation of the first target until completion of the finger movement towards that target). Advance knowledge of future target locations led to shorter latency times of eye and hand and smaller eye–hand lead times, which in combination resulted in shorter total movement times. There was, however, no effect of advance knowledge on the duration of gaze anchoring. In contrast, gaze anchoring did change as a function of the interval between successive stimuli, and was shorter with a 300 ms ISI than with a 450 ms ISI. Further correlation analysis provided some indication that shorter residual latency is associated with shorter pointing duration, without affecting accuracy. These results are consistent with a neural mechanism governing the coupling of eye and arm movements, which has been suggested to reside in the superior colliculus. The temporal coordination resulting from this coupling is a function of the time pressure on the visuo-manual system resulting from the appearance of external stimuli.

20.
Ocular gaze is anchored to the target of an ongoing pointing movement
It is well known that saccadic eye movements typically precede goal-directed hand movements to a visual target stimulus, and that pointing is generally more accurate when the pointing target is gazed at. In this study, we hypothesized that saccades not only precede pointing but that gaze is also stabilized during pointing in humans. Subjects, whose eye and pointing movements were recorded, had to make a hand movement and a saccade to a first target. At arm-movement peak velocity, when the eyes are usually already fixating the first target, a new target appeared and subjects had to make a saccade toward it (dynamic trial type). In the static trial type, the new target was presented only after pointing was completed. In a control experiment, a sequence of two saccades had to be made, with two different interstimulus intervals (ISIs) comparable to the ISIs found in the first experiment for the dynamic and static trial types. In a third experiment, ocular fixation position and pointing target were dissociated: subjects pointed at targets they did not fixate. The results showed that latencies of saccades toward the second target were on average 155 ms longer in the dynamic trial type than in the static trial type. Saccades evoked during pointing appeared to be delayed by approximately the remaining deceleration time of the pointing movement, resulting in "normal" residual saccadic reaction times (RTs) measured from pointing-movement offset to saccade onset. In the control experiment, the latency of the second saccade was on average only 29 ms longer when the two targets appeared with a short ISI compared with a long ISI; the saccadic refractory period therefore cannot account for the substantially larger delays found in the first experiment. The observed saccadic delay during pointing is modulated by the distance between ocular fixation position and pointing target: the largest delays occurred when the two coincided, the smallest when they were dissociated. In sum, our results provide evidence for an active saccadic inhibition process, presumably serving to keep ocular fixation steady at a pointing target and its surroundings. Possible neurophysiological substrates that might underlie these phenomena are discussed.
