Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
Mammals with foveas (or analogous retinal specializations) frequently shift gaze without moving the head, and their behavior contrasts sharply with that of afoveate mammals, in which eye and head movements are strongly coupled. The ability to move the eyes without moving the head could reflect a gating mechanism that blocks a default eye-head synergy when an attempted head movement would be energetically wasteful. Based upon such considerations of efficiency, we predicted that, for saccades to targets lying within the ocular motor range, the tendency to generate a head movement would depend upon a subject's expectations regarding future directions of gaze. We tested this hypothesis in two experiments with normal human subjects instructed to fixate sequences of lighted targets on a semicircular array. In the target direction experiment, we determined whether subjects were more likely to move the head during a small gaze shift if they expected that they would be momentarily required to make a second, larger shift in the same direction. Adding the onward-directed target significantly widened the distribution of final head positions (customary head orientation range, CHOR) observed during fixation of the primary target, from 16.6±4.9° to 25.2±7.8°. The difference reflected an increase in the probability, and possibly the amplitude, of head movements. In the target duration experiment, we determined whether head movements were potentiated when subjects expected that gaze would be held in the vicinity of the target for a longer period of time. Prolonging fixation significantly increased CHOR, from 53.7±18.8° to 63.2±15.9°. Larger head movements were evoked for any given target eccentricity, due to a narrowing of the gap between the x-intercepts of the head amplitude:target eccentricity relationship. The results are consistent with the idea that foveate mammals use knowledge of future gaze direction to influence the coupling of saccadic commands to premotor circuitry of the head. While the circuits ultimately mediating the coupling may lie within the brainstem, our results suggest that the cerebrum plays a supervisory role, since it is a likely seat of expectation regarding target behavior. Eye-head coupling may reflect separate gating and scaling mechanisms, and changes in head movement tendencies may reflect parametric modulation of either mechanism.
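The head amplitude:target eccentricity relationship mentioned in this abstract can be pictured as a piecewise-linear function whose x-intercept marks the smallest eccentricity that recruits a head movement. The sketch below is illustrative only: the threshold and slope values are hypothetical, not figures from the paper.

```python
def head_amplitude(target_eccentricity_deg, threshold_deg, slope=0.7):
    """Illustrative piecewise-linear head-amplitude model: the head stays
    still until target eccentricity exceeds a threshold (the x-intercept
    of the amplitude:eccentricity relationship), then head amplitude
    grows linearly with the eccentricity beyond that threshold."""
    overshoot = abs(target_eccentricity_deg) - threshold_deg
    if overshoot <= 0:
        return 0.0
    sign = 1.0 if target_eccentricity_deg >= 0 else -1.0
    return sign * slope * overshoot

# A narrowed gap between the x-intercepts (modeled here as a lower
# threshold) yields a larger head movement for the same 30° target:
print(head_amplitude(30, threshold_deg=20))   # e.g., brief fixation
print(head_amplitude(30, threshold_deg=10))   # e.g., prolonged fixation
```

Under this reading, prolonging fixation does not change the gain of the head movement so much as lower the eccentricity at which one is recruited at all.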

2.
Summary Saccade characteristics have been studied during coordinated eye-head movements in monkeys. Amplitude, duration, and peak velocity of saccades with head turning were compared with saccades executed while the head was artificially restrained. The results indicate that the saccade characteristics are modulated as a function of head movement, so that the gaze movement (eye + head) exactly matches saccades made with the head fixed. Saccade modulation is achieved by way of negative vestibulo-ocular feedback. The neck proprioceptors, because of their longer latency, are effective only if the head starts moving prior to the onset of the saccade. It is concluded that saccades made with head turning are not ballistic movements, because their trajectory is not entirely predetermined by a central command.

3.
Summary The accuracy of pointing movements of the hand, directed at visual targets 10° to 40° from the midline, was measured in normal human subjects. No visual feedback from the moving hand was available to the subjects. The head was either maintained stationary (head-fixed condition) or free to move (head-free condition) during the pointing movements. It was found that the error in pointing was reduced for all targets in the head-free condition. This reduction was greater for the most eccentric target (40°). The improvement in accuracy was observed without any significant change in the latency or duration of eye, head, or hand movements. In the head-free condition, the head was displaced in the direction of the target by an amount representing no more than 2/3 of the target amplitude. The improvement in accuracy was not influenced by the amplitude of the head movement. A model is proposed which shows how coordinated eye and head movements could improve the encoding of target position. Work supported by INSERM-Unité 94.

4.
Latencies of eye movements to peripheral targets are reduced when there is a short delay (typically 200 ms) between the offset of a central visual fixation point and the target onset. This has been termed the gap effect. In addition, some subjects, usually with practice, exhibit a separate population of very short latency saccades, called express saccades. Both these phenomena have been attributed to disengagement of visual attention when the fixation point is extinguished. A competing theory of the gap effect attributes it to disengagement of oculomotor fixation during the temporal gap. It is known that auditory targets are effective in eliciting saccadic eye movements, and also that covert attention operates in the auditory modality. If the gap effect and express saccades are due to disengagement of spatial attention, both should persist in the auditory modality. However, fixation of gaze is largely under visual control. If the gap effect results from disengagement of fixation, then at least a reduced effect should be seen in the auditory modality. Human subjects performed the gap task and a control task in the dark, using auditory fixation points and saccadic targets, on five successive days. Despite this practice, express saccades were not observed. There was a reliable gap effect, but the reduction in saccadic latency was only 17 ms, compared with 32 ms for the same subjects in the visual modality. This suggests that about half the gap effect is due to disengagement of visual fixation. The remainder was not due to non-specific warning effects and could be attributed to offset of the auditory fixation stimulus. Received: 1 March 1996 / Accepted: 11 July 1997
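The "about half" conclusion in this abstract follows from simple arithmetic on the two reported latency reductions: the component of the visual gap effect that disappears when fixation is auditory is the part attributed to disengagement of visual fixation.

```python
# Latency reductions reported in the abstract:
visual_gap_effect_ms = 32    # visual fixation point, visual modality
auditory_gap_effect_ms = 17  # auditory fixation point, auditory modality

# The component that vanishes when fixation is no longer visual is
# attributed to disengagement of visual fixation:
fixation_component_ms = visual_gap_effect_ms - auditory_gap_effect_ms
fraction = fixation_component_ms / visual_gap_effect_ms
print(f"{fixation_component_ms} ms, {fraction:.0%} of the visual gap effect")
# → 15 ms, 47% of the visual gap effect, i.e. roughly half
```

The remaining 17 ms is the residue the authors attribute to offset of the auditory fixation stimulus rather than to non-specific warning effects.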

5.
It is well known that the removal of a fixation point prior to the presentation of a peripheral target dramatically reduces saccadic reaction time (SRT). This effect has become known as the “gap effect”. The present study examined several detailed kinematic variables to determine whether the removal of the fixation point also affects the manner in which saccades are produced. The findings indicate that saccades that were initiated after the removal of the fixation point had higher average velocities and reached greater peak velocities, accelerations, and decelerations than did saccades produced in the presence of the fixation point. The results suggest that the removal of the fixation point may affect the force-time curves of saccades in addition to affecting the time needed to initiate the saccades. Received: 21 February 1997 / Accepted: 24 July 1997

6.
Summary The otolith contribution and otolith-visual interaction in eye and head stabilization were investigated in alert cats submitted to sinusoidal linear accelerations in three defined directions of space: up-down (Z motion), left-right (Y motion), and forward-back (X motion). Otolith stimulation alone was performed in total darkness with stimulus frequency varying from 0.05 to 1.39 Hz at a constant half peak-to-peak amplitude of 0.145 m (corresponding acceleration range 0.0014–1.13 g). Optokinetic stimuli were provided by sinusoidally moving a pseudorandom visual pattern in the Z and Y directions, using a similar half peak-to-peak amplitude (0.145 m, i.e., 16.1°) in the 0.025–1.39 Hz frequency domain (corresponding velocity range 2.5–141°/s). Congruent otolith-visual interaction (costimulation, CS) was produced by moving the cat in front of the earth-stationary visual pattern, while conflicting interaction was obtained by suppressing all visual motion cues during linear motion (visual stabilization method, VS, with cat and visual pattern moving together, in phase). Electromyographic (EMG) activity of antagonist neck extensor (splenius capitis) and flexor (longus capitis) muscles, as well as horizontal and vertical eye movements (electrooculography, EOG), were recorded under these different experimental conditions. Results showed that otolith-neck (ONR) and otolith-ocular (OOR) responses were produced during pure otolith stimulation with relatively weak stimuli (0.036 g) in all directions tested. Both EMG and EOG response gain slightly increased, while response phase lead decreased (with respect to stimulus velocity), as stimulus frequency increased in the range 0.25–1.39 Hz. The otolith contribution to compensatory eye and neck responses increased with stimulus frequency, leading to EMG and EOG responses that increasingly opposed the imposed displacement. The otolith system alone, however, remained unable to produce perfect compensatory responses, even at the highest frequency tested. In contrast, optokinetic stimuli in the Z and Y directions evoked consistent and compensatory eye movement responses (OKR) in a lower frequency range (0.025–0.25 Hz). Increasing stimulus frequency induced strong gain reduction and phase lag. Oculo-neck coupling, or eye-head synergy, was found during optokinetic stimulation in the Z and Y directions. It was characterized by bilateral activation of neck extensors and flexors during upward and downward eye movements, respectively, and by ipsilateral activation of neck muscles during horizontal eye movements. These visually induced neck responses seemed related to eye velocity signals. Dynamic properties of neck and eye responses were significantly improved when both inputs were combined (CS). Near-perfect compensatory eye movement and neck muscle responses, closely related to stimulus velocity, were observed over all frequencies tested, in the three directions defined. The present study indicates that eye-head coordination processes during linear motion are mainly dependent on the visual system at low frequencies (below 0.25 Hz), with close functional coupling of OKR and eye-head synergy. The otolith system basically works at higher stimulus frequencies and triggers synergistic OOR and ONR. However, both sensorimotor subsystems combine their dynamic properties to provide better eye-head coordination over an extended frequency range and, as evidenced under the VS condition, visual and otolith inputs also contribute to eye and neck responses at high and low frequency, respectively. These general laws on functional coupling of the eye and head stabilizing reflexes during linear motion are valid in the three directions tested, even though the relative weight of visual and otolith inputs may vary according to motion direction and/or kinematics.

7.
 Recent neurophysiological studies of the saccadic ocular motor system have lent support to the hypothesis that this system uses a motor error signal in retinotopic coordinates to direct saccades to both visual and auditory targets. With visual targets, the coordinates of the sensory and motor error signals will be identical unless the eyes move between the time of target presentation and the time of saccade onset. However, targets from other modalities must undergo different sensory-motor transformations to access the same motor error map. Because auditory targets are initially localized in head-centered coordinates, analyzing the metrics of saccades from different starting positions allows a determination of whether the coordinates of the motor signals are those of the sensory system. We studied six human subjects who made saccades to visual or auditory targets from a central fixation point or from one at 10° to the right or left of the midline of the head. Although the latencies of saccades to visual targets increased as stimulus eccentricity increased, the latencies of saccades to auditory targets decreased as stimulus eccentricity increased. The longest auditory latencies were for the smallest values of motor error (the difference between target position and fixation eye position) or desired saccade size, regardless of the position of the auditory target relative to the head or the amplitude of the executed saccade. Similarly, differences in initial eye position did not affect the accuracy of saccades of the same desired size. When saccadic error was plotted as a function of motor error, the curves obtained at the different fixation positions overlapped completely. Thus, saccadic programs in the central nervous system compensated for eye position regardless of the modality of the saccade target, supporting the hypothesis that the saccadic ocular motor system uses motor error signals to direct saccades to auditory targets. 
Received: 8 September 1995 / Accepted: 22 November 1996
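The motor-error computation this abstract describes can be sketched directly: an auditory target is localized in head-centered coordinates, so the desired saccade is the head-centered target angle minus the current eye-in-head position. The numbers below are illustrative, not data from the study.

```python
def motor_error(target_re_head_deg, eye_in_head_deg):
    """Desired saccade size: head-centered target direction minus the
    current eye-in-head position (all angles in degrees)."""
    return target_re_head_deg - eye_in_head_deg

# One head-centered auditory target, three fixation positions: the
# desired saccade differs even though the target has not moved.
for fixation in (-10, 0, 10):
    print(f"fixation {fixation:+}°: desired saccade {motor_error(20, fixation):+}°")
```

The study's central result is that saccadic accuracy (and, for auditory targets, latency) depended only on this motor-error quantity, not on initial eye position or the target's head-relative angle taken separately.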

8.
Summary In unrestrained animals of many species, electrical stimulation at sites in the superior colliculus evokes motions of the head and eyes. Collicular stimulation in monkeys whose heads are rigidly fixed is known to elicit a saccade whose characteristics depend on the site stimulated and are largely independent of electrical stimulation parameters and initial eye position. This study examined what role the colliculus plays in the coding of head movements. A secondary aim was to demonstrate the effects of such electrical stimulation parameters as pulse frequency and intensity. Rhesus monkeys were free to move their heads in the horizontal plane; head and eye movements were monitored. As in previous studies, eye movements evoked by collicular stimulation were of short latency, repeatable, had a definite electrical threshold, and did not depend on the initial position of the eye in the orbit. By contrast, evoked head movements were extremely variable in size and latency, had no definite electrical threshold, and did depend on initial eye position: when the eyes approached positions of extreme deviation, a head movement in the same direction became more likely. These results suggest that the superior colliculus does not directly code head movements in the monkey.

9.
We investigated the ability to adjust to nonlinear transformations that allow people to control external systems like machines and tools. Earlier research (Verwey and Heuer 2007) showed that in the presence of just terminal feedback participants develop an internal model of such transformations that operates at a relatively early processing level (before or at amplitude specification). In this study, we investigated the level of operation of the internal model after practicing with continuous visual feedback. Participants executed rapid aiming movements, for which a nonlinear relationship existed between the target amplitude seen on the computer screen and the required movement amplitude of the hand on a digitizing tablet. Participants adjusted to the external transformation by developing an internal model. Despite continuous feedback, explicit awareness of the transformation did not develop and the internal model still operated at the same early processing level as with terminal feedback. Thus with rapid aiming movements, the type of feedback may not matter for the locus of operation of the internal model.

10.
We used a 61-channel electrode array to investigate the spatiotemporal dynamics of electroencephalographic (EEG) activity related to behavioral transitions in rhythmic sensorimotor coordination. Subjects were instructed to maintain a 1:1 relationship between repeated right index finger flexion and a series of periodically delivered tones (metronome) in a syncopated (anti-phase) fashion. Systematic increases in stimulus presentation rate are known to induce a spontaneous switch in behavior from syncopation to synchronization (in-phase coordination). We show that this transition is accompanied by a large-scale reorganization of cortical activity manifested in the spatial distributions of EEG power at the coordination frequency. Significant decreases in power were observed at electrode locations over left central and anterior parietal areas, most likely reflecting reduced activation of left primary sensorimotor cortex. A second condition in which subjects were instructed to synchronize with the metronome controlled for the effects of movement frequency, since synchronization is known to remain stable across a wide range of frequencies. Different, smaller spatial differences were observed between topographic patterns associated with synchronization at low versus high stimulus rates. Our results demonstrate qualitative changes in the spatial dynamics of human brain electrical activity associated with a transition in the timing of sensorimotor coordination and suggest that maintenance of a more difficult anti-phase timing relation is associated with greater activation of primary sensorimotor areas. Received: 3 September 1998 / Accepted: 3 March 1999

11.
During linear accelerations, compensatory reflexes should continually occur in order to maintain objects of visual interest as stable images on the retina. In the present study, the three-dimensional organization of the vestibulo-ocular reflex in pigeons was quantitatively examined during linear accelerations produced by constant velocity off-vertical axis yaw rotations and translational motion in darkness. With off-vertical axis rotations, sinusoidally modulated eye-position and velocity responses were observed in all three components, with the vertical and torsional eye movements predominating the response. Peak torsional and vertical eye positions occurred when the head was oriented with the lateral visual axis of the right eye directed orthogonal to or aligned with the gravity vector, respectively. No steady-state horizontal nystagmus was obtained with any of the rotational velocities (8–58°/s) tested. During translational motion, delivered along or perpendicular to the lateral visual axis, vertical and torsional eye movements were elicited. No significant horizontal eye movements were observed during lateral translation at frequencies up to 3 Hz. These responses suggest that, in pigeons, all linear accelerations generate eye movements that are compensatory to the direction of actual or perceived tilt of the head relative to gravity. In contrast, no translational horizontal eye movements, which are known to be compensatory to lateral translational motion in primates, were observed under the present experimental conditions. Received: 29 January 1999 / Accepted: 14 June 1999

12.
This paper reports a striking misperception associated with involuntary saccadic eye movements: when subjects are instructed to look to the opposite side of a suddenly presented stimulus (antisaccade), they produce a certain number of involuntary prosaccades to the stimulus before they move their eyes to the other side by a corrective saccade of approximately twice the size. When asked to indicate at the end of each trial whether they believed that they made such a detour sequence of two saccades, one finds that, on average, 50±25% of these involuntary movements are not recognized. The average size and correction time of recognized prosaccades are larger than those of unrecognized prosaccades, while their mean reaction times are the same. The corrective saccades compensate for the size of both the recognized and unrecognized errors. When similar sequences of saccades are made voluntarily, the time spent at the stimulus side is 222 ms, compared with 95 ms for unrecognized and 145 ms for recognized errors. The distributions of the corresponding correction times differ in their multimodal composition. Whether voluntary and involuntary saccades and their corrections are associated with different effects on the updating of the perceptual spatial frame and attention allocation is discussed. Received: 16 October 1998 / Accepted: 2 December 1998

13.
Two different models have argued that neglect of contralateral stimuli following brain damage might be associated with either a compressed or an anisometric neural representation of space along the earth-horizontal axis. We tested these models by determining neglect patients’ perception of spatial distances in the horizontal plane. We found no evidence for any compression or expansion or for anisometry along the earth-horizontal axis. The findings argue against a distortion of subjective space along the horizontal axis in patients with neglect, which could account for their failure to orient towards and to explore the contralesional parts of space.

14.
This study examined (1) how changes in head position affect postural orientation variables during stance and (2) whether changes in head position affect the rapid postural response to linear translation of the support surface in the horizontal plane. Cats were trained to stand quietly on a moveable platform and to maintain five different head positions: center, left, right, up, and down. For each head position, stance was perturbed by translating the support surface linearly in 16 different directions in the horizontal plane. Postural equilibrium responses were quantified in terms of the ground reaction forces, kinematics, dynamics (net joint torques), body center of mass, and electromyographic (EMG) responses of selected limb and trunk muscles. A change in head position involved rotation of not only the neck but also the scapulae and anterior trunk. Tonic EMG levels were modulated in several forelimb and scapular muscles but not hindlimb muscles. Finally, large changes in head orientation in both horizontal and vertical planes did not hamper the ability of cats to maintain postural equilibrium during linear translation of the support surface. The trajectory of the body’s center of mass was the same, regardless of head position. The main change was observed in joint torques at the forelimbs evoked by the perturbation. Evoked EMG responses of forelimb and scapular muscles were modulated in terms of magnitude but not spatial tuning. Hindlimb responses were unchanged. Thus, the spatial and temporal pattern of the automatic postural response was unchanged and only amplitudes of evoked activity were modulated by head position. Received: 14 October 1997 / Accepted: 22 April 1998

15.
Binocular information has been shown to be important for the programming and control of reaching and grasping. But even without binocular vision, people are still able to reach out and pick up objects accurately – albeit less efficiently. As part of a continuing investigation into the role that monocular cues play in visuomotor control, we examined whether or not subjects could use retinal motion information, derived from movements of the head, to help program and control reaching and grasping movements when binocular vision is denied. Subjects reached out in the dark to an illuminated sphere presented at eye-level, under both monocular and binocular viewing conditions with their head either free to move or restrained. When subjects viewed the display monocularly, they showed fewer on-line corrections when they were allowed to move their head. No such difference in performance was seen when subjects were allowed a full binocular view. This study, combined with previous work with neurological patients, confirms that the visuomotor system “prefers” to use binocular vision but, when this information is not available, can fall back on other monocular depth cues, such as information produced by motion of the object (and the scene) on the retina, to help program and control manual prehension. Received: 22 December 1997 / Accepted: 4 February 1998

16.
We have studied the effects of pursuit eye movements on the functional magnetic resonance imaging (fMRI) responses in extrastriate visual areas during visual motion perception. Echoplanar imaging of 10–12 image planes through visual cortex was acquired in nine subjects while they viewed sequences of random-dot motion. Images obtained during stimulation periods were compared with baseline images, where subjects viewed a blank field. In a subsidiary experiment, responses to moving dots, viewed under conditions of fixation or pursuit, were compared with those evoked by static dots. Eye movements were recorded with MR-compatible electro-oculographic (EOG) electrodes. Our findings show an enhanced level of activation (as indexed by blood-oxygen level-dependent contrast) during pursuit compared with fixation in two extrastriate areas. The results support earlier findings on a motion-specific area in lateral occipitotemporal cortex (human V5). They also point to a further site of activation in a region approximately 12 mm dorsal of V5. The fMRI response in V5 during pursuit is significantly enhanced. This increased response may represent additional processing demands required for the control of eye movements. Received: 16 July 1997 / Accepted: 14 October 1997

17.
Two experiments were conducted to examine the interactions between the ocular and manual systems during rapid goal-directed movements. A point-light array was used to generate Müller-Lyer configuration target endpoints (in-Müller, out-Müller, 'X') for 30 cm aiming movements. Vision (of the limb and target), eye position, and the concurrence of eye movement were varied to manipulate the availability of retinal and extraretinal information. In addition, the Müller-Lyer endpoints were used to generate predictable biases in accuracy of these information channels. Although saccadic amplitude was consistently biased, manual bias in response to illusory targets only occurred in trials with concurrent eye movement and elimination of retinal target information on limb movement initiation; covariation of eye and hand displacement was also most prevalent in these trials. Contrary to previous findings, there was no temporal relation between eye and hand movements. In addition to any role in coordinated eye-hand action, the availability of vision of both the limb and target again had strong performance benefits for rapid manual aiming. Received: 14 August 1998 / Accepted: 8 March 1999

18.
 Prehension movements of the right hand were recorded in normal subjects using a computerized motion analyzer. The kinematics and the spatial paths of markers placed at the wrist and at the tips of the index finger and thumb were measured. Cylindrical objects of different diameters (3, 6, 9 cm) were used as targets. They were placed at six different positions in the workspace along a circle centered on subject’s head axis. The positions were spaced by 10° starting from 10° on the left of the sagittal axis, up to 40° on the right. Both the transport and the grasp components of prehension were influenced by the distance between the resting hand position and the object position. Movement time, time to peak velocity of the wrist and time to maximum grip aperture varied as a function of distance from the object, irrespective of its size. The variability of the spatial paths of wrist and fingers sharply decreased during the phase of the movement prior to contact with the object. This indicates that the final position of the thumb and the index finger is a controlled parameter of visuomotor transformation during prehension. The orientation of the opposition axis (defined as the line connecting the tips of the thumb and the index finger at the end of the movement) was measured. Several different frames of reference were used. When an object-centered frame was used, the orientation of the opposition axis was found to change by about 10° from one object position to the next. By contrast, when a body-centered frame was used (with the head or the forearm as a reference), this orientation was found to remain relatively invariant for different object positions and sizes. The degree of wrist flexion was little affected by the position of the object. This result, together with the invariant orientation of the opposition axis, shows that prehension movements aimed at cylindrical objects are organized so as to minimize changes in posture of the lower arm. 
Received: 2 July 1996 / Accepted: 5 October 1996

19.
When the eyes and arm are involved in a tracking task, the characteristics of each system differ from those observed when they act alone: smooth pursuit (SP) latency decreases from 130 ms in external target tracking tasks to 0 ms in self-moved target tracking tasks. Two models have been proposed to explain this coordination. The common command model suggests that the same command be addressed to the two sensorimotor systems, which are otherwise organized in parallel, while the coordination control model proposes that coordination is due to a mutual exchange of information between the motor systems. In both cases, the interaction should take into account the dynamic differences between the two systems. However, the nature of the adaptation depends on the model. During self-moved target tracking a perturbation was applied to the arm through the use of an electromagnetic brake. A randomized perturbation of the arm increased the arm motor reaction time without affecting SP. In contrast, a constant perturbation produced an adaptation of the coordination control characterized by a decrease in arm latency and an increase in SP latency relative to motor command. This brought the arm-to-SP latency back to 0 ms. These results support the coordination control model. Received: 24 November 1997 / Accepted: 1 July 1998

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号