Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
The integration of auditory and visual spatial information is an important prerequisite for accurate orientation in the environment. However, while visual spatial information is based on retinal coordinates, the auditory system receives information on sound location in relation to the head. Thus, any deviation of the eyes from a central position results in a divergence between the retinal visual and the head-centred auditory coordinates. It has been suggested that this divergence is compensated for by a neural coordinate transformation, using a signal of eye-in-head position. Using functional magnetic resonance imaging, we investigated which cortical areas of the human brain participate in such auditory-visual coordinate transformations. Sounds were produced with different interaural level differences, leading to left, right or central intracranial percepts, while subjects directed their gaze to visual targets presented to the left, to the right or straight ahead. When gaze was to the left or right, we found the primary visual cortex (V1/V2) activated in both hemispheres. The occipital activation did not occur with sound lateralization per se, but was found exclusively in combination with eccentric eye positions. This result suggests a relation of neural processing in the visual cortex and the transformation of auditory spatial coordinates responsible for maintaining the perceptual alignment of audition and vision with changes in gaze direction.
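The compensation described here can be illustrated with a minimal one-dimensional sketch (an illustration of the general principle, not the authors' model): a head-centred auditory azimuth is remapped into eye-centred (retinal) coordinates by subtracting the eye-in-head position.

```python
def head_to_eye_centered(sound_azimuth_head_deg: float,
                         eye_in_head_deg: float) -> float:
    """Remap a head-centred auditory azimuth into eye-centred
    (retinal) coordinates using an eye-in-head position signal.

    Positive angles are rightward. Illustrative 1-D sketch only.
    """
    return sound_azimuth_head_deg - eye_in_head_deg

# A sound straight ahead of the head, with gaze 20 deg right,
# lands 20 deg left of the fovea in eye-centred coordinates.
print(head_to_eye_centered(0.0, 20.0))   # -20.0
# With gaze matching the sound direction, the retinal error is zero.
print(head_to_eye_centered(10.0, 10.0))  # 0.0
```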

2.
A sensorimotor neuron's receptive field and its frame of reference are easily conflated within the natural variability of spatial behavior. Here, we capitalized on such natural variations in 3-D eye and head positions during head-unrestrained gaze shifts to visual targets in two monkeys: to determine whether intermediate/deep layer superior colliculus (SC) receptive fields code visual targets or gaze kinematics, within four different frames of reference. Visuomotor receptive fields were either characterized during gaze shifts to visual targets from a central fixation position (32 U) or were partially characterized from each of three initial fixation points (31 U). Natural variations of initial 3-D gaze and head orientation (including torsion) provided spatial separation between four different coordinate frame models (space, head, eye, fixed-vector relative to fixation), whereas natural saccade errors provided spatial separation between target and gaze positions. Using a new statistical method based on predictive sum-of-squares, we found that in our population of 63 neurons (1) receptive field fits to target positions were significantly better than fits to actual gaze shift locations and (2) eye-centered models gave significantly better fits than the head or space frame. An intermediate frames analysis confirmed that individual neuron fits were distributed in target-in-eye coordinates. Gaze position "gain" effects with the spatial tuning required for a 3-D reference frame transformation were significant in 23% (7/31) of neurons tested. We conclude that the SC primarily represents gaze targets relative to the eye but also carries early signatures of the 3-D sensorimotor transformation.

3.
Head movement latencies are greater than eye movement latencies because of dynamic biomechanical lags. Our EMG recordings show equal latencies in the controller signals to eye movement and to head movement. The increased dynamic lag of head movement leads to the classical gaze pattern. First a rapid saccadic eye movement directs gaze onto target; a slower head movement follows. Its accompanying vestibular ocular reflex exchanges head position for eye position; the eye stays on target throughout. At the end of the movement, the eye is returned to the primary position. Head movement latencies are readily modified by experimental conditions such as instructions to the subject, frequency and predictability of the target, amplitude of the movement, and development of fatigue. They are affected by neurological disease processes. Effects on head latency are mirrored by idiosyncratic or covarying changes in eye movement latency. Covariance of latency in head and eye movements is attributed to concomitant higher level neurological processing because it is sensitive to stimulus predictability and to neural fatigue. These experimental results may be readily demonstrated using a gaze latency diagram. They are also illustrated in a table derived from a branching model assignment of latencies according to a hypothetical neurological schema. The potential of these coordinated gaze latency studies for neurological diagnosis is illustrated in patients with homonymous hemianopsia.

4.
The auditory system represents sound-source directions initially in head-centered coordinates. To program eye-head gaze shifts to sounds, the orientation of eyes and head should be incorporated to specify the target relative to the eyes. Here we test (1) whether this transformation involves a stage in which sounds are represented in a world- or a head-centered reference frame, and (2) whether acoustic spatial updating occurs at a topographically organized motor level representing gaze shifts, or within the tonotopically organized auditory system. Human listeners generated head-unrestrained gaze shifts from a large range of initial eye and head positions toward brief broadband sound bursts, and to tones at different center frequencies, presented in the midsagittal plane. Tones were heard at a fixed illusory elevation, regardless of their actual location, that depended in an idiosyncratic way on initial head and eye position, as well as on the tone's frequency. Gaze shifts to broadband sounds were accurate, fully incorporating initial eye and head positions. The results support the hypothesis that the auditory system represents sounds in a supramodal reference frame, and that signals about eye and head orientation are incorporated at a tonotopic stage.

5.
Eye and head movements during vestibular stimulation in the alert rabbit
Rabbits passively oscillated in the horizontal plane with a free head tended to stabilize their head in space (re: earth-fixed surroundings) by moving the head on the trunk (neck angular deviation, NAD) opposite the passively imposed body rotation. The gain (NAD/body rotation) of head stabilization varied from 0.0 to 0.95 (nearly perfect stability) and was most commonly above 0.5. Horizontal eye movement (HEM) was inversely proportional to head-in-space stability, i.e. the gaze (sum of HEM, NAD, and body rotation) was stable in space (regardless of the gain of head stabilization). When the head was fixed to the rotating platform, attempted head movements (head torque) mimicked eye movements in both the slow and fast phases of vestibular nystagmus; tonic eye position was also accompanied by conjugate shifts in tonic head torque. Thus, while eye and head movements may at times be linked, the fact that the slow eye and head movements vary inversely during vestibular stimulation with a free head indicates that the linkage is not rigid. Absence of a textured stationary visual field consistently produced a response termed 'visual inattentiveness,' which was characterized by, among other things, a reduction of head and gaze stability in space. This behavioral response could also be reproduced in a subject allowed vision during prolonged vestibular stimulation in the absence of other environmental stimuli. It is suggested that rabbits optimize gaze stability (re: stationary surroundings), with the head contributing variably, as long as the animal is attending to its surroundings.
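The quantities in this abstract obey simple bookkeeping: gaze displacement is the sum HEM + NAD + body rotation, and NAD = -gain x body rotation. A minimal sketch (illustrative values, not the paper's data) shows the eye movement required to hold gaze stable in space for a given head-stabilization gain.

```python
def required_eye_movement(body_rotation_deg: float,
                          head_stabilization_gain: float) -> float:
    """Horizontal eye movement (HEM) needed to keep gaze fixed in space.

    NAD = -gain * body_rotation (the head counter-rotates on the trunk);
    setting gaze = HEM + NAD + body rotation = 0 gives
    HEM = -(NAD + body rotation). Degrees; illustrative sketch only.
    """
    nad = -head_stabilization_gain * body_rotation_deg
    return -(nad + body_rotation_deg)

# Half-compensating head (gain 0.5): the eye covers the other half.
print(required_eye_movement(10.0, 0.5))  # -5.0
# Head fixed to the platform (gain 0.0): the eye does all the work.
print(required_eye_movement(10.0, 0.0))  # -10.0
```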

6.
Combined eye and head displacements are routinely used to orient the visual axis rapidly (gaze). Humans can use a wide variety of head movement strategies. However, in the cat, comparatively limited eye motility forces a more routine and stereotyped use of head motion. Nevertheless, the same general principles of gaze control may be applicable to humans, rhesus monkeys and cats. The gaze control system can be modeled using a feedback system in which an internally created, instantaneous, gaze motor error signal--equivalent to the distance between the target and the gaze position at that time--is used to drive both eye and head motor circuits. The visual axis is moved until this error equals zero. Recent studies suggest that the superior colliculus of the cat provides brainstem eye and head motor circuits with the gaze motor error signal; such studies have led to speculation that information on ongoing gaze motion is fed back to the superior colliculus. It is still uncertain whether comparable collicular and brainstem neuronal mechanisms control gaze in the monkey.
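The feedback model described above can be sketched as a toy discrete-time loop (the gains and step count are arbitrary assumptions, not values from the literature): a shared gaze motor error signal drives separate eye and head circuits until the error reaches zero.

```python
def simulate_gaze_shift(target_deg: float,
                        eye_gain: float = 0.6,
                        head_gain: float = 0.2,
                        steps: int = 200):
    """Toy gaze feedback controller: one internally generated gaze
    motor error (target minus current gaze) drives separate eye and
    head circuits; the loop runs until the error is numerically zero.
    Gains are illustrative, chosen so the eye plant is the faster one.
    """
    eye = head = 0.0
    for _ in range(steps):
        gaze_error = target_deg - (eye + head)  # internal error signal
        if abs(gaze_error) < 1e-9:
            break
        eye += eye_gain * gaze_error    # fast eye circuit
        head += head_gain * gaze_error  # slower head circuit
    return eye, head

eye, head = simulate_gaze_shift(40.0)
print(round(eye + head, 6))  # 40.0 -- gaze lands on target
```

Because the same error drives both plants, the eye/head split follows the gain ratio (here 0.6:0.2, so the eye carries three quarters of the shift).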

7.
This work describes a technique for measuring human head movements in 3D space. Rotations and translations of the head are tracked using a light helmet fastened to a multi-joint mechanical structure. This apparatus has been designed to be used in a series of psycho-physiological experiments in the field of active vision, where position and orientation of the head need to be measured in real time with high accuracy, high reliability and minimal interference with subject movements. A geometric model is developed to recover the position information and its parameters are identified through a calibration procedure. The expected accuracy, derived on the basis of the pure geometric model and the sensor resolution, is compared with the real accuracy, obtained by performing repetitive measurements on a calibration fixture. The outcome of the comparison confirms the validity of the proposed solution which turns out to be effective in providing measurement of head position with an overall accuracy of 0.6 mm and sampling frequency above 1 kHz.

8.
The process of visuo-spatial updating is crucial in guiding human behaviour. While the parietal cortex has long been considered a principal candidate for performing spatial transformations, the exact underlying mechanisms are still unclear. In this study, we investigated, in a patient with a right occipito-parietal lesion, the ability to update the visual space during vestibularly guided saccades. To quantify the possible deficits in visual and vestibular memory processes, we studied the subject's performance in two separate memory tasks, visual (VIS) and vestibular (VEST). In the VIS task, a saccade was elicited from a central fixation point to the location of a visual memorized target, and in the VEST task, the saccade was elicited after whole-body rotation to the starting position, thus compensating for the rotation. Finally, in an updating task (UPD), the subject had to memorize the position of a visual target; then, after a whole-body rotation, he had to produce a saccade to the remembered visual target location in space. Our main finding was a significant hypometria in the final eye position of both VEST and UPD saccades induced during rotation to the left (contralesional) hemispace as compared to saccades induced after right (ipsilesional) rotation. Moreover, these deficits in vestibularly guided saccades correlated with deficits in the vestibulo-ocular time constant, reflecting disorders in the inertial vestibular integration path. We conclude that the occipito-parietal cortex in man can provide a first stage in visuo-spatial remapping by encoding inertial head position signals during gaze orientation.

9.
Direction of gaze (eye angle + head angle) has been shown to be important for representing space for action, implying a crucial role of vision for spatial updating. However, blind people have no access to vision yet are able to perform goal-directed actions successfully. Here, we investigated the role of visual experience for localizing and updating targets as a function of intervening gaze shifts in humans. People who differed in visual experience (late blind, congenitally blind, or sighted) were briefly presented with a proprioceptive reach target while facing it. Before they reached to the target's remembered location, they turned their head toward an eccentric direction that also induced corresponding eye movements in sighted and late blind individuals. We found that reaching errors varied systematically as a function of shift in gaze direction only in participants with early visual experience (sighted and late blind). In the late blind, this effect was solely present in people with moveable eyes but not in people with at least one glass eye. Our results suggest that the effect of gaze shifts on spatial updating develops on the basis of visual experience early in life and remains even after loss of vision as long as feedback from the eyes and head is available.

10.
Eye and head movements during tracking of a smoothly moving visual target were recorded in trained monkeys. The head movement clearly followed the target, although with considerable variability from cycle to cycle. The eye stayed relatively near the primary position and moved in an apparently irregular fashion; however, the sum of eye and head, or gaze, remained accurately on target despite the irregularity of the individual eye and head movements. When compared with tracking with head fixed, head free tracking was not measurably different in accuracy. Further experiments were performed which demonstrated a role for the vestibular system in coordinating eye and head during smooth pursuit. The results of these experiments can be best explained by postulating an internal smooth pursuit command driving both eye and head movements. In the case of the eye movement, this smooth pursuit command is combined with vestibular feedback from head movement before being forwarded to eye movement centers.
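The postulated scheme reduces to simple signal bookkeeping: the eye command is the internal pursuit command minus vestibular feedback of head velocity, so gaze velocity is independent of how the labor is divided between eye and head. A minimal sketch (illustrative, with an idealized lossless vestibular feedback assumption):

```python
def gaze_velocity(pursuit_cmd_dps: float, head_velocity_dps: float) -> float:
    """Eye command = internal pursuit command minus vestibular feedback
    of head velocity, so gaze velocity (eye + head) equals the pursuit
    command no matter how much the head contributes. Values in deg/s;
    idealized sketch with perfect (unity-gain) vestibular feedback.
    """
    eye_velocity = pursuit_cmd_dps - head_velocity_dps
    return eye_velocity + head_velocity_dps

# Whether the head does none, some, or all of the tracking,
# gaze velocity stays on the 10 deg/s target.
print([gaze_velocity(10.0, h) for h in (0.0, 4.0, 10.0)])  # [10.0, 10.0, 10.0]
```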

11.
We previously reported that visuomotor activity in the superior colliculus (SC) – a key midbrain structure for the generation of rapid eye movements – preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400–700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models by use of a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred, and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure.

12.
Single-unit activity was recorded with stereotaxically guided microelectrodes in the central thalamus of five alert cats. The animals were studied with the head either fixed or free to move in a horizontal plane. They were trained to make eye and/or head movements toward discrete visual targets presented on a screen. Unit activity was analyzed in relation to triggered and spontaneous gaze displacements with head fixed and free successively. Four groups of cells were found, all within the thalamic internal medullary lamina: 20 cells were active with eye but not head movements, 49 with head but not eye movements, 36 with head or eye movements, and 17 responding to visual stimuli in the absence of movement. The patterns of firing during gaze shifts are described. It is hypothesized that eye- or head-related units carry a signal representing gaze driving.

13.
The frontal eye field (FEF), in the prefrontal cortex, participates in the transformation of visual signals into saccade motor commands and in eye-head gaze control. The FEF is thought to show eye-fixed visual codes in head-restrained monkeys, but it is not known how it transforms these inputs into spatial codes for head-unrestrained gaze commands. Here, we tested if the FEF influences desired gaze commands within a simple eye-fixed frame, like the superior colliculus (SC), or in more complex egocentric frames like the supplementary eye fields (SEFs). We electrically stimulated 95 FEF sites in two head-unrestrained monkeys to evoke 3D eye-head gaze shifts and then mathematically rotated these trajectories into various reference frames. In theory, each stimulation site should specify a specific spatial goal when the evoked gaze shifts are plotted in the appropriate frame. We found that these motor output frames varied site by site, mainly within the eye-to-head frame continuum. Thus, consistent with the intermediate placement of the FEF within the high-level circuits for gaze control, its stimulation-evoked output showed an intermediate trend between the multiple reference frame codes observed in SEF-evoked gaze shifts and the simpler eye-fixed reference frame observed in SC-evoked movements. These results suggest that, although the SC, FEF and SEF carry eye-fixed information at the level of their unit response fields, this information is transformed differently in their output projections to the eye and head controllers.

14.
Hietanen JK. Neuroreport. 1999;10(16):3443-3447.
The effects of another person's gaze direction and head orientation on the observer's attentional processes were investigated. Subjects responded to visual, laterally presented reaction signals. The presentation of the reaction signal was preceded by a facial cue stimulus signaling a direction which was either congruent, neutral, or incongruent with the laterality of the reaction signal. A head (front and profile views) with an averted gaze affected the response times in comparison to the front view of a face with a straight gaze. In contrast, a profile view of a head with a compatible gaze direction did not result in such an effect. The results indicate that visual information from the other individual's gaze direction and head orientation is integrated, and the integrated information is fed to the brain areas subserving visual attention orienting.

15.
Equipment used to measure eye movements must generally be calibrated to the individual subject. Some methods of measuring eye position are quite non-linear, requiring that the system's output be linearized. We describe an approach of behavioral calibration and linearization that utilizes tracking of moving targets. The key aspect of this approach is to use tracking of horizontally moving targets to calibrate vertical eye position and vice versa. This method is especially convenient for use with cats, whose fixation of eccentric stationary targets is often unreliable. It allows one to obtain an accurate calibration without having to judge whether or not tracking saccades reliably hit the target.
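The key idea can be sketched as follows (a hypothetical minimal version with a straight-line sensor model fitted by least squares; the paper's method also handles non-linear outputs): while the subject tracks a horizontally moving target at a known elevation, the true vertical eye position is known throughout, so raw vertical-channel samples can be paired with it and a gain/offset fitted.

```python
def fit_linear_calibration(raw, true):
    """Least-squares gain/offset mapping raw sensor output to degrees.

    Each sample pairs a raw vertical-channel reading with the known
    elevation of a horizontally moving target the eye was tracking.
    A straight-line model is an illustrative simplification; a higher-
    order polynomial would be fitted the same way for linearization.
    """
    n = len(raw)
    mean_r, mean_t = sum(raw) / n, sum(true) / n
    gain = (sum((r - mean_r) * (t - mean_t) for r, t in zip(raw, true))
            / sum((r - mean_r) ** 2 for r in raw))
    offset = mean_t - gain * mean_r
    return gain, offset

# Hypothetical raw vertical output collected during horizontal sweeps
# at three known elevations (-10, 0, +10 deg).
raw = [120.0, 121.0, 200.0, 199.0, 280.0, 281.0]
true = [-10.0, -10.0, 0.0, 0.0, 10.0, 10.0]
gain, offset = fit_linear_calibration(raw, true)
print(round(gain * 200.5 + offset, 2))  # a mid-range raw value maps near 0 deg
```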

16.
Primates orient visual gaze using different eye-head coordination strategies. To test how these strategies are formed, we trained a macaque monkey to perform 'head-only' gaze shifts looking through a 10 degrees head-fixed aperture. When we suddenly relocated this aperture 15 degrees downward, the animal could orient initial eye position toward the new aperture, but during large gaze saccades the eye was mistakenly driven back to the original (now occluded) aperture. More importantly, this was accompanied by an opposite head movement, such that gaze (although blocked) pointed correctly. We conclude that the gaze control system acquires new strategies through separate but interdependent eye-head controllers, designed primarily to ensure that gaze is placed in the correct direction.

17.
We investigate the role that the superior colliculus (SC) and the cerebellum might play in generating gaze shifts. The discharge of cells in the intermediate layers of the SC is tightly linked to the occurrence of saccades. Many studies have demonstrated that the cerebellum is involved in both eye and head movements. When the head is unrestrained, large amplitude gaze shifts are composed of coordinated eye and head movements. In this study, we propose that the gaze saccade system is controlled by a feedback loop between the SC and the cerebellum. The SC only encodes retinal coordinates and controls the eye displacement (to move the fovea to the target), while the cerebellum deals with the gaze programming and controls the head displacement. When a target appears in space, the buildup cells within the SC decode the target signal in the retina before the saccade onset, and send the gaze displacement signal to the cerebellum. The cells in the cerebellar vermis encode the initial position of the eye in the orbit. The gaze displacement is decomposed into the head amplitude and the eye amplitude within the cerebellum. There are two output signals from the cerebellum. One signal controls the head movement. The other is projected back to the SC, and forms a component of the saccade vector to control the eye movement. The sum of the vectors provided by the cerebellum and the vector provided by the burst cells in the SC indicates the direction and the amplitude of the desired movement of the eye during the saccade. We propose a cerebellar model to predict the displacements of the eye and head under the condition that the position of the target signal in the retina and the initial position of the eye in the orbit are known. The results from the model are close to those observed physiologically. We conclude that before gaze shift onset, the cerebellum may play an important role in decomposing the gaze displacement into an eye amplitude and head amplitude signal.
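The proposed decomposition can be sketched in one dimension (the 25-degree oculomotor range and the greedy eye-first rule are illustrative assumptions, not the model's fitted parameters): the eye takes as much of the gaze displacement as its orbital range allows from its initial in-orbit position, and the head covers the remainder.

```python
def decompose_gaze(gaze_deg: float, eye0_deg: float,
                   orbit_limit_deg: float = 25.0):
    """Split a desired gaze displacement into eye and head amplitudes.

    The eye moves as far as the oculomotor range allows from its
    initial in-orbit position eye0_deg; the head makes up the rest.
    Positive = rightward. 1-D illustrative sketch with an assumed
    +/- 25 deg orbital range.
    """
    if gaze_deg >= 0:
        eye = min(gaze_deg, orbit_limit_deg - eye0_deg)
    else:
        eye = max(gaze_deg, -orbit_limit_deg - eye0_deg)
    head = gaze_deg - eye
    return eye, head

# 60 deg rightward gaze, eyes starting 5 deg right in the orbit:
print(decompose_gaze(60.0, 5.0))  # (20.0, 40.0)
# A small shift stays within the oculomotor range: no head amplitude.
print(decompose_gaze(10.0, 0.0))  # (10.0, 0.0)
```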

18.
The Mona Lisa effect describes the phenomenon when the eyes of a portrait appear to look at the observer regardless of the observer's position. Recently, the metaphor of a cone of gaze has been proposed to describe the range of gaze directions within which a person feels looked at. The width of the gaze cone is about five degrees of visual angle to either side of a given gaze direction. We used functional magnetic resonance imaging to investigate how the brain regions involved in gaze direction discrimination would differ between centered and decentered presentation positions of a portrait exhibiting eye contact. Subjects observed a given portrait's eyes. By presenting portraits with varying gaze directions—eye contact (0°), gaze at the edge of the gaze cone (5°), and clearly averted gaze (10°)—we revealed that brain response to gaze at the edge of the gaze cone was similar to that produced by eye contact and different from that produced by averted gaze. Right fusiform gyrus and right superior temporal sulcus showed stronger activation when the gaze was averted as compared to eye contact. Gaze sensitive areas, however, were not affected by the portrait's presentation location. In sum, although the brain clearly distinguishes averted from centered gaze, a substantial change of vantage point does not alter neural activity, thus providing a possible explanation why the feeling of eye contact is upheld even in decentered stimulus positions. Hum Brain Mapp 36:619–632, 2015. © 2014 Wiley Periodicals, Inc.

19.
Internal senses of the position of the eye in the orbit may influence the cognitive processes that take into account gaze and limb positioning for movement or guiding actions. Neuroimaging studies have revealed eye position-dependent activity in the extrastriate visual, parietal, and frontal areas, but, at the earliest vision stage, the role of the primary visual area (V1) in these processes remains unclear. Functional MRI (fMRI) was used to investigate the effect of eye position on V1 activity evoked by a quarter-field stimulation using a visual checkerboard. We showed that the amplitude of V1 activity was modulated by the position of the eye, the activity being maximal when both the eye and head positions were aligned. Previous studies gave impetus to the emerging view that V1 is a cortical area in which contextual influences take place. The present study suggests that eye position may affect an early stage of visual processing.

20.
We investigated the effect of eye-in-head and head-on-trunk direction on heading discrimination. Participants were passively translated in darkness along linear trajectories in the horizontal plane deviating 2° or 5° to the right or left of straight-ahead as defined by the subject's trunk. Participants had to report whether the experienced translation was to the right or left of the trunk straight-ahead. In a first set of experiments, the head was centered on the trunk and fixation lights directed the eyes 16° either left or right. Although eye position was not correlated with the direction of translation, rightward reports were more frequent when looking right than when looking left, a shift of the point of subjective equivalence in the direction opposite to eye direction (two of the 38 participants showed the opposite effect). In a second experiment, subjects had to judge the same trunk-referenced trajectories with head-on-trunk deviated 16° left. Comparison with the performance in the head-centered paradigms showed an effect of the head in the same direction as the effect of eye eccentricity. These results can be qualitatively described by biases reflecting statistical regularities present in human behaviors such as the alignment of gaze and path. Given the known effects of gaze on auditory localization and perception of straight-ahead, we also expect contributions from a general influence of gaze on the head-to-trunk reference frame transformations needed to bring motion-related information from the head-centered otoliths into a trunk-referenced representation.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号