Similar Literature
 20 similar documents found (search time: 46 ms)
1.
Eye patching has revealed shorter saccadic latencies or enhanced attention effects when orienting toward visual stimuli presented in the temporal versus nasal hemifield in humans. Such behavioral advantages have been tentatively proposed to reflect possible temporal-nasal differences in the retinotectal pathway to the superior colliculus, rather than in the retinogeniculate pathway or visual cortex. However, this has not been directly tested with physiological measures in humans. Here, we examined responses of the human superior colliculus (SC) to contralateral visual field stimulation, using high-spatial-resolution fMRI, while manipulating which hemifield was stimulated and, orthogonally, which eye was patched. The SC responded more strongly to visual stimulation when eye patching made this stimulation temporal rather than nasal. In contrast, the lateral geniculate nucleus (LGN) and retinotopic cortical areas V1-V3 did not show any temporal-nasal differences and differed from the SC in this respect. These results provide the first direct physiological demonstration in humans that the SC shows temporal-nasal differences that the LGN and early visual cortex apparently do not. This may represent a temporal hemifield bias in the strength of the retinotectal pathway, leading to a preference for the contralateral hemifield in the contralateral eye.

2.
Saccadic eye movements pose many challenges for stable and continuous vision, such as how information from successive fixations is amalgamated into a single percept. Here we show in humans that motion signals are temporally integrated across separate fixations, but only when the motion stimulus falls either on the same retinal region (retinotopic integration) or on different retinal positions that correspond to the same external spatial coordinates (spatiotopic integration). We used individual motion signals that were below detection threshold, implicating spatiotopic trans-saccadic integration in relatively early stages of visual processing such as the middle temporal area (MT) or V5 of visual cortex. The trans-saccadic buildup of important congruent visual information while irrelevant non-congruent information fades could provide a simple and robust strategy to stabilize perception during eye movements.
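As an illustrative aside (not part of the abstract), the retinotopic/spatiotopic distinction reduces to a simple coordinate transform: a stimulus's retinal position is its screen position minus the current eye position. The Python sketch below shows that bookkeeping for two fixations; the names, positions, and tolerance are hypothetical.

```python
# Minimal sketch of the coordinate logic behind "retinotopic" vs "spatiotopic"
# trans-saccadic integration. Assumption: 1-D positions in degrees.

def retinal_position(screen_pos, eye_pos):
    """Retinal coordinate of a stimulus = screen coordinate minus eye position."""
    return screen_pos - eye_pos

def integration_type(stim1, eye1, stim2, eye2, tol=0.5):
    """Classify how two sub-threshold motion signals, one per fixation, could sum."""
    same_retinal = abs(retinal_position(stim1, eye1) - retinal_position(stim2, eye2)) < tol
    same_spatial = abs(stim1 - stim2) < tol
    if same_retinal and same_spatial:
        return "both"          # no effective eye movement between presentations
    if same_retinal:
        return "retinotopic"   # same patch of retina, different screen location
    if same_spatial:
        return "spatiotopic"   # same screen location, different patch of retina
    return "none"              # signals would not be expected to integrate

# Example: a 10 deg rightward saccade between fixations, stimulus fixed on screen.
print(integration_type(stim1=5.0, eye1=0.0, stim2=5.0, eye2=10.0))  # -> "spatiotopic"
```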

3.
The otolith contribution and otolith-visual interaction in eye and head stabilization were investigated in alert cats submitted to sinusoidal linear accelerations in three defined directions of space: up-down (Z motion), left-right (Y motion), and forward-back (X motion). Otolith stimulation alone was performed in total darkness with stimulus frequency varying from 0.05 to 1.39 Hz at a constant half peak-to-peak amplitude of 0.145 m (corresponding acceleration range 0.0014–1.13 g). Optokinetic stimuli were provided by sinusoidally moving a pseudorandom visual pattern in the Z and Y directions, using a similar half peak-to-peak amplitude (0.145 m, i.e., 16.1°) in the 0.025–1.39 Hz frequency domain (corresponding velocity range 2.5°–141°/s). Congruent otolith-visual interaction (costimulation, CS) was produced by moving the cat in front of the earth-stationary visual pattern, while conflicting interaction was obtained by suppressing all visual motion cues during linear motion (visual stabilization method, VS, with cat and visual pattern moving together, in phase). Electromyographic (EMG) activity of antagonist neck extensor (splenius capitis) and flexor (longus capitis) muscles as well as horizontal and vertical eye movements (electrooculography, EOG) were recorded in these different experimental conditions. Results showed that otolith-neck (ONR) and otolith-ocular (OOR) responses were produced during pure otolith stimulation with relatively weak stimuli (0.036 g) in all directions tested. Both EMG and EOG response gain slightly increased, while response phase lead decreased (with respect to stimulus velocity), as stimulus frequency increased in the range 0.25–1.39 Hz. The otolith contribution to compensatory eye and neck responses increased with stimulus frequency, leading to EMG and EOG responses that increasingly opposed the imposed displacement. However, the otolith system alone remained unable to produce perfect compensatory responses, even at the highest frequency tested. In contrast, optokinetic stimuli in the Z and Y directions evoked consistent and compensatory eye movement responses (OKR) in a lower frequency range (0.025–0.25 Hz). Increasing stimulus frequency induced strong gain reduction and phase lag. Oculo-neck coupling, or eye-head synergy, was found during optokinetic stimulation in the Z and Y directions. It was characterized by bilateral activation of neck extensors and flexors during upward and downward eye movements, respectively, and by ipsilateral activation of neck muscles during horizontal eye movements. These visually induced neck responses seemed related to eye velocity signals. Dynamic properties of neck and eye responses were significantly improved when both inputs were combined (CS). Near-perfect compensatory eye movement and neck muscle responses, closely related to stimulus velocity, were observed over all frequencies tested in the three directions defined. The present study indicates that eye-head coordination processes during linear motion are mainly dependent on the visual system at low frequencies (below 0.25 Hz), with close functional coupling of OKR and eye-head synergy. The otolith system basically works at higher stimulus frequencies and triggers synergistic OOR and ONR. However, both sensorimotor subsystems combine their dynamic properties to provide better eye-head coordination in an extended frequency range and, as evidenced under the VS condition, visual and otolith inputs also contribute to eye and neck responses at high and low frequency, respectively. These general laws on functional coupling of the eye and head stabilizing reflexes during linear motion are valid in the three directions tested, even though the relative weight of visual and otolith inputs may vary according to motion direction and/or kinematics.
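As a hedged illustration of how gain and phase figures like those above are commonly obtained (this is not the authors' analysis code), the sketch below extracts the complex Fourier coefficient of stimulus and response at the stimulation frequency; the function names and the synthetic example are mine.

```python
# Estimate response gain and phase relative to a sinusoidal stimulus at one
# frequency, via the Fourier coefficient at that frequency.
import numpy as np

def gain_and_phase(response, stimulus, fs, freq):
    """Return (gain, phase_deg) of `response` relative to `stimulus`,
    both sampled at `fs` Hz, for a sinusoidal stimulus of frequency `freq` Hz."""
    t = np.arange(len(response)) / fs
    basis = np.exp(-2j * np.pi * freq * t)
    r = np.mean(response * basis)        # complex amplitude of response at `freq`
    s = np.mean(stimulus * basis)        # complex amplitude of stimulus at `freq`
    transfer = r / s                     # complex gain: response / stimulus
    gain = np.abs(transfer)
    phase_deg = np.degrees(np.angle(transfer))   # > 0 means response leads stimulus
    return gain, phase_deg

# Example with synthetic data: 0.25 Hz stimulus, response at gain 0.6, 20 deg lead.
fs, freq = 500.0, 0.25
t = np.arange(0, 20, 1 / fs)
stim = np.sin(2 * np.pi * freq * t)
resp = 0.6 * np.sin(2 * np.pi * freq * t + np.radians(20)) + 0.05 * np.random.randn(t.size)
print(gain_and_phase(resp, stim, fs, freq))   # approximately (0.6, 20.0)
```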

4.
We have studied the effects of pursuit eye movements on the functional magnetic resonance imaging (fMRI) responses in extrastriate visual areas during visual motion perception. Echoplanar imaging of 10–12 image planes through visual cortex was acquired in nine subjects while they viewed sequences of random-dot motion. Images obtained during stimulation periods were compared with baseline images, where subjects viewed a blank field. In a subsidiary experiment, responses to moving dots, viewed under conditions of fixation or pursuit, were compared with those evoked by static dots. Eye movements were recorded with MR-compatible electro-oculographic (EOG) electrodes. Our findings show an enhanced level of activation (as indexed by blood-oxygen-level-dependent contrast) during pursuit compared with fixation in two extrastriate areas. The results support earlier findings on a motion-specific area in lateral occipitotemporal cortex (human V5). They also point to a further site of activation in a region approximately 12 mm dorsal to V5. The fMRI response in V5 during pursuit is significantly enhanced; this increased response may represent additional processing demands required for the control of eye movements.

5.
1. We recorded from single neurons in awake, trained rhesus monkeys in a lighted environment and compared responses to stimulus movement during periods of fixation with those to motion caused by saccadic or pursuit eye movements. Neurons in the inferior pulvinar (PI), lateral pulvinar (PL), and superior colliculus were tested. 2. Cells in PI and PL respond to stimulus movement over a wide range of speeds. Some of these cells do not respond to comparable stimulus motion, or discharge only weakly, when it is generated by saccadic or pursuit eye movements. Other neurons respond equivalently to both types of motion. Cells in the superficial layers of the superior colliculus have similar properties to those in PI and PL. 3. When tested in the dark to reduce visual stimulation from the background, cells in PI and PL still do not respond to motion generated by eye movements. Some of these cells have a suppression of activity after saccadic eye movements made in total darkness. These data suggest that an extraretinal signal suppresses responses to visual stimuli during eye movements. 4. The suppression of responses to stimuli during eye movements is not an absolute effect. Images brighter than 2.0 log units above background illumination evoke responses from cells in PI and PL. The suppression appears stronger in the superior colliculus than in PI and PL. 5. These experiments demonstrate that many cells in PI and PL have a suppression of their responses to stimuli that cross their receptive fields during eye movements. These cells are probably suppressed by an extraretinal signal. Comparable effects are present in the superficial layers of the superior colliculus. These properties in PI and PL may reflect the function of the ascending tectopulvinar system.

6.
We studied responses of pulvinar neurons in awake cats that were allowed to execute spontaneous eye movements. Extracellular cell activity during saccades, saccade-like image shifts, and various stationary visual stimuli was recorded together with the animals' eye positions. All neurons analyzed had receptive fields that covered most of the central 80x80 degrees of the animals' visual field and responded only to large (>20 degrees) visual stimuli. According to their response properties, recorded neurons were divided into three populations. The first group, termed "S neurons" (16%), responded when the animals performed saccades but were unresponsive to any of the visual stimuli tested. These neurons do not seem to receive a visual input that is strong enough to drive them. The second group, termed "V neurons" (51%), responded to various visual stimuli including saccade-like image motion when the eyes were stationary, but not when the animals executed saccades. V neurons therefore distinguish retinal image movements that are generated externally from internally generated image motion. Finally, "SV neurons" (31%) responded when the animals made saccades as well as to saccade-like image motion or to stationary stimuli. Although these neurons do not distinguish self-induced retinal image motion from motion generated by external stimulus movements, they must receive non-retinal motion-related input, because responses elicited by saccades had shorter latencies than responses to saccade-like stimulus movements. Only SV neurons resemble response properties of pretectal neurons that project to the pulvinar and that comprise the major subcortical visual input. The functional significance of pulvinar neuronal populations for visual and visuomotor information processing is discussed.

7.
Since normal human subjects can perform smooth-pursuit eye movements only in the presence of a moving target, the occurrence of these eye movements represents an ideal behavioural probe to monitor the successful processing of visual motion. It has been shown previously that subjects can execute smooth-pursuit eye movements to targets defined by luminance and colour, the first-order stimulus attributes, as well as to targets defined by derived, second-order stimulus attributes such as contrast, flicker or motion. In contrast to these earlier experiments focusing on steady-state pursuit, the present study addressed the course of pre-saccadic pursuit initiation (less than 100 ms), as this early time period is thought to represent open-loop pursuit, i.e. the eye movements are exclusively driven by visual inputs preceding the onset of the eye movement itself. Eye movements of five human subjects tracking first- and second-order motion stimuli were measured. Analysis of the obtained eye traces revealed that smooth-pursuit eye movements could be initiated to first-order as well as second-order motion stimuli, even before the execution of the first saccade. In contrast to steady-state pursuit, the initiation of pursuit was not exclusively determined by the movement of the target, but rather by an interaction between dominant first-order and less-weighted second-order motion components. Based on our results, two conclusions may be drawn. First, and specific to the initiation of smooth-pursuit eye movements, we present evidence supporting the notion that initiation of pursuit reflects integration of all available visual motion information. Second, and more generally, our results further support the hypothesis that the visual system consists of more than one mechanism for the extraction of first-order and second-order motion.

8.
Functional imaging of the human lateral geniculate nucleus and pulvinar
In the human brain, little is known about the functional anatomy and response properties of subcortical nuclei containing visual maps such as the lateral geniculate nucleus (LGN) and the pulvinar. Using functional magnetic resonance imaging (fMRI) at 3 tesla (T), collective responses of neural populations in the LGN were measured as a function of stimulus contrast and flicker reversal rate and compared with those obtained in visual cortex. Flickering checkerboard stimuli presented in alternation to the right and left hemifields reliably activated the LGN. The peak of the LGN activation was found to be on average within ±2 mm of the anatomical location of the LGN, as identified on high-resolution structural images. In all visual areas except the middle temporal area (MT), fMRI responses increased monotonically with stimulus contrast. In the LGN, the dynamic response range of the contrast function was larger and contrast gain was lower than in the cortex. Contrast sensitivity was lowest in the LGN and V1 and increased gradually in extrastriate cortex. In area MT, responses were saturated at 4% contrast. Response modulation by changes in flicker rate was similar in the LGN and V1 and occurred mainly in the frequency range between 0.5 and 7.5 Hz; in contrast, in extrastriate areas V4, V3A, and MT, responses were modulated mainly in the frequency range between 7.5 and 20 Hz. In the human pulvinar, no activations were obtained with the experimental designs used to probe response properties of the LGN. However, regions in the mediodorsal right and left pulvinar were found to be consistently activated by bilaterally presented flickering checkerboard stimuli, when subjects attended to the stimuli. Taken together, our results demonstrate that fMRI at 3 T can be used effectively to study thalamocortical circuits in the human brain.
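The terms "contrast gain" and "dynamic range" above are often quantified by fitting a hyperbolic-ratio (Naka-Rushton) function to response-versus-contrast data; the paper does not specify its fitting procedure, so the sketch below is only a generic illustration, and the data points are fabricated.

```python
# Fit a Naka-Rushton contrast-response function to (made-up) fMRI data.
import numpy as np
from scipy.optimize import curve_fit

def naka_rushton(c, r_max, c50, n):
    """Response to contrast c (0-1): saturates at r_max, half-saturates at c50."""
    return r_max * c**n / (c**n + c50**n)

contrast = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.00])
bold     = np.array([0.10, 0.22, 0.45, 0.80, 1.30, 1.80, 2.00])  # % signal change (fabricated)

params, _ = curve_fit(naka_rushton, contrast, bold, p0=[2.0, 0.3, 1.5],
                      bounds=([0, 0.01, 0.5], [10, 1.0, 4.0]))
r_max, c50, n = params
print(f"r_max={r_max:.2f}  c50={c50:.2f}  n={n:.2f}")
# In this parameterization, a high c50 (as reported for the LGN relative to cortex)
# corresponds to a wider dynamic range and lower contrast gain, whereas early
# saturation (as reported for MT) shows up as a very low c50.
```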

9.
Normal subjects were exposed to 0.26 g linear acceleration steps along the inter-aural axis whilst they fixated an earth-stationary target at 110 cm distance. The stimulus evoked slow-phase eye movements at a mean latency of 34 ms, which attained the relative target velocity in 113 ms. In contrast, visual following of identical relative target motion with the head fixed had significantly longer latencies and a longer time to match target velocity. The short-latency responses to linear acceleration were absent in an alabyrinthine subject. It is concluded that the otolith-ocular reflex is responsible for the short-latency responses to linear head movement and functions to stabilise vision during sudden head movement before visually guided compensatory eye movements take effect.

10.
Perceived motion may be a stimulus for anticipatory slow eye movements. To test this possibility, the production of anticipatory slow eye movements in humans was studied using apparent-motion stimuli. Short-range apparent motion was produced with random-dot stimuli, and the anticipatory slow eye movements were isolated from the smooth pursuit responses by occasionally including trials in which the random-dot stimulus did not appear. Long-range apparent motion was produced with subjective-contour stimuli. Both short-range and long-range apparent motion were found to be effective stimuli for anticipatory slow eye movements. The prominence of perceived motion was altered by changing the spatiotemporal displacement intervals in the short-range apparent-motion stimuli. Changing the subjective contours also changed the motion percepts of the long-range apparent-motion stimuli. With both stimuli, the peak anticipatory slow eye velocities that were achieved decreased as the prominence of the motion percepts decreased, while the time course of the anticipatory responses was similar under the different conditions. These findings indicate that the expectation of perceived motion is necessary for anticipatory slow eye movements.

11.
Brief movements of a large-field visual stimulus elicit short-latency tracking eye movements termed "ocular following responses" (OFRs). To address the question of whether OFRs can be elicited by purely binocular motion signals in the absence of monocular motion cues, we measured OFRs from monkeys using dichoptic motion stimuli, the monocular inputs of which were flickering gratings in spatiotemporal quadrature, and compared them with OFRs to standard motion stimuli including monocular motion cues. Dichoptic motion did elicit OFRs, although with longer latencies and smaller amplitudes. In contrast to these findings, we observed that other types of motion stimuli categorized as non-first-order motion, which is undetectable by detectors for standard luminance-defined (first-order) motion, did not elicit OFRs, although they did evoke the sensation of motion. These results indicate that OFRs can be driven solely by cortical visual motion processing after binocular integration, which is distinct from the process incorporating non-first-order motion for elaborated motion perception. To explore the nature of dichoptic motion processing in terms of interaction with monocular motion processing, we further recorded OFRs from both humans and monkeys using our novel motion stimuli, the monocular and dichoptic motion signals of which move in opposite directions with a variable motion intensity ratio. We found that monocular and dichoptic motion signals are processed in parallel to elicit OFRs, rather than suppressing each other in a winner-take-all fashion, and the results were consistent across the species.
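The "spatiotemporal quadrature" construction has a compact trigonometric reading, sketched below in my own notation (not the paper's): each eye alone receives a counterphase flickering grating with no net direction signal, yet the binocular sum is a single drifting grating, so any directional response must arise after binocular combination.

```latex
% One eye sees L, the other sees R, offset by a quarter cycle in space and time:
L(x,t) = \sin(kx)\,\cos(\omega t), \qquad R(x,t) = \cos(kx)\,\sin(\omega t)
% Each flickering grating alone splits into two opposite drifting components of
% equal amplitude, hence no net monocular motion signal:
\sin(kx)\cos(\omega t) = \tfrac{1}{2}\left[\sin(kx+\omega t) + \sin(kx-\omega t)\right]
% The binocular sum, however, is a single grating drifting in one direction:
L(x,t) + R(x,t) = \sin(kx+\omega t)
```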

12.
In natural behavioral situations, saccadic eye movements not only introduce new stimuli into V1 receptive fields, they also cause changes in the background. We recorded in awake macaque V1 using a fixation paradigm and compared evoked activity to small stimuli when the background was either static or changing as with a saccade. When a stimulus was shown on a static background, as in most previous experiments, the initial response was orientation selective and contrast was inversely correlated with response latency. When a stimulus was introduced with a background change, V1 neurons showed a qualitatively different temporal response pattern in which information about stimulus orientation and contrast was delayed. The delay in the representation of visual information was found with three different types of background change: luminance increment, luminance decrement, and a pattern change with fixed mean luminance. We also found that with a background change, V1 off responses were suppressed and had a shorter time course compared with the static-background situation. Our results suggest that the distribution of temporal changes across the visual field plays a fundamental role in determining V1 responses. In the static-background condition, temporal change in the visual input occurs only in a small portion of the visual field. In the changing-background condition, and presumably in natural vision, temporal changes are widely distributed. Thus a delayed representation of visual information may be more representative of natural visual situations.

13.
We have investigated how visual motion signals are integrated for smooth pursuit eye movements by measuring the initiation of pursuit in monkeys for pairs of moving stimuli of the same or differing luminance. The initiation of pursuit for pairs of stimuli of the same luminance could be accounted for as a vector average of the responses to the two stimuli singly. When stimuli comprised two superimposed patches of moving dot textures, the brighter stimulus suppressed the inputs from the dimmer stimulus, so that the initiation of pursuit became winner-take-all when the luminance ratio of the two stimuli was 8 or greater. The dominance of the brighter stimulus could not be attributed to either the latency difference or the ratio of the eye accelerations for the bright and dim stimuli presented singly. When stimuli comprised either spot targets or two patches of dots moving across separate locations in the visual field, the brighter stimulus had a much weaker suppressive influence; the initiation of pursuit could be accounted for by nearly equal vector averaging of the responses to the two stimuli singly. The suppressive effects of the brighter stimulus also appeared in human perceptual judgments, but again only for superimposed stimuli. We conclude that one locus of the interaction of two moving visual stimuli is shared by perception and action and resides in local inhibitory connections in the visual cortex. A second locus resides deeper in sensory-motor processing and may be more closely related to action selection than to stimulus selection.
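To make the "vector average versus winner-take-all" contrast concrete, here is a minimal weighted-average sketch in which, for superimposed patches only, the brighter stimulus suppresses the weight of the dimmer one as the luminance ratio grows. The weighting rule and the constant k are my own assumptions, not the authors' model.

```python
# Toy model of pursuit initiation to two moving stimuli.
import numpy as np

def pursuit_initiation(v1, v2, lum1, lum2, superimposed, k=1.0):
    """v1, v2: 2-D velocity responses to each stimulus alone (deg/s).
    lum1, lum2: stimulus luminances. k: suppression strength (hypothetical)."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    if superimposed:
        # Each stimulus loses weight as the other becomes relatively brighter.
        w1 = 1.0 / (1.0 + k * max(lum2 / lum1 - 1.0, 0.0))
        w2 = 1.0 / (1.0 + k * max(lum1 / lum2 - 1.0, 0.0))
    else:
        w1 = w2 = 1.0          # spatially separate: nearly equal vector averaging
    return (w1 * v1 + w2 * v2) / (w1 + w2)

# Equal luminance -> vector average; 8:1 luminance ratio, superimposed -> close
# to winner-take-all for the brighter stimulus.
print(pursuit_initiation([10, 0], [0, 10], 1, 1, superimposed=True))   # ~[5, 5]
print(pursuit_initiation([10, 0], [0, 10], 8, 1, superimposed=True))   # strongly biased toward [10, 0]
```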

14.
The ability to perceive a stable visual environment despite eye movements and the resulting displacement of the retinal image is a striking feature of visual perception. In order to study the brain mechanism related to this phenomenon, an EEG was recorded from 30 electrodes spaced over the occipital, temporal and parietal brain areas while stationary or moving visual stimuli with velocities between 178 degrees/s and 533 degrees/s were presented. The visual stimuli were presented both during saccadic eye movements and with stationary eyes. Stimulus-related potentials were measured, and the effects of absolute and relative stimulus velocity were analyzed. Healthy adults participated in the experiments. In all 36 subjects and experimental conditions, four potential components were found, with mean latencies of about 70, 140, 220 and 380 ms. The latency of the two largest components, between 100 and 240 ms, decreased while field strength increased with higher absolute stimulus velocity for both stationary and moving eyes, whereas relative stimulus velocity had no effect on amplitude, latency or topography of the visual evoked potential (VEP) components. If the visual system used retinal motion information only, we would expect a dependence upon relative velocity. Since field strength and latency of the components were independent of eye movements but dependent upon absolute stimulus velocity, the visual cortex must use extraretinal information to extract stimulus velocity. This was confirmed by the fact that significant topographic changes were observed when brain activity evoked during saccades and with stationary eyes was compared. In agreement with the reafference principle, the findings indicate that the same absolute visual stimulus activates different neuronal elements during saccades than during fixation.

15.
We examined the effects of stimulus size and location on the mouse optokinetic response (OKR). To this end, we recorded initial OKRs elicited by a brief presentation of horizontally moving grating patterns of different vertical widths and locations in the visual field. Large-field stimuli generated large sustained OKRs, whereas visual stimuli of narrower vertical widths elicited weaker sustained responses at the later period (400–500 ms after the onset of stimulus motion). However, even stimuli of only 5° vertical width elicited detectable transient responses at the initial open-loop period (100–200 ms after the onset of stimulus motion). Presenting 5°-width stimuli at different vertical locations (−10° to +35° relative to the horizon) revealed the spatial distribution of optokinetic sensitivity across the retina. The most sensitive part of the visual field was located at +25°. In addition, we examined the vertical orientation of the eye under our stereotaxic set-up. We observed the optic disc using a hand-held fundus camera and determined the ocular orientation. All eye orientations were distributed in the range of +20–30° relative to the horizon (25.2±2.5°). Thus, the direction of the most sensitive visual field matched the angle of eye orientation. These findings indicate that the spatial distribution of visual field sensitivity to optokinetic stimuli coincides with the distribution of retinal ganglion cell density.

16.
1. This study investigates the contribution of the optic tectum in encoding the metric and kinetic properties of saccadic head movements. We describe the dependence of head movement components (size, direction, and speed) on parameters of focal electrical stimulation of the barn owl's optic tectum. The results demonstrate that both the site and the amount of activity can influence head saccade metrics and kinetics. 2. Electrical stimulation of the owl's optic tectum elicited rapid head movements that closely resembled natural head movements made in response to auditory and visual stimuli. The kinetics of these movements were similar to those of saccadic eye movements in primates. 3. The metrics and kinetics of head movements evoked from any given site depended strongly on stimulus parameters. Movement duration increased with stimulus duration, as did movement size. Both the size and the maximum speed of the movement increased to a plateau value with current strength and pulse rate. Movement direction was independent of stimulus parameters. 4. The initial position of the head influenced the size, direction, and speed of movements evoked from any given site: when the owl initially faced away from the direction of the induced saccade, the movement was larger and faster than when the owl initially faced toward the direction of the induced movement. 5. A characteristic movement of particular size, direction, and speed could be defined for each site by the use of stimulation parameters that elicited plateau movements with normal kinetic profiles and by having the head initially centered on the body. The size, direction, and speed of these characteristic movements varied systematically with the site of stimulation across the tectum. The map of head movement vector (size and direction) was aligned with the sensory representations of visual and auditory space, such that the movement elicited from a given site when the owl initially faced straight ahead brought the owl to face that region of space represented by the sensory responses of the neurons at the site of stimulation. 6. The results imply that both the site and the amount of neural activity in the optic tectum contribute to encoding the metrics and kinetics of saccadic movements. A comparison of the present findings with previous studies on saccadic eye movements in primates and combined eye and head movements in cats suggests striking similarities in the ways in which tectal activity specifies a redirection in gaze to such dissimilar motor effectors as the eyes and head.

17.
The way in which input noise perturbs the behavior of a system depends on the internal processing structure of the system. In visual psychophysics, there is a long tradition of using external noise methods (i.e., adding noise to visual stimuli) as tools for system identification. Here, we demonstrate that external noise affects processing of visual scenes at different cortical areas along the human ventral visual pathway, from retinotopic regions to higher occipitotemporal areas implicated in visual shape processing. We found that when the contrast of the stimulus was held constant, the further away from the retinal input a cortical area was, the more its activity, as measured with functional magnetic resonance imaging (fMRI), depended on the signal-to-noise ratio (SNR) of the visual stimulus. A similar pattern of results was observed when trials with correct and incorrect responses were analyzed separately. We interpret these findings by extending signal detection theory to fMRI data analysis. This approach reveals the sequential ordering of decision stages in the cortex by exploiting the relation between fMRI response and stimulus SNR. In particular, our findings provide novel evidence that occipitotemporal areas in the ventral visual pathway form a cascade of decision stages with increasing degree of signal uncertainty and feature invariance.
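Two quantities carry the reasoning in this abstract: how strongly an area's fMRI response depends on stimulus SNR, and sensitivity in signal-detection terms. The sketch below computes generic versions of both (a regression slope and the classical d'); it is not the authors' pipeline, and the numbers are fabricated to show only the direction of the reported effect.

```python
# Generic summaries used when relating fMRI responses to stimulus SNR.
import numpy as np
from scipy.stats import norm

def snr_dependence(snr_levels, bold_response):
    """Slope of mean BOLD response vs. stimulus SNR for one cortical area.
    Larger slopes indicate stronger dependence on the stimulus SNR."""
    slope, _intercept = np.polyfit(snr_levels, bold_response, deg=1)
    return slope

def d_prime(hit_rate, false_alarm_rate):
    """Classical signal-detection sensitivity: z(hits) - z(false alarms)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Fabricated numbers: an early retinotopic area with a shallow SNR dependence
# versus an occipitotemporal area with a steep one.
snr = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
v1_bold = np.array([1.00, 1.02, 1.05, 1.08, 1.10])
lo_bold = np.array([0.40, 0.65, 1.00, 1.40, 1.75])
print(snr_dependence(snr, v1_bold), snr_dependence(snr, lo_bold))
print(d_prime(0.85, 0.20))   # ~1.88
```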

18.
What is the relationship between retinotopy and object selectivity in human lateral occipital (LO) cortex? We used functional magnetic resonance imaging (fMRI) to examine sensitivity to retinal position and category in LO, an object-selective region positioned posterior to MT along the lateral cortical surface. Six subjects participated in phase-encoded retinotopic mapping experiments as well as block-design experiments in which objects from six different categories were presented at six distinct positions in the visual field. We found substantial position modulation in LO using standard nonobject retinotopic mapping stimuli; this modulation extended beyond the boundaries of visual field maps LO-1 and LO-2. Further, LO showed a pronounced lower visual field bias: more LO voxels represented the lower contralateral visual field, and the mean LO response was higher to objects presented below fixation than above fixation. However, eccentricity effects produced by retinotopic mapping stimuli and objects differed. Whereas LO voxels preferred a range of eccentricities lying mostly outside the fovea in the retinotopic mapping experiment, LO responses were strongest to foveally presented objects. Finally, we found a stronger effect of position than category on both the mean LO response and the distributed response across voxels. Overall, these results demonstrate that retinal position exerts strong effects on neural response in LO and indicate that these position effects may be explained by retinotopic organization.

19.
Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when attending to another visual stimulus, compared with attending to an auditory stimulus. The opposite was true in more central visual area MT+, where responses to ignored visual stimuli were weaker when attending to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when attending to a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), and the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or the same region of space.

20.
Signals related to eye position are essential for visual perception and eye movements, and are powerful modulators of sensory responses in many regions of the visual and oculomotor systems. We show that visual and pre-saccadic responses of frontal eye field (FEF) neurons are modulated by initial eye position in a way suggestive of a multiplicative mechanism (gain field). Furthermore, the slope of the eye position sensitivity tends to be negatively correlated with preferred retinal position across the population. A model with Gaussian visual receptive fields and linear-rectified eye position gain fields accounts for a large portion of the variance in the recorded data. Using physiologically derived parameters, this model is able to subtract the gaze shift from the vector representing the retinal location of the target. This computation might be used to maintain a memory of target location in space during ongoing eye movements. This updated spatial memory can be read directly from the locus of the peak of activity across the retinotopic map of FEF, and it is the result of a vector subtraction between the retinal target location when flashed and the subsequent eye displacement in the dark.
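A minimal sketch of the response model as described (my parameter names and values, not those fitted in the paper): a Gaussian retinal receptive field multiplied by a linear-rectified eye-position gain field, and the spatial memory readout as the retinal target location minus the subsequent eye displacement.

```python
# Single-neuron gain-field model and the vector-subtraction readout.
import numpy as np

def fef_response(target_retinal_pos, eye_pos,
                 rf_center=5.0, rf_sigma=4.0,          # Gaussian receptive field (deg)
                 gain_slope=-0.02, gain_offset=1.0):    # linear-rectified gain field
    """Visual/pre-saccadic response of one model FEF neuron (arbitrary units)."""
    visual = np.exp(-0.5 * ((target_retinal_pos - rf_center) / rf_sigma) ** 2)
    gain = max(0.0, gain_slope * eye_pos + gain_offset)  # rectified eye-position gain
    return visual * gain

def updated_memory(retinal_pos_at_flash, eye_displacement):
    """Spatial memory of the target after an intervening eye movement:
    retinal location when flashed minus the subsequent gaze shift."""
    return retinal_pos_at_flash - eye_displacement

# Target flashed 5 deg right of fixation, then a 12 deg rightward saccade:
print(fef_response(5.0, eye_pos=-10.0))   # gain-modulated visual response
print(updated_memory(5.0, 12.0))          # -> -7.0 deg (target now 7 deg to the left)
```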
