Similar Articles
20 similar articles found.
1.
We present the axial resolution calculated using a mathematical model of the adaptive optics scanning laser ophthalmoscope (AOSLO). The peak intensity and the width of the axial intensity response are computed from the residual Zernike coefficients after the aberrations are corrected using adaptive optics for eight subjects, and compared with the axial resolution of a diffraction-limited eye. The AOSLO currently uses a confocal pinhole that is 80 μm, or 3.48 times the Airy disk radius of the collection optics, and projects to 7.41 μm on the retina. For this pinhole, the axial resolution of a diffraction-limited system is 114 μm, and the computed axial resolution varies between 120 and 146 μm for the human subjects included in this study. The results of this analysis indicate that, to improve axial resolution, it is best to reduce the pinhole size. The resulting reduction in detected light may, however, demand a more sophisticated adaptive optics system. The study also shows that imaging systems with large pinholes are relatively insensitive to misalignment in the lateral positioning of the confocal pinhole. However, when small pinholes are used to maximize resolution, alignment becomes critical.
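As a rough check on the pinhole geometry quoted above, the sketch below converts a physical pinhole diameter into Airy-radius units and into its retinal projection. The wavelength, collection-arm f-number, and system magnification are illustrative assumptions chosen to reproduce the abstract's numbers, not values taken from the paper.

```python
# Minimal sketch: express a confocal pinhole in Airy-radius units.
# lambda_um, f_number, and magnification below are assumed values.

def airy_radius_um(lambda_um: float, f_number: float) -> float:
    """Radius of the first Airy minimum at the pinhole plane, in um."""
    return 1.22 * lambda_um * f_number

pinhole_um = 80.0                       # pinhole diameter (from the abstract)
r_airy = airy_radius_um(0.84, 22.43)    # assumed wavelength and f-number
magnification = 10.8                    # assumed pinhole-to-retina scale factor

print(f"pinhole = {pinhole_um / r_airy:.2f} Airy radii")            # ~3.48
print(f"retinal projection = {pinhole_um / magnification:.2f} um")  # ~7.41
```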

2.
Motion of background visual images across the retina during slow tracking eye movements is usually not consciously perceived, so long as the retinal image motion results entirely from the voluntary slow eye movement (otherwise the surround would appear to move during pursuit eye movements). To address the question of where in the brain such filtering might occur, the responses of cells in three visuo-cortical areas of macaque monkeys were compared when retinal image motion of background images was caused by object motion as opposed to a pursuit eye movement. While almost all cells in areas V4 and MT responded indiscriminately to retinal image motion arising from any source, most of those recorded in the dorsal zone of area MST (MSTd), as well as a smaller proportion in lateral MST (MSTl), responded preferentially to externally induced motion and only weakly or not at all to self-induced visual motion. Such cells preserve visuo-spatial stability during low-velocity voluntary eye movements and could contribute to the process of providing consistent spatial orientation regardless of whether the eyes are moving or stationary.

3.
Spreading depression (SD) of electroencephalographic activity is a dynamic wave phenomenon in the central nervous system (CNS). The retina, especially the isolated chicken retina, is an excellent part of the CNS in which to observe the dynamic behavior of SD wave fronts, because it changes its optical properties during an SD attack: the waves become visible as milky fronts on a black background. The basic mechanistic steps of SD are still controversial, but SD certainly belongs to the self-organization phenomena occurring in neuronal tissue. In this work, spiral-shaped wave fronts are analyzed using digital video imaging techniques. We report how the inner end of the wave front, the spiral tip, breaks away repeatedly. This separation process is associated with a Z-shaped trajectory (extension ∼1.2 mm) that is described by the tip over one spiral revolution (period 2.45±0.1 min). The Z-shaped trajectory does not remain fixed, but performs a complex motion across the retina with each period. This is the first time, to our knowledge, that established imaging methods have been applied to the study of the two-dimensional features of SD wave propagation and to obtaining quantitative data on their dynamics. Since these methods do not interfere with the tissue, it is possible to observe the intrinsic properties of the phenomenon without any external influence.

4.
When a person tracks a small moving object, the visual images in the background of the visual scene move across his/her retina. It is, however, possible to estimate the actual motion of the images despite the eye-movement-induced motion. To understand the neural mechanism that reconstructs a stable visual world independent of eye movements, we explored areas MT (middle temporal) and MST (medial superior temporal) in the monkey cortex, both of which are known to be essential for visual motion analysis. We recorded the responses of neurons to a moving textured image that appeared briefly on the screen while the monkeys were performing smooth pursuit or stationary fixation tasks. Although neurons in both areas exhibited significant responses to the motion of the textured image with directional selectivity, the responses of MST neurons were mostly correlated with the motion of the image on the screen independent of pursuit eye movement, whereas the responses of MT neurons were mostly correlated with the motion of the image on the retina. Thus, these MST neurons were more likely than MT neurons to distinguish between external and self-induced motion. The results are consistent with the idea that MST neurons code for visual motion in the external world while compensating for the counter-rotation of retinal images due to pursuit eye movements.

5.
Primates can generate accurate, smooth eye-movement responses to moving target objects of arbitrary shape and size, even in the presence of complex backgrounds and/or the extraneous motion of non-target objects. Most previous studies of pursuit have simply used a spot moving over a featureless background as the target and have thus neglected critical issues associated with the general problem of recovering object motion. Visual psychophysicists and theoreticians have shown that, for arbitrary objects with multiple features at multiple orientations, object-motion estimation for perception is a complex, multi-staged, time-consuming process. To examine the temporal evolution of the motion signal driving pursuit, we recorded the tracking eye movements of human observers to moving line-figure diamonds. We found that pursuit is initially biased in the direction of the vector average of the motions of the diamond's line segments and gradually converges to the true object-motion direction with a time constant of approximately 90 ms. Furthermore, transient blanking of the target during steady-state pursuit induces a decrease in tracking speed, which, unlike pursuit initiation, is subsequently corrected without an initial direction bias. These results are inconsistent with current models in which pursuit is driven by retinal-slip error correction. They demonstrate that pursuit models must be revised to include a more complete visual afferent pathway, which computes, and to some extent latches on to, an accurate estimate of object direction over the first hundred milliseconds or so of motion.
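The convergence reported above can be summarized by a simple first-order model. This is a hedged reading of the single quoted time constant, not an equation from the paper:

$$\hat{\theta}(t) \;\approx\; \theta_{\mathrm{object}} + \left(\theta_{\mathrm{VA}} - \theta_{\mathrm{object}}\right)e^{-t/\tau}, \qquad \tau \approx 90~\mathrm{ms},$$

where $\theta_{\mathrm{VA}}$ is the vector-average direction of the segment motions and $\hat{\theta}(t)$ is the instantaneous pursuit direction.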

6.
Research on a near-infrared light-scattering imaging system
The near-infrared light-scattering imaging system described in this paper is a frequency-domain optical imaging device for breast cancer detection; it performs transillumination imaging with RF amplitude-modulated near-infrared light. The system comprises a near-infrared laser emission circuit, a two-dimensional scanning stage, an optical receiving and amplification circuit, a signal acquisition board, and a microcomputer, which controls the entire detection process. To improve light penetration through strongly scattering tissue and to reduce background noise, the incident light is modulated at 50 MHz and at 1600 Hz, respectively. Phantom experiments show that in a 0.25% Intralipid phantom solution, single or paired absorbers smaller than 10 mm in diameter can be resolved, and in porcine adipose tissue a single absorber 8 mm in diameter can be resolved.
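Although the abstract does not describe the detection electronics in detail, synchronous (lock-in) demodulation at the low modulation frequency is one generic way to reject unmodulated background light in such a system. The sketch below is offered under that assumption; the sampling rate, signal levels, and noise model are illustrative.

```python
import numpy as np

# Hedged sketch: lock-in style demodulation at the 1600 Hz modulation
# frequency (from the abstract). All other values are assumptions.
fs = 100_000                       # assumed sampling rate, Hz
f_mod = 1600.0                     # low-frequency modulation, Hz
t = np.arange(0, 0.1, 1 / fs)      # 100 ms record = 160 full cycles

signal = 0.02 * np.sin(2 * np.pi * f_mod * t)       # modulated photocurrent
background = 0.5 + 0.01 * np.random.randn(t.size)   # DC offset + noise
x = signal + background

# Multiply by in-phase/quadrature references, then low-pass (here: mean).
i = 2 * np.mean(x * np.sin(2 * np.pi * f_mod * t))
q = 2 * np.mean(x * np.cos(2 * np.pi * f_mod * t))
print(f"recovered modulation amplitude: {np.hypot(i, q):.4f}")  # ~0.02
```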

7.
We present a statistical assessment of the lateral resolution of the adaptive optics scanning laser ophthalmoscope (AOSLO). We adopt a 2-D Gaussian function to approximate the AOSLO point spread function (PSF), which is dominated by the residual wavefront aberration and characterized by the Strehl ratio. From this we derive the lateral resolution in the presence of residual wave aberrations, which is inversely proportional to the square root of the Strehl ratio. The model, while not sufficient to describe the fine structure of the real PSF, conforms well to the lateral cross section of the real PSF. With this model, the lateral resolution of our current AOSLO was computed to be 1.65 to 2.33 μm, which agreed well with the measured result. We also establish the relationships between the lateral resolution and three other measures of AOSLO imaging performance: the Strehl ratio, the PSF, and the root mean square (rms) wavefront aberration.
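The scaling stated above can be written compactly. The second relation is the standard (extended Maréchal) link between the Strehl ratio and the rms wavefront error; whether the paper uses exactly this form is an assumption:

$$\Delta x \;\approx\; \frac{\Delta x_{\mathrm{DL}}}{\sqrt{S}}, \qquad S \;\approx\; \exp\!\left[-\left(\frac{2\pi\,\sigma_{\mathrm{rms}}}{\lambda}\right)^{2}\right],$$

where $\Delta x_{\mathrm{DL}}$ is the diffraction-limited lateral resolution, $S$ the Strehl ratio, $\sigma_{\mathrm{rms}}$ the residual rms wavefront error, and $\lambda$ the imaging wavelength.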

8.
Management of respiratory motion during radiation therapy requires treatment planning and simulation using imaging modalities that possess sufficient spatio-temporal accuracy and precision. An investigation into the use of a novel ultrasound (US) imaging system for assessment of respiratory motion is presented, exploiting its good soft-tissue contrast and temporal precision. The system dynamically superimposes the appropriate image plane sampled from a reference CT data set on the corresponding US B-mode image. An articulating arm is used for spatial registration. While the focus of the study was to quantify the system's ability to track respiratory motion, certain unique spatial calibration procedures were devised that render the software potentially valuable to the general research community. These include direct access to all transformation matrix elements and image scaling factors, a manual latency correction function, and a three-point spatial registration procedure that allows the system to be used in any room possessing a traditional radiotherapy laser localization system. Counter-intuitively, it was discovered that a manual procedure for calibrating certain transformation matrix elements produced superior accuracy to that of an algorithmic Levenberg-Marquardt optimization method. The absolute spatial accuracy was verified by comparing the physical locations of phantom test objects measured using the spatially registered US system against data from a 3DCT scan of the phantom used as a reference. The spatial accuracy of the display superposition was also tested in a similar manner. The system's dynamic properties were then assessed using three methods. First, the overall system response time was studied using a programmable motion phantom; this included US video update, articulating arm update, CT data set resampling, and image display. Next, the system's ability to measure the range of motion of a moving anatomical test phantom possessing both high- and low-contrast test objects was verified. Finally, the system's performance was compared to that of a four-dimensional CT (4DCT) data set. The absolute spatial and display superposition accuracy was found to be better than 2 mm, and typically 1 mm. The overall dynamic system response was adequate to produce a mean relative positional error of less than 1 mm when an empirical latency correction of 3 video frames was incorporated. The dynamic CT/US display mode was able to assess phantom motion for both high- and low-contrast test objects to within 1 mm, and compared favorably to the 4DCT data. The 4DCT movie loop accurately assessed the target motion for both the high- and low-contrast objects tested, but the minimum-intensity and average-intensity reconstructions did not. This investigation demonstrated that the US system possesses sufficient spatio-temporal accuracy to properly assess respiratory motion. Future work will seek to demonstrate efficacy in its clinical application to respiratory motion assessment, particularly for sites in the upper abdomen, where low tissue contrast is evident.
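The abstract names a three-point spatial registration procedure but does not give its mathematics. A minimal sketch of one standard way to solve it, the Kabsch/SVD rigid alignment of three fiducial points, is shown below as an assumption about how such a registration could be implemented, not as the paper's code.

```python
import numpy as np

def rigid_transform_3pt(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t.

    src, dst: (3, 3) arrays, one fiducial point per row. Standard
    Kabsch/SVD solution; helper name and usage are illustrative.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Usage: three fiducials seen by the articulated arm vs. the room lasers.
src = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [0.0, 100.0, 0.0]])
R, t = rigid_transform_3pt(src, src + np.array([5.0, -2.0, 10.0]))
print(np.round(t, 3))   # pure translation recovered: [ 5. -2. 10.]
```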

9.
During linear accelerations, compensatory reflexes must continually occur in order to maintain objects of visual interest as stable images on the retina. In the present study, the three-dimensional organization of the vestibulo-ocular reflex in pigeons was quantitatively examined during linear accelerations produced by constant-velocity off-vertical-axis yaw rotations and by translational motion in darkness. With off-vertical-axis rotations, sinusoidally modulated eye-position and eye-velocity responses were observed in all three components, with the vertical and torsional eye movements dominating the response. Peak torsional and vertical eye positions occurred when the head was oriented with the lateral visual axis of the right eye directed orthogonal to, or aligned with, the gravity vector, respectively. No steady-state horizontal nystagmus was obtained at any of the rotational velocities (8–58°/s) tested. During translational motion, delivered along or perpendicular to the lateral visual axis, vertical and torsional eye movements were elicited. No significant horizontal eye movements were observed during lateral translation at frequencies up to 3 Hz. These responses suggest that, in pigeons, all linear accelerations generate eye movements that are compensatory to the direction of actual or perceived tilt of the head relative to gravity. In contrast, no translational horizontal eye movements, which are known to be compensatory to lateral translational motion in primates, were observed under the present experimental conditions.

10.
The responses of visual movement-sensitive neurons in the anterior superior temporal polysensory area (STPa) of monkeys were studied during object-motion, ego-motion and during both together. The majority of the cells responded only to the image of a moving object against a stationary background and failed to respond to the retinal movement of the same object (against the same background) caused by the monkey's ego-motion. All the tested cells continued responding to the object-motion during ego-motion in the opposite direction. By contrast, most cells failed to respond to the motion of an object when the observer and object moved at the same speed and direction (eliminating observer-relative motion cues). The results indicate that STPa cells compute motion relative to the observer and suggest an influence of reference signals (vestibular, somatosensory or retinal) in the discrimination of ego- and object-motion. The results extend observations indicating that STPa cells are selective for visual motion originating from the movements of external objects and unresponsive to retinal changes correlated with the observer's own movements.

11.
Movement of the body, head, or eyes with respect to the world creates one of the most common yet complex situations in which the visuomotor system must localize objects. In this situation, vestibular, proprioceptive, and extra-retinal information contribute to accurate visuomotor control. The utility of retinal motion information, on the other hand, is questionable, since a single pattern of retinal motion can be produced by any number of head or eye movements. Here we investigated whether retinal motion during a smooth pursuit eye movement contributes to visuomotor control. When subjects pursued a moving object with their eyes and reached to the remembered location of a separate stationary target, the presence of a moving background significantly altered the endpoints of their reaching movements. A background that moved with the pursuit, creating a retinally stationary image (no retinal slip), caused the endpoints of the reaching movements to deviate in the direction of pursuit, overshooting the target. A physically stationary background pattern, however, producing retinal image motion opposite to the direction of pursuit, caused reaching movements to become more accurate. The results indicate that background retinal motion is used by the visuomotor system in the control of visually guided action.

12.
Apparent velocities of moving visual stimuli are known to differ depending on whether the subject pursues the stimulus (efferently controlled motion perception) or whether the eye is stationary and the image moves across the retina (afferent motion perception). Afferent motion perception of a periodic pattern or a moving single object causes overestimation of velocity (magnitude estimations) as compared with smooth pursuit. This so-called Aubert-Fleischl phenomenon is shown to depend on the local temporal-frequency stimulation of the retina caused by the repetitive passage of contrast borders of the moving periodic pattern. This is evidenced by the fact that, for a given stimulus speed, the amount of overestimation is a function of the spatial frequency of the pattern (or the angular subtense of a single moving object) and that the Aubert-Fleischl phenomenon is not observed if a single edge moves. Background characteristics seem not to influence the apparent velocity during smooth pursuit.

13.
To perceive the relative positions of objects in the visual field, the visual system must assign locations to each stimulus. This assignment is determined by the object's retinal position, the direction of gaze, eye movements, and the motion of the object itself. Here we show that perceived location is also influenced by motion signals that originate in distant regions of the visual field. When a pair of stationary lines are flashed, straddling but not overlapping a rotating radial grating, the lines appear displaced in a direction consistent with that of the grating's motion, even when the lines are a substantial distance from the grating. The results indicate that motion's influence on position is not restricted to the moving object itself, and that even the positions of stationary objects are coded by mechanisms that receive input from motion-sensitive neurons.

14.
To localize a seen object, the CNS has to integrate the object's retinal location with the direction of gaze. Here we investigate this process by examining the localization of static objects during smooth pursuit eye movements. The normally experienced stability of the visual world during smooth pursuit suggests that the CNS essentially compensates for the eye movement when judging target locations. However, certain systematic localization errors are made, and we use these to study the process of sensorimotor integration. During an eye movement, a static object's image moves across the retina. Objects that produce retinal slip are known to be mislocalized: objects moving toward the fovea are seen too far along their trajectory, whereas errors are much smaller for objects moving away from the fovea. These effects are usually studied by localizing the moving object relative to a briefly flashed one during fixation: moving objects are then mislocalized, but flashes are not. In our first experiment, we found that a similar differential mislocalization occurs for static objects relative to flashes during pursuit. This effect is not specific to horizontal pursuit but was also found in other directions. In a second experiment, we examined how this effect generalizes to positions outside the line of eye movement. We found that large localization errors occurred in the entire hemifield ahead of the pursuit target and were predominantly aligned with the direction of eye movement. In a third experiment, we determined whether it is the flash or the static object that is mislocalized ahead of the pursuit target. In contrast to fixation conditions, we found that during pursuit it is the flash, not the static object, that is mislocalized. In a fourth experiment, we used egocentric localization to confirm this result. Our results suggest that the CNS compensates for the retinal localization errors to maintain position constancy for static objects during pursuit. This compensation is achieved in the process of sensorimotor integration of retinal and gaze signals: different retinal areas are integrated with different gaze signals to guarantee the stability of the visual world.

15.
It is still a matter of debate whether the control of smooth pursuit eye movements involves an internal drive signal from object motion perception. We measured human target velocity and target position perceptions and compared them with the presumed pursuit control mechanism (model simulations). We presented normal subjects (Ns) and vestibular-loss patients (Ps) with visual target motion in space. Concurrently, a visual background was presented, which was kept stationary or was moved with or against the target (five combinations). The motion stimuli consisted of smoothed ramp displacements with different dominant frequencies and peak velocities (0.05, 0.2, 0.8 Hz; 0.2–25.6°/s). Subjects always pursued the target with their eyes. In a first experiment they gave verbal magnitude estimates of perceived target velocity in space and of self-motion in space. The target velocity estimates of both Ns and Ps tended to saturate at 0.8 Hz and with peak velocities >3°/s. Below these ranges the velocity estimates showed a pronounced modulation in relation to the relative target-to-background motion ('background effect'; for example, 'background with'-motion decreased and 'against'-motion increased perceived target velocity). Pronounced only in Ps and not in Ns, there was an additional modulation in relation to the relative head-to-background motion, which co-varied with an illusion of self-motion in space (circular vection, CV) in Ps. In a second experiment, subjects performed retrospective reproduction of perceived target start and end positions with the same stimuli. Perceived end position was essentially veridical in both Ns and Ps (apart from a small constant offset). Reproduced start position showed an almost negligible background effect in Ns. In contrast, it showed a pronounced modulation in Ps, which again was related to CV. The results were compared with simulations of a model that we have recently presented for velocity control of eye pursuit. We found that the main features of target velocity perception (in terms of dynamics and modulation by background) closely correspond to those of the internal drive signal for target pursuit, compatible with the notion of a common source of both the perception and the drive signal. In contrast, the eye pursuit movement is almost free of the background effect. As an explanation, we postulate that the target-to-background component in the target pursuit drive signal largely neutralises the background-to-eye retinal slip signal (optokinetic reflex signal) that feeds into the eye premotor mechanism as a competitor of the target retinal slip signal. An extension of the model also allowed us to simulate the findings on target position perception. This percept is assumed to be represented in a perceptual channel that is distinct from the velocity percept, building on an efference copy of the essentially accurate eye position. We hold that other visuomotor behaviour, such as target reaching with the hand, builds mainly on this target position percept and therefore is not contaminated by the background effect in the velocity percept. Generally, the coincidence of an erroneous velocity percept and an almost perfect eye pursuit movement during background motion is discussed as an instructive example of an action-perception dissociation. This dissociation cannot be taken to indicate that the two functions are internally represented in separate brain control systems, but rather reflects the intimate coupling between both functions.

16.
The eye movements we make to look at objects require that the spatial information contained in the object's image on the retina be used to generate a motor command. This process is known as sensorimotor transformation and has generally been addressed using simple point targets. Here, we investigate the sensorimotor transformation involved in planning double-saccade sequences directed at one or two objects. Using both visually guided saccades toward stationary objects and objects subjected to intrasaccadic displacements, and memory-guided saccades, we found that the coordinate transformations required to program the second saccade differed between saccades aimed at a new target object and saccades that scanned the same object. While saccades aimed at a new object were updated on the basis of the actual eye position, those that scanned the same object were performed with a fixed amplitude, irrespective of the actual eye position. Our findings demonstrate that different abstract representations of space are used in sensory-to-motor transformations, depending on what action is planned on the objects.

17.
Pursuing an object with smooth eye movements requires an accurate estimate of its two-dimensional (2D) trajectory. This 2D motion computation requires that different local motion measurements be extracted and combined to recover the global object-motion direction and speed. Several combination rules have been proposed, such as vector averaging (VA), intersection of constraints (IOC), or 2D feature tracking (2DFT). To examine this computation, we investigated the time course of smooth pursuit eye movements driven by simple objects of different shapes. For a type II diamond (where the direction of true object motion is dramatically different from the vector average of the one-dimensional edge motions, i.e., VA ≠ IOC = 2DFT), ocular tracking is initiated in the vector-average direction. Over a period of less than 300 ms, the eye-tracking direction converges on the true object motion. The reduction of the tracking error starts before the closing of the oculomotor loop. For type I diamonds (where the direction of true object motion is identical to the vector-average direction, i.e., VA = IOC = 2DFT), there is no such bias. We quantified this effect by calculating the direction error between responses to types I and II and measuring its maximum value and time constant. At low contrast and high speeds, the initial bias in tracking direction is larger and takes longer to converge onto the actual object-motion direction. This effect is attenuated by the introduction of more 2D information, to the extent that it was totally obliterated with a texture-filled type II diamond. These results suggest a flexible 2D computation for motion integration, which combines all available one-dimensional (edge) and 2D (feature) motion information to refine the estimate of object-motion direction over time.
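The difference between the two combination rules named above is easy to make concrete numerically. In the sketch below, each edge detector reports only the velocity component along its edge normal; the geometry (a type II configuration with both normals on the same side of the motion) is illustrative, not taken from the paper.

```python
import numpy as np

true_v = np.array([1.0, 0.0])                  # actual object velocity
# Edge normals at 30 deg and 60 deg: a type II configuration.
normals = np.array([[np.cos(a), np.sin(a)]
                    for a in (np.deg2rad(30), np.deg2rad(60))])

c = normals @ true_v                           # measured normal speeds v.n_i

# Vector average (VA): mean of the normal-velocity vectors -> biased.
va = np.mean(normals * c[:, None], axis=0)

# Intersection of constraints (IOC): solve v.n_i = c_i for v -> exact.
ioc = np.linalg.solve(normals, c)

print("VA :", np.round(va, 3))    # [0.5 0.433], ~41 deg off the true direction
print("IOC:", np.round(ioc, 3))   # [1. 0.], the true object motion
```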

18.
1. The "descending contralateral movement detector" (DCMD) neuron in the locust has been challenged with a variety of moving stimuli, including scenes from a film (Star Wars), moving disks, and images generated by computer. The neuron responds well to any rapid movement. For a dark object moving along a straight path at a uniform velocity, the DCMD gives the strongest response when the object travels directly toward the eye, and the weakest when the object travels away from the eye. Instead of expressing selectivity for movements of small rather than large objects, the DCMD responds preferentially to approaching objects. 2. The neuron shows a clear selectivity for approach over recession for a variety of sizes and velocities of movement both of real objects and in simulated movements. When a disk that subtends > or = 5 degrees at the eye approaches the eye, there are two peaks in spike rate: one immediately after the start of movement; and a second that builds up during the approach. When a disk recedes from the eye, there is a single peak in response as the movement starts. There is a good correlation between spike rate and angular acceleration of the edges of the image over the eye. 3. When an object approaches from a distance sufficient for it to subtend less than one interommatidial angle at the start of its approach, there is a single peak in response. The DCMD tracks the approach, and, if the object moves at 1 m/s or faster, the spike rate increases throughout the duration of object movement. The size of the response depends on the speed of approach. 4. It is unlikely that the DCMD encodes the time to collision accurately, because the response depends on the size as well as the velocity of an approaching object. 5. Wide-field movements suppress the response to an approaching object. The suppression varies with the temporal frequency of the background pattern. 6. Over a wide range of contrasts of object against background, the DCMD gives a stronger response to approaching than to receding objects. For low contrasts, the selectivity is greater for objects that are darker than the background than for objects that are lighter.  相似文献   

19.
Imaging of the human microcirculation in real time has the potential to detect injuries and illnesses that disturb the microcirculation at earlier stages and may improve the efficacy of resuscitation. Despite advanced imaging techniques to monitor the microcirculation, there are currently no tools for the near-real-time analysis of the videos produced by these imaging systems. An automated tool that can extract microvasculature information and monitor changes in tissue perfusion quantitatively could be invaluable as a diagnostic and therapeutic endpoint for resuscitation. The experimental algorithm automatically extracts the microvascular network and quantitatively measures changes in the microcirculation. The algorithm has two main parts: video processing and vessel segmentation. Microcirculatory videos are first stabilized in a video-processing step to remove motion artifacts. In the vessel-segmentation step, the microvascular network is extracted using multiple-level thresholding and pixel-verification techniques. Threshold levels are selected using histogram information from a set of training video recordings. Pixel-by-pixel differences are calculated across frames to identify active blood vessels and capillaries with flow. Sublingual microcirculatory videos were recorded from anesthetized swine at baseline and during hemorrhage using a hand-held Sidestream Dark Field (SDF) imaging device to track changes in the microvasculature during hemorrhage. Automatically segmented vessels in the recordings were analyzed visually, and the functional capillary density (FCD) values calculated by the algorithm were compared for both healthy baseline and hemorrhagic conditions. These results were compared with independent FCD measurements made using a well-known semi-automated method. The fully automated algorithm demonstrated a significant decrease in FCD values during hemorrhage. Similar, but more variable, FCD values were calculated using a commercially available software program requiring manual editing. An entirely automated system for analyzing microcirculation videos, reducing human interaction and computation time, has thus been developed. The algorithm successfully stabilizes video recordings, segments blood vessels, identifies vessels without flow, and calculates FCD in a fully automated process. The automated process provides equal or better separation between healthy and hemorrhagic FCD values compared with currently available semi-automatic techniques. The proposed method shows promise for the quantitative measurement of changes occurring in the microcirculation during injury.
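A compact sketch of the pixel-differencing idea described above: flag skeleton pixels whose intensity varies over time (moving blood cells) and express functional capillary density as perfused vessel length per unit area. Thresholds, helper names, and scale factors are illustrative assumptions, not the paper's values.

```python
import numpy as np

def fcd_estimate(frames: np.ndarray, skeleton: np.ndarray,
                 diff_thresh: float = 5.0, um_per_px: float = 1.0) -> float:
    """Crude functional capillary density (FCD) from a stabilized video.

    frames:   (T, H, W) grayscale frames, already motion-stabilized.
    skeleton: (H, W) bool, 1-px-wide segmented vessel centerlines.
    A skeleton pixel counts as perfused if its intensity changes over
    time, mimicking the pixel-by-pixel differencing in the abstract.
    """
    temporal_range = frames.max(axis=0) - frames.min(axis=0)
    perfused = skeleton & (temporal_range > diff_thresh)
    length_mm = perfused.sum() * um_per_px * 1e-3        # vessel length, mm
    h, w = skeleton.shape
    area_mm2 = h * w * (um_per_px * 1e-3) ** 2           # field of view, mm^2
    return length_mm / area_mm2                          # FCD in mm / mm^2
```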

20.
This paper presents comprehensive, depth-resolved images of the retinal microvasculature within the human retina achieved by a newly developed ultrahigh-sensitive optical microangiography (UHS-OMAG) system. Because of its high flow sensitivity, UHS-OMAG is much more susceptible than the traditional OMAG system to tissue motion caused by involuntary movement of the human eye and head. To mitigate these motion artifacts in the final images, we propose a new phase-compensation algorithm in which the traditional phase-compensation step is applied repeatedly to efficiently minimize the motion artifacts. Comparatively, the new algorithm demonstrates at least 8 to 25 times higher tolerance to motion, critical for the UHS-OMAG system to achieve retinal microvasculature images of high quality. Furthermore, the new UHS-OMAG system employs a high-speed line-scan CMOS camera (240 kHz A-line rate) to capture 500 A-lines per B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two imaging protocols are featured: the first uses a low lateral resolution (16 μm) with a wide field of view (4 × 3 mm² for a single scan and 7 × 8 mm² for multiple scans), while the second uses a high lateral resolution (5 μm) with a narrow field of view (1.5 × 1.2 mm² for a single scan). The imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration.
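The abstract only summarizes the repeated phase-compensation idea, so the sketch below shows the single-pass building block it presumably iterates: estimating and removing the bulk-motion phase between adjacent A-lines of a complex B-frame, using the common depth-averaged conjugate-product estimator. The function name and estimator choice are assumptions, not the paper's algorithm.

```python
import numpy as np

def compensate_bulk_phase(bframe: np.ndarray) -> np.ndarray:
    """One pass of bulk-motion phase compensation.

    bframe: (n_alines, n_depth) complex OCT B-frame. The per-pair bulk
    phase is taken as the angle of the depth-summed conjugate product,
    accumulated along the frame, and subtracted. Repeating this on the
    residual is one reading of the 'repeatedly applied' algorithm.
    """
    pair_product = bframe[1:] * np.conj(bframe[:-1])
    dphi = np.angle(pair_product.sum(axis=1))        # bulk shift per A-line pair
    bulk = np.concatenate(([0.0], np.cumsum(dphi)))  # cumulative bulk phase
    return bframe * np.exp(-1j * bulk)[:, None]
```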
