Similar Articles
20 similar articles retrieved (search time: 31 ms)
1.
Capturing mitochondria’s intricate and dynamic structure poses a daunting challenge for optical nanoscopy. Different labeling strategies have been demonstrated for live-cell stimulated emission depletion (STED) microscopy of mitochondria, but orthogonal strategies are yet to be established, and image acquisition has suffered either from photodamage to the organelles or from rapid photobleaching. Therefore, live-cell nanoscopy of mitochondria has been largely restricted to two-dimensional (2D) single-color recordings of cancer cells. Here, by conjugation of cyclooctatetraene (COT) to a benzo-fused cyanine dye, we report a mitochondrial inner membrane (IM) fluorescent marker, PK Mito Orange (PKMO), featuring efficient STED at 775 nm, strong photostability, and markedly reduced phototoxicity. PKMO enables super-resolution (SR) recordings of IM dynamics for extended periods in immortalized mammalian cell lines, primary cells, and organoids. Photostability and reduced phototoxicity of PKMO open the door to live-cell three-dimensional (3D) STED nanoscopy of mitochondria for 3D analysis of the convoluted IM. PKMO is optically orthogonal with green and far-red markers, allowing multiplexed recordings of mitochondria using commercial STED microscopes. Using multi-color STED microscopy, we demonstrate that imaging with PKMO can capture interactions of mitochondria with different cellular components such as the endoplasmic reticulum (ER) or the cytoskeleton, Bcl-2-associated X protein (BAX)-induced apoptotic process, or crista phenotypes in genetically modified cells, all at sub-100 nm resolution. Thereby, this work offers a versatile tool for studying mitochondrial IM architecture and dynamics in a multiplexed manner.

Mitochondria are the powerhouses of the cell and influence key signaling pathways of cell homeostasis, proliferation, and death (1, 2). Due to their dynamic behavior and abundant interactions with other organelles, mitochondrial research has been particularly driven by the development of fluorescence microscopy (3). However, the delicate double-membrane structure of mitochondria remains invisible using conventional fluorescence microscopes featuring a resolution limit of roughly 200 nm. Surrounded by a smooth outer membrane, the contiguous mitochondrial IM forms numerous lamellar to tubular cristae, membrane invaginations that enhance the overall surface of the IM (4, 5). Crista junctions (CJs), small structures with a diameter of about 20 nm, connect the invaginations to the residual part of the IM and anchor the cristae along the organelle. Cristae are densely stacked along the mitochondrial tubules of most cell types, which can lead to crista-to-crista distances of below 100 nm (6, 7). Due to this intricate arrangement of the cristae, electron microscopy of fixed specimens has been the only tool to capture the unique mitochondrial membrane architecture for decades. However, STED nanoscopy and structured illumination microscopy (SIM) have recently made it possible to image cristae in living cells as well, with the former offering a better spatial resolution of around 40 to 50 nm (3, 8–10) and the latter giving rise to faster image recording and longer imaging durations at about 100 to 120 nm resolution (11).
Like all nanoscopy techniques, STED nanoscopy relies on suitable fluorophores to reach its full potential. In the past several years, a handful of new mitochondrial labels facilitated the first live-cell nanoscopic captures of mitochondrial cristae and revealed their dynamic behavior (6, 12, 13). STED nanoscopy using MitoPB Yellow, COX8A-SNAP-SiR, or MitoESq 635 has demonstrated sub-100 nm resolution imaging of the IM (6, 12, 14). MitoPB Yellow (λex = 488 nm, λSTED = 660 nm) and MitoESq (λex = 633 nm, λSTED = 775 nm) are mitochondria-targeting, lipophilic dyes with remarkable photostability for time-lapse recordings. Arguably, the widespread use of these two dyes has so far been prevented by phototoxicity or by the lack of combinability with other STED dyes. A different approach utilized the self-labeling SNAP-tag targeted to the IM for subsequent labeling using a SiR dye (SNAP-Cell 647-SiR, λex = 633 nm, λSTED = 775 nm) (15). Unlike the membrane stains, the latter labeling strategy is generally applicable to the imaging of various mitochondrial proteins (6, 16–18), but it requires genetic manipulation and often involves overexpression of the fusion proteins. STED nanoscopy of mitochondria labeled by SiR causes low photodamage (19) but suffers from rapid photobleaching during image acquisition, which strongly restricts the number of recordable frames when used to label cristae (6, 16).
From a technological point of view, the next challenge in nanoscopic live-cell imaging of mitochondrial dynamics is to facilitate long-term time-lapse imaging, 3D analysis, and multiplexed recordings.
To meet these demands, the next-generation mitochondrial marker should feature: 1) a simple and robust protocol for highlighting mitochondrial structures in various cells and tissues; 2) high brightness and photostability, compatibility with a 775-nm STED laser, which is available on most commercial STED microscopes, and compatibility with popular orthogonal nanoscopy dyes such as SiR for multi-color analysis; and 3) reduced phototoxicity to retain the integrity of mitochondria even under strong illumination.
As nanoscopy techniques generally require higher light doses than diffraction-limited approaches, photodamage and photobleaching can become a key, yet often under-evaluated, technical hurdle for analyzing four-dimensional dynamics (19–21). We previously demonstrated that cyanine-COT conjugates (22–26) are gentle mitochondrial markers that allow prolonged SIM recordings of cristae (25) (SI Appendix, Fig. S1).
Here, we extend the palette of COT conjugates, introducing PKMO, an orange-emitting inner membrane stain with minimal phototoxicity. PKMO is photostable and well-tailored for most commercial STED microscopes. The COT conjugated to PKMO depopulates its triplet state, markedly reducing the photodynamic damage during STED imaging. We demonstrate single-color time-lapse STED recordings of mitochondrial dynamics over the time course of several minutes as well as 3D STED recordings of live mitochondria in cultivated cells. We demonstrate that PKMO can be combined with widely used fluorescent labels, enabling simultaneous localization of cristae along with mitochondrial protein complexes, mitochondrial DNA (mtDNA), or cellular organelles like the ER using multi-color nanoscopy. By resolving different crista morphologies in living cells, we demonstrate that PKMO can pave the way for nanoscopy-based chemical and genetic screenings on mitochondria.
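As a brief aside on the resolution figures quoted above (a roughly 200 nm diffraction limit versus 40 to 50 nm in live-cell STED), the standard scaling law for coordinate-targeted nanoscopy is sketched below; the depletion intensity I and the dye's saturation intensity I_s are textbook quantities assumed here for illustration, not values reported in the abstract.

```latex
% Abbe limit of a conventional fluorescence microscope
d_{\mathrm{conf}} \approx \frac{\lambda}{2\,\mathrm{NA}}

% Approximate STED/RESOLFT resolution when the depletion intensity I is raised
% above the fluorophore's saturation intensity I_s
d_{\mathrm{STED}} \approx \frac{\lambda}{2\,\mathrm{NA}\,\sqrt{1 + I/I_{s}}}
```

With λ ≈ 600 nm and NA ≈ 1.4, the first expression gives the familiar ≈200 nm limit, and a saturation factor I/I_s of a few tens brings the second expression into the 40 to 50 nm range cited for cristae imaging; a photostable, low-phototoxicity dye such as PKMO is what makes operating at such saturation factors tolerable for living samples.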

2.
Recent advances in optical microscopy have enabled biological imaging beyond the diffraction limit at nanometer resolution. A general feature of most of the techniques based on photoactivated localization microscopy (PALM) or stochastic optical reconstruction microscopy (STORM) has been the use of thin biological samples in combination with total internal reflection, thus limiting the imaging depth to a fraction of an optical wavelength. However, to study whole cells or organelles that are typically up to 15 μm deep into the cell, the extension of these methods to a three-dimensional (3D) super-resolution technique is required. Here, we report an advance in optical microscopy that enables imaging of protein distributions in cells with a lateral localization precision better than 50 nm at multiple imaging planes deep in biological samples. The approach is based on combining the lateral super-resolution provided by PALM with two-photon temporal focusing that provides optical sectioning. We have generated super-resolution images over an axial range of ≈10 μm in both mitochondrially labeled fixed cells and in the membranes of living S2 Drosophila cells.
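The "lateral localization precision better than 50 nm" quoted above is set by how well the centroid of a single-molecule image can be estimated; a commonly used first-order estimate (not part of the abstract itself) is the Thompson-Larson-Webb expression below, with s the PSF standard deviation, N the number of detected photons, a the pixel size, and b the background noise per pixel.

```latex
% First-order localization precision of a single-molecule position estimate
\sigma_{xy} \approx \sqrt{\frac{s^{2} + a^{2}/12}{N} + \frac{8\pi s^{4} b^{2}}{a^{2} N^{2}}}
```

In the background-free limit this reduces to roughly s/√N, so a PSF width of a few hundred nanometers combined with a few hundred detected photons per molecule already lands in the tens-of-nanometers regime reported here.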

3.
Optical imaging is crucial for addressing fundamental problems in all areas of life science. With the use of confocal and two-photon fluorescence microscopy, complex dynamic structures and functions in a plethora of tissue and cell types have been visualized. However, the resolution of 'classical' optical imaging methods is poor due to the diffraction limit and does not allow the cellular microcosmos to be resolved. On the other hand, the novel stimulated emission depletion (STED) microscopy technique, because of its targeted on/off-switching of fluorescence, is not hampered by a diffraction-limited resolution barrier. STED microscopy can therefore provide much sharper images, permitting nanoscale visualization by sequential imaging of individually labelled biomolecules, which should allow previous findings to be reinvestigated and provide novel information. The aim of this review is to highlight promising developments in and applications of STED microscopy and their impact on unresolved issues in biomedical science.

4.
Optical microscopy has played a critical role in discovery in the biomedical sciences since Hooke’s introduction of the compound microscope. Recent years have witnessed explosive growth in optical microscopy tools and techniques. Information in microscopy is garnered through contrast mechanisms, usually absorption, scattering, or phase shifts introduced by spatial structure in the sample. The emergence of nonlinear optical contrast mechanisms reveals new information from biological specimens. However, the intensity dependence of nonlinear interactions leads to weak signals, preventing the observation of high-speed dynamics in the 3D context of biological samples. Here, we show that for second harmonic generation imaging, we can increase the 3D volume imaging speed from sub-Hertz rates to rates in excess of 1,500 volumes imaged per second. This transformational capability is possible by exploiting coherent scattering of second harmonic light from an entire specimen volume, enabling new observational capabilities in biological systems.
The need for high-speed 3D imaging is driven by open questions regarding the dynamics of biological organisms. Proper functioning of multicellular organisms relies critically on 4D spatiotemporal organization, requiring precise timing and coordination from distinct spatial regions. In the brain, neurons communicate and process information electrically in the form of millisecond-duration action potential (AP) spikes that propagate between regions of highly connected neural circuits. These circuits process information and produce memory and motor commands (1). Although spatial connections and morphology can be observed with conventional imaging methods, there is a dearth of information regarding how these neural connections behave dynamically, which is essential for developing an understanding of the functional behavior of neural circuitry.
Neural dynamics are often studied with electrophysiology measurements, which attain submillisecond temporal resolution yet are limited in their spatial resolution, even for multielectrode arrays (2) that admit recording from many sites. A critical need exists for imaging techniques that can map the neural connections and interactions at a spatial resolution sufficient to resolve individual neural cells yet observe a 3D network of potentially hundreds of neurons at a time scale sufficient to resolve APs. Optical microscopy currently achieves part of this imaging requirement with subcellular spatial resolution in three dimensions, both with linear confocal and nonlinear two-photon laser-scanning imaging modalities (3). Advances in high-speed laser-scanning microscopy (LSM) have improved the temporal resolution of 3D imaging, but millisecond time scale volumetric imaging remains challenging (SI Methods, section 2).
Current high spatial resolution images of AP behavior use optical reporters of neural activity that lead to changes in fluorescence or second harmonic generation (SHG). Common calcium (Ca²⁺) indicator probes are constrained by diffusion, saturation effects, and fluorescence decay of several hundred milliseconds that can obscure rapid membrane potential firing rates (4–6). Additionally, Ca²⁺ indicators lack the ability to capture subthreshold events, resulting in a scarcity of information on factors that drive a cell to threshold (7). SHG probes that directly report on the membrane potential alleviate many of these technical challenges (8, 9).
Rise times and decay of the SHG signal can be nearly instantaneous (4), whereas fluorescence signals decay over hundreds of nanoseconds. Compared with fluorescence, SHG probes are background-free because signals only originate from noncentrosymmetrical molecules arranged in an ordered fashion when inserted into the membrane. SHG also provides label-free contrast for structural tissues (10), and high-speed 3D imaging can be vital to answering questions about other biological systems, such as the morphological dynamics of developing hearts (SI Methods, section 1). SHG contrast mechanisms present advantages for high-speed biological imaging, yet the update rate of SHG imaging has lacked sufficient speed to take advantage of these capabilities.
Formation of a high-quality 4D SHG image of a biological specimen through nonlinear optical contrast is challenging due to the low rates of signal generation from the nonlinear optical interaction (10). Conventional 3D SHG imaging uses LSM with tightly focused laser pulses to collect enough photons from each image volume element. The rate of 3D SHG LSM image formation is limited to sub-Hertz update rates because each voxel is serially acquired (SI Methods, section 2).
In this article, we exploit the coherence of SHG scattering combined with interferometric off-axis holography (11), which allows the capture of an entire 3D volume in a 2D image format (12–14), to increase the 4D SHG imaging speed dramatically. We show high-quality, single-acquisition images with subcellular resolution captured in as little as 20 μs, which is 500-fold faster than our previous work (13). Continuously updated images of 3D volumes on the order of 30 μm³ are captured at update rates exceeding 1,500 volumes per second—a more than 190-fold increase over previous 3D imaging results (14) and more than 8,000-fold faster than conventional LSM SHG microscopy (SI Methods, section 8). We derive and experimentally validate the conditions for optimizing coherent image formation for high update rate 4D imaging. Particle tracking with accurate position and velocity recovery is demonstrated. In continuous imaging mode, we are able to follow a 3D volume with a spatial resolution of ∼0.85 μm at more than 1,500 3D volume images per second. We validate the accuracy of SHG holography by comparing standard 3D LSM and holographic reconstructions of tissue samples. Additionally, we demonstrate imaging through thick-scattering tissue slices, showing the potential to image high-speed dynamics from SHG signals generated by in vitro slice animal models. The unparalleled speed of SHG holography, coupled with its ability to image weak object fields with high resolution over a broad field of view, represents a significant step in filling the high-speed 4D imaging gap that currently exists for nonlinear optical modalities.
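As a rough illustration of how an off-axis hologram encodes a complex object field that can be recovered from a single 2D acquisition, the sketch below isolates one interference side-band in the spatial-frequency domain and then refocuses the recovered field numerically by angular-spectrum propagation. This is a generic textbook procedure under assumed parameters (carrier frequency, wavelength, pixel pitch) and is not the authors' actual processing pipeline.

```python
import numpy as np

def reconstruct_off_axis(hologram, pixel_pitch, wavelength, carrier, z):
    """Recover a complex field from an off-axis hologram and refocus it.

    hologram    : 2D real array (a single camera frame)
    pixel_pitch : camera pixel size in meters
    wavelength  : wavelength of the coherent signal in meters
    carrier     : (fx, fy) off-axis carrier frequency in cycles/m (assumed known)
    z           : numerical refocus distance in meters
    """
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)

    # 1) Isolate one interference side-band around the carrier frequency.
    spectrum = np.fft.fft2(hologram)
    radius = 0.25 * np.hypot(*carrier)                 # crude band-pass radius
    sideband = spectrum * (np.hypot(FX - carrier[0], FY - carrier[1]) < radius)

    # 2) Demodulate: remove the carrier so the object spectrum sits at DC.
    x = np.arange(nx) * pixel_pitch
    y = np.arange(ny) * pixel_pitch
    X, Y = np.meshgrid(x, y)
    field = np.fft.ifft2(sideband) * np.exp(-2j * np.pi * (carrier[0] * X + carrier[1] * Y))

    # 3) Angular-spectrum propagation: numerically refocus to depth z.
    k = 2 * np.pi / wavelength
    kz = np.sqrt(np.maximum(0.0, k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))  # complex amplitude at z
```

Because the full volume is encoded in every frame, refocusing to different depths is purely computational, which is why the volumetric update rate can approach the camera frame rate.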

5.
Superresolution fluorescence microscopy overcomes the diffraction resolution barrier and allows the molecular intricacies of life to be revealed with greatly enhanced detail. However, many current superresolution techniques still face limitations and their implementation is typically associated with a steep learning curve. Patterned illumination-based superresolution techniques [e.g., stimulated emission depletion (STED), reversible saturable optically linear fluorescence transitions (RESOLFT), and saturated structured illumination microscopy (SSIM)] require specialized equipment, whereas single-molecule-based approaches [e.g., stochastic optical reconstruction microscopy (STORM), photo-activation localization microscopy (PALM), and fluorescence-PALM (F-PALM)] involve repetitive single-molecule localization, which requires its own set of expertise and is also temporally demanding. Here we present a superresolution fluorescence imaging method, photochromic stochastic optical fluctuation imaging (pcSOFI). In this method, irradiating a reversibly photoswitching fluorescent protein at an appropriate wavelength produces robust single-molecule intensity fluctuations, from which a superresolution picture can be extracted by a statistical analysis of the fluctuations in each pixel as a function of time, as previously demonstrated in SOFI. This method, which uses off-the-shelf equipment, genetically encodable labels, and simple and rapid data acquisition, is capable of providing two- to threefold-enhanced spatial resolution, significant background rejection, markedly improved contrast, and favorable temporal resolution in living cells. Furthermore, both 3D and multicolor imaging are readily achievable. Because of its ease of use and high performance, we anticipate that pcSOFI will prove an attractive approach for superresolution imaging.
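To make the "statistical analysis of the fluctuations in each pixel" concrete, a minimal second-order SOFI computation is sketched below: each pixel of the output is the lagged autocumulant of that pixel's blinking time trace, which suppresses non-fluctuating background and sharpens the effective PSF. This is a generic illustration of the SOFI principle, not the authors' pcSOFI code.

```python
import numpy as np

def second_order_sofi(stack, lag=1):
    """Second-order SOFI image from a movie of blinking/fluctuating emitters.

    stack : array of shape (n_frames, height, width)
    lag   : time lag in frames for the autocorrelation; lag >= 1 keeps
            uncorrelated shot noise out of the cumulant.
    """
    # Keep only the fluctuations around the temporal mean.
    delta = stack - stack.mean(axis=0, keepdims=True)

    # Second-order autocumulant per pixel: <dF(t) * dF(t + lag)> averaged over time.
    n = delta.shape[0] - lag
    sofi = (delta[:n] * delta[lag:lag + n]).mean(axis=0)

    # Values scale with the square of each emitter's brightness, so blinking
    # labels dominate while constant background largely cancels out.
    return sofi
```

Because the second-order cumulant of a Gaussian-like PSF is a Gaussian with a √2-smaller width, the raw image already carries a √2 lateral resolution gain; deconvolution or higher cumulant orders extend this toward the two- to threefold enhancement quoted above.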

6.
Revascularization following brain trauma is crucial to the repair process. We used optical micro-angiography (OMAG) to study endogenous revascularization in living mice following brain injury. OMAG is a volumetric optical imaging method capable of in vivo mapping of localized blood perfusion within the scanned tissue beds down to capillary level imaging resolution. We demonstrated that OMAG can differentiate revascularization progression between traumatized mice with and without soluble epoxide hydrolase (sEH) gene deletion. The time course of revascularization was determined from serial imaging of the traumatic region in the same mice over a one-month period of rehabilitation. Restoration of blood volume at the lesion site was more pronounced in sEH knockout mice than in wild-type mice as determined by OMAG. These OMAG measurements were confirmed by histology and showed that the sEH knockout effect may be involved in enhancing revascularization. The correlation of OMAG with histology also suggests that OMAG is a useful imaging tool for real-time in vivo monitoring of post-traumatic revascularization and for evaluating agents that inhibit or promote endogenous revascularization during the recovery process in small rodents.

7.
We present a plane-scanning RESOLFT [reversible saturable/switchable optical (fluorescence) transitions] light-sheet (LS) nanoscope, which fundamentally overcomes the diffraction barrier in the axial direction via confinement of the fluorescent molecular state to a sheet of subdiffraction thickness around the focal plane. To this end, reversibly switchable fluorophores located right above and below the focal plane are transferred to a nonfluorescent state at each scanning step. LS-RESOLFT nanoscopy offers wide-field 3D imaging of living biological specimens with low light dose and axial resolution far beyond the diffraction barrier. We demonstrate optical sections that are 5–12-fold thinner than their conventional diffraction-limited LS analogs.
Far-field nanoscopy (1, 2) methods discern features within subdiffraction distances by briefly forcing their molecules into two distinguishable states for the time period of detection. Typically, fluorophores are switched between a signaling “on” and a nonsignaling (i.e., dark) “off” state. Depending on the switching and fluorescence registration strategy used, these superresolution techniques can be categorized into coordinate-stochastic and coordinate-targeted approaches (2). The latter group of methods, comprising the so-called RESOLFT [reversible saturable/switchable optical (fluorescence) transitions] (1, 3–7) approaches, has been realized using patterns of switch-off light with one or more zero-intensity points or lines, to single out target point (zero-dimensional) or line (1D) coordinates in space where the fluorophores are allowed to assume the on state. The RESOLFT idea can also be implemented in the inverse mode, by using switch-on light and confining the off state. In any case, probing the presence of molecules in new sets of points or lines at every scanning step produces images.
Owing to the nature of the on and off states involved (the first excited electronic state and the ground state), stimulated emission depletion (STED) (3) and saturated structured illumination microscopy (SSIM) (8), which both qualify as variants of the RESOLFT principle, typically apply light intensities in the range of MW/cm² and above. Especially when imaging sensitive samples where photoinduced changes must be avoided, RESOLFT is preferably realized with fluorophores that lead to the same factor of resolution improvement at much lower intensities of state-switching light. Reversibly switchable fluorescent proteins (RSFPs) are highly suitable for this purpose (4–7, 9), as transitions between their metastable on and off states require 5 orders of magnitude lower threshold intensities than STED/SSIM to guarantee switch-off. Suitable spectral properties, relatively fast millisecond switching kinetics, and high photostability of recently developed yellow-green-emitting RSFPs like rsEGFP (5), rsEGFP2 (7), and rsEGFP(N205S) (10) compared with early RSFPs have indeed enabled RESOLFT nanoscopy in living cells and tissues. To date, RSFP-based RESOLFT has achieved resolution improvements by factors of 4–5 in rsEGFP2-labeled samples (7). To further reduce the imaging time, massive parallelization of scanning has been reported (10). However, the diffraction-limited axial resolution and lack of background suppression restrict applications to thin samples.
Imaging applications typically require careful tuning of imaging parameters including speed, contrast, photosensitivity, and spatial resolution, depending on the information that is sought.
Light-sheet fluorescence microscopy (LSFM) (11–15) stands out for its ability to balance most of these parameters for 3D imaging of living specimens. Recently reenacted as the selective plane illumination microscope (13), this microscopy mode has sparked increasing interest notably because of its short acquisition times in 3D imaging and low phototoxicity in living specimens. It excites fluorophores only in a thin diffraction-limited slice of the sample, perpendicular to the direction of fluorescence detection. The LS is generated by a cylindrical lens that focuses an expanded laser beam in only one direction onto the specimen or into the back-focal plane of an illumination objective. Alternatively, a single beam is quickly moved as a “virtual” LS (16) across a specimen section.
In such conventional LSFM imaging, the lateral resolution is determined by the numerical aperture (N.A.) of the detection objective (17), whereas axial resolution is given by the LS thickness, provided the latter is thinner than the axial extent of the point-spread function describing the imaging process from the focal plane of the detecting lens to the camera. In a previous study, the axial resolution of LSFM was pushed to the diffraction limit by using the full aperture of the illumination objective with Gaussian beams; this was carried out for practically useful combinations of N.A. (e.g., 0.8 for both illumination and detection objectives) permissible in light of the geometrical constraints given by the objective lens dimensions (18). High-N.A. illumination comes with short Rayleigh ranges of Gaussian beams, which inherently limit the field of view (FOV) along the direction of illumination. Scanned Bessel beams for diffraction-limited excitation with a virtual LS (19–21) typically offer larger FOVs (22), but side lobes broaden the scanned LS in the axial direction and contribute to phototoxicity outside of the focal plane of detection (20). A more complex approach has used Bessel-beam excitation in combination with structured illumination to obtain near-isotropic (but still diffraction-limited) resolution as measured on fluorescent beads (20), albeit at the cost of acquisition time and reduced contrast due to fluorescence generated by the side lobes. In different work, axial resolution has also been improved about fourfold by acquiring two complementary orthogonal views of the sample using two alternating LSs, followed by computationally fusing image information with a deconvolution incorporating both views (23). LS approaches have also helped suppress out-of-focus background for single-molecule imaging in biological situations (e.g., in ref. 24), including at superresolution (25–27).
Slight axial resolution improvement beyond the diffraction barrier has been demonstrated by overlapping a Gaussian excitation LS with a STED LS featuring a zero-intensity plane (28). Due to scattering and possibly additional aberrations caused by the wavelength difference between excitation and STED light, the maximal achievable resolution in biological specimens was severely limited. This was the case even in fixed samples. A successful application of LS-STED to living cells or organisms has not been reported.
The relatively high average STED laser power required for high resolution gains calls for developing a coordinate-targeted superresolution LS approach with low-power operation, meaning a concept that does not solely rely on changing the way the light is directed to, or collected from, the sample, but one that harnesses an “on–off” transition for improved feature separation.

8.
Recently, single-molecule imaging and photocontrol have enabled superresolution optical microscopy of cellular structures beyond Abbe's diffraction limit, extending the frontier of noninvasive imaging of structures within living cells. However, live-cell superresolution imaging has been challenged by the need to image three-dimensional (3D) structures relative to their biological context, such as the cellular membrane. We have developed a technique, termed superresolution by power-dependent active intermittency and points accumulation for imaging in nanoscale topography (SPRAIPAINT), that combines imaging of intracellular enhanced YFP (eYFP) fusions (SPRAI) with stochastic localization of the cell surface (PAINT) to image two different fluorophores sequentially with only one laser. Simple light-induced blinking of eYFP and collisional flux onto the cell surface by Nile red are used to achieve single-molecule localizations, without any antibody labeling, cell membrane permeabilization, or thiol-oxygen scavenger systems required. Here we demonstrate live-cell 3D superresolution imaging of Crescentin-eYFP, a cytoskeletal fluorescent protein fusion, colocalized with the surface of the bacterium Caulobacter crescentus using a double-helix point spread function microscope. Three-dimensional colocalization of intracellular protein structures and the cell surface with superresolution optical microscopy opens the door for the analysis of protein interactions in living cells with excellent precision (20–40 nm in 3D) over a large field of view (12 × 12 μm).

9.
Scientific cinematography using ultrafast optical imaging is a common tool to study motion. In opaque organisms or structures, X-ray radiography captures sequences of 2D projections to visualize morphological dynamics, but for many applications full four-dimensional (4D) spatiotemporal information is highly desirable. We introduce in vivo X-ray cine-tomography as a 4D imaging technique developed to study real-time dynamics in small living organisms with micrometer spatial resolution and subsecond time resolution. The method enables insights into the physiology of small animals by tracking the 4D morphological dynamics of minute anatomical features as demonstrated in this work by the analysis of fast-moving screw-and-nut–type weevil hip joints. The presented method can be applied to a broad range of biological specimens and biotechnological processes.
The best method to study morphological changes of anatomic features and physiological processes is to observe their dynamics in 4D, that is, in real time and in 3D space. To achieve this, we have developed in vivo X-ray cine-tomography to gain access to morphological dynamics with unrivaled 4D spatiotemporal resolution. This opens the way to a wide range of hitherto inaccessible, systematic investigations of small animals and biological internal processes such as breathing, circulation, digestion (1), reproduction, and locomotion (2).
At the micrometer resolution range, state-of-the-art optical imaging techniques can achieve high magnifications to visualize tissues and even individual cells for 4D studies. These methods, however, are confined to transparent or fluorescent objects, or are limited either by a low penetration depth (<1 mm) or poor time resolution (3). For optically opaque living organisms, X-ray imaging methods are highly appropriate due to the penetrating ability of the radiation. Modern synchrotron radiation facilities provide brilliant and partially coherent radiation suitable for high-resolution volume imaging methods such as synchrotron radiation X-ray computed microtomography (SR-µCT). For static specimens, SR-µCT has proven to be a powerful tool to study small animal morphology in 3D (4–6). The benefits of various physical contrast mechanisms, high spatial resolution, and short measuring times, as well as enormous sample throughput compared with laboratory X-ray setups, have led to its widespread use in life sciences.
Real-time in vivo X-ray imaging with micrometer spatial resolution has so far been realized by recording time sequences of 2D projection radiographs of different organisms (1, 6, 7), providing time information about functional dynamics but losing any information about the third spatial dimension.
Recently, 4D in vivo X-ray experiments have been performed to study cell migration in frog embryos (8, 9) using tomographic sequences of a few seconds of exposure time per tomogram interrupted by longer nonexposure time slots. In this way the authors followed relatively slow dynamics and morphological changes during embryonic development with 2-µm resolution over total time intervals of several hours. The fastest 4D time series yet reported were realized with a temporal resolution of 0.5 s and spatial resolution of 25 µm (10), applied to a living caterpillar used as a test specimen for imaging, but without any analysis of dynamics.
In this paper, we demonstrate the quantitative 4D investigation of morphological dynamics by in vivo X-ray 4D cine-tomography, introduced here as the combination of ultrafast SR-µCT and motion analysis procedures.
This approach allows us to investigate previously inaccessible 3D morphological dynamics in small animals, presently with feature sizes in the micrometer range and with temporal resolution down to a few tens of milliseconds. In the past, ultrafast in vivo imaging was hardly possible for such applications, due to the strongly competing requirements for simultaneous high contrast, high signal-to-noise ratio (SNR), and concurrent low radiation dose, as well as the need for simultaneous high spatial resolution and maximum temporal resolution.
In the following, we describe how in vivo X-ray 4D cine-tomography meets the above challenges by optimizing image contrast, SNR, and spatial and temporal resolution in the ultrafast SR-µCT system and by establishing a dedicated data analysis pipeline, all within a unified framework (Fig. S1). We demonstrate the potential of the technique by investigating morphological dynamics in fast-moving weevils, focusing here on the exoskeletal joints.

10.
11.
The signal and resolution during in vivo imaging of the mouse brain are limited by sample-induced optical aberrations. We find that, although the optical aberrations can vary across the sample and increase in magnitude with depth, they remain stable for hours. As a result, two-photon adaptive optics can recover diffraction-limited performance to depths of 450 μm and improve imaging quality over fields of view of hundreds of microns. Adaptive optical correction yielded a fivefold signal enhancement for small neuronal structures and a threefold increase in axial resolution. The corrections allowed us to detect smaller neuronal structures at greater contrast and also improve the signal-to-noise ratio during functional Ca²⁺ imaging in single neurons.
The ability to visualize biological systems in vivo has been a major attraction of optical microscopy, because studying biological systems as they evolve in their natural, physiological state provides relevant information that in vitro preparations often do not allow (1). However, for conventional optical microscopes to achieve their optimal, diffraction-limited resolution, the specimen needs to have identical optical properties to those of the immersion media for which the microscope objective is designed. For example, one of the most widely applied microscopy techniques for in vivo imaging, two-photon fluorescence microscopy, often uses water-dipping objectives. Because biological samples are composed of structures (i.e., proteins, nucleic acids, and lipids) with refractive indices different from that of water, they induce optical aberrations in the incoming excitation wave and result in an enlarged focal spot within the sample and a concomitant deterioration of signal and resolution (2, 3). As a result, the resolution and contrast of optical microscopes are compromised in vivo, especially deep in tissue.
Many questions related to how the brain processes information on both the neuronal circuit level and the cell biological level can be addressed by observing the morphology and activity of neurons inside a living and, preferably, awake and behaving mouse (1). In a typical experiment, an area of the skull is surgically removed and replaced with a cover glass to provide optical access to the underlying structure of interest (4). For imaging during behavior, the cover glass is often attached to an optically transparent plug embedded in the skull to improve mechanical stability and to prevent the skull from growing back and blocking the optical access (5, 6) (Fig. 1A). Before the excitation light of a two-photon microscope reaches the desired focal plane inside the brain, it has to traverse first the cranial window and then the brain tissue, both with optical properties different from water. Thus, they both impart optical aberrations on the excitation light, which leads to a distorted focus, even at the surface of the brain.
Fig. 1. AO improves imaging quality in vivo: (A) schematic of the geometry for in vivo imaging in the mouse brain, showing the cranial window (green) embedded in the skull (pink) to provide stability to the brain as well as optical access. (B) Lateral and axial images of a 2-μm-diameter bead 170 μm below the brain surface before and after AO correction. (C) Axial signal profiles along the white line in B before and after AO correction. (D) Lateral and axial images of GFP-expressing dendritic processes over a field centered on the bead in A. (E) Axial signal profiles along the white line in D. (F) Measured aberrated wavefront in units of excitation wavelength. (G) Lateral and axial images of GFP-expressing neurons 110 μm below the surface of the brain with and without AO correction. (H) Axial signal profiles along the white line in G. (I) Axial signal profiles along the blue line in G. (J) Aberrated wavefront measured in units of excitation wavelength. (Scale bars: 2 μm in B and 10 μm elsewhere.)
These sample-induced aberrations can be corrected with adaptive optics (AO) to recover diffraction-limited resolution. In AO, a wavefront-shaping device modifies the phase of the excitation light before it enters the sample in such a way as to cancel out the phase errors induced by the sample (7). Originally developed for applications in astronomy, the most common AO setup uses a sensor to measure the wavefront after it passes through the aberrating medium (e.g., the atmosphere in astronomical AO). This information is then used to control the wavefront-shaping device, which is usually a deformable mirror or a spatial light modulator (SLM) (8). However, this direct wavefront-sensing approach is not suitable for imaging in vivo. For one, it is not possible to place the wavefront sensor past the aberrating medium, which in this case would still be within the brain. Other approaches where the wavefront of the light reflected from the sample is directly measured are limited, because the strong scattering of light in brain tissue scrambles the information in the reflected wavefront (9, 10).
Recently, we developed an image-based AO approach that does not require direct wavefront measurement and that is insensitive to sample scattering (11). By comparing images of the sample taken with different segments of the pupil illuminated, the local slope of the wavefront is measured from image shift. The phase offset for each segment is then either measured directly via interference or calculated by using phase reconstruction algorithms similar to those developed for astronomical AO. This pupil-segmentation-based approach as applied to two-photon fluorescence microscopy can recover diffraction-limited performance in both biological and nonbiological samples, including fixed brain slices. The question remains, however, whether the same enhancements can be achieved during two-photon imaging in the intact mouse. Issues that must be addressed include how fast optical aberrations evolve in vivo, what the magnitude and complexity of their spatial variation are, and to what degree adaptive optical correction can improve both the signal and the resolution during morphological and/or functional imaging. Here we answer these questions and demonstrate that we can recover diffraction-limited resolution at a depth of 450 μm in the cortex of the living mouse.
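A rough sketch of the pupil-segmentation measurement described above: for each image taken with a single pupil segment illuminated, the shift relative to a reference image is estimated by cross-correlation and converted into a local wavefront tilt. The function names, segment handling, and shift-to-tilt conversion are illustrative assumptions, not the authors' published implementation.

```python
import numpy as np

def image_shift(reference, image):
    """Estimate the (dy, dx) translation between two images from the peak of
    their FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(image)))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    shifts = np.array(peak, dtype=float)
    shape = np.array(corr.shape, dtype=float)
    wrap = shifts > shape / 2          # map large positive indices to negative shifts
    shifts[wrap] -= shape[wrap]
    return shifts                      # in pixels

def local_wavefront_tilts(reference, segment_images, pixel_size, focal_length):
    """Convert per-segment image shifts into local wavefront slopes.

    segment_images : images acquired with only one pupil segment illuminated each
    pixel_size     : pixel size in the sample plane (meters)
    focal_length   : effective focal length mapping a beam tilt to a focal shift (meters)
    """
    tilts = []
    for img in segment_images:
        dy, dx = image_shift(reference, img) * pixel_size
        # A lateral focal displacement d produced by one pupil segment corresponds
        # to a local wavefront slope of roughly d / focal_length over that segment.
        tilts.append((dy / focal_length, dx / focal_length))
    return np.array(tilts)  # integrate/reconstruct these slopes to obtain the phase map
```

The corrective pattern applied to the deformable mirror or SLM is then the negative of the reconstructed aberration, analogous to the direct-sensing astronomical case described in the text.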

12.
We have combined ultrasensitive magnetic resonance force microscopy (MRFM) with 3D image reconstruction to achieve magnetic resonance imaging (MRI) with resolution <10 nm. The image reconstruction converts measured magnetic force data into a 3D map of nuclear spin density, taking advantage of the unique characteristics of the “resonant slice” that is projected outward from a nanoscale magnetic tip. The basic principles are demonstrated by imaging the ¹H spin density within individual tobacco mosaic virus particles sitting on a nanometer-thick layer of adsorbed hydrocarbons. This result, which represents a 100 million-fold improvement in volume resolution over conventional MRI, demonstrates the potential of MRFM as a tool for 3D, elementally selective imaging on the nanometer scale.

13.
We demonstrate how a conventional confocal spinning-disk (CSD) microscope can be converted into a doubly resolving image scanning microscopy (ISM) system without changing any part of its optical or mechanical elements. Making use of the intrinsic properties of a CSD microscope, we illuminate stroboscopically, generating an array of excitation foci that are moved across the sample by varying the phase between stroboscopic excitation and rotation of the spinning disk. ISM then generates an image with nearly doubled resolution. Using conventional fluorophores, we have imaged single nuclear pore complexes in the nuclear membrane and aggregates of GFP-conjugated Tau protein in three dimensions. Multicolor ISM was demonstrated on cytoskeleton-associated structural proteins and on 3D four-color images including MitoTracker and Hoechst staining. The simple adaptation of conventional CSD equipment allows superresolution investigations of a broad variety of cell biological questions.
Fluorescence microscopy is an extremely powerful research tool in the life sciences. It combines the highest sensitivity with molecular specificity and exceptional image contrast. However, as with all light-based microscopy techniques, its resolution is limited by the diffraction of light to a typical lateral resolution of ∼200 nm and an axial resolution of ∼500 nm (for 500-nm wavelength light). Only recently, this diffraction limit was broken by using the quantum, or nonlinear, character of fluorescence excitation and emission. The first of these superresolution methods was stimulated emission depletion (STED) microscopy (1). Later, methods based on single-molecule localization, such as photoactivated localization microscopy (PALM) (2) and stochastic optical reconstruction microscopy (STORM) (3), joined the field. These methods “break” the diffraction limit because they all use principles that operate beyond the diffraction of light.
Although still bound to light diffraction, increased spatial resolution can be achieved in a class of advanced resolution methods that exploit a clever combination of excitation and detection modalities (4–7). Although these methods do not reach the resolution achievable with STED, PALM, STORM, and related techniques, they do not require any specialized labels or high excitation intensities, and they may be applied to any fluorescent sample at any excitation/emission wavelength. The most prominent example of this class is structured illumination microscopy (SIM) (5), in which one scans a sample with a structured illumination pattern while taking images with a wide-field imaging system. Meanwhile, several commercial instruments for SIM have become available. The disadvantages of SIM are its technical complexity, reflected in the rather large cost of the commercially available systems, and its sensitivity to optical imperfections and aberrations, which are unavoidable in biological samples.
In a theoretical study in 1988, Sheppard (8) pointed out that it is possible to double the resolution of a scanning confocal microscope in a manner closely related to SIM. In SIM, one starts with a conventional wide-field imaging microscope, and by implementing a scanning structured illumination, one subsequently obtains, after appropriate deconvolution of the recorded images, an image with increased resolution.
In image-scanning microscopy (ISM), as proposed by Sheppard, one starts with a conventional confocal microscope that uses a diffraction-limited laser focus for scanning a sample but replaces the point detector typically used for recording the excited fluorescence signal with an imaging detector. Here, too, an image with enhanced resolution is obtained after applying an appropriate algorithm to the recorded images.
We first realized this idea experimentally in 2010 (4), indeed demonstrating a substantial increase in resolution. The major drawback of this implementation was the slowness of the imaging. At each scan position of the laser focus, an image of the excited region had to be recorded, limiting the scan speed to the frame rate of the imaging camera used. For the small scan area of 2 µm × 2 µm shown with the original ISM setup, data acquisition took 25 s. In 2012, York et al. (6) demonstrated that this limitation may be overcome by using a multifocal excitation scheme. They generated an array of multiple excitation foci by introducing a digital micro-mirror device (DMD) into the excitation path of a wide-field microscope. Using this system, ISM images can be obtained with excellent speed, in two excitation/emission wavelengths (two-color imaging) and in three dimensions. However, this approach requires the incorporation of a DMD, with the attendant need for perfect optical alignment.
Here, we demonstrate that existing imaging detector-based confocal systems can be converted easily into a doubly resolving ISM system. This mainly includes two kinds of microscopes that are widely available in research laboratories: confocal spinning-disk (CSD) microscopes and rapid laser scanning confocal microscopes with an imaging camera as the detector. We present the results obtained with a CSD system.
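The "appropriate algorithm" mentioned above is usually pixel reassignment: every detected photon is assigned to a point lying halfway between the excitation focus and the detector pixel where it was recorded (for matched excitation and detection PSFs), and the reassigned contributions are summed. The sketch below assumes frames already registered in sample coordinates and a reassignment factor of 0.5; it illustrates the generic ISM principle rather than the published CSD implementation.

```python
import numpy as np

def ism_pixel_reassignment(frames, scan_positions, out_shape, alpha=0.5, upsample=2):
    """Build an ISM image by pixel reassignment.

    frames         : list of 2D camera frames in sample coordinates, one per scan position
    scan_positions : list of (y, x) excitation-focus positions, in camera-pixel units
    out_shape      : (ny, nx) of the camera/sample grid
    alpha          : reassignment factor (0.5 for identical excitation and detection PSFs)
    upsample       : oversampling of the output grid, since reassigned coordinates
                     fall on half-pixel positions
    """
    out = np.zeros((out_shape[0] * upsample, out_shape[1] * upsample))
    for frame, (ys, xs) in zip(frames, scan_positions):
        yd, xd = np.indices(frame.shape)          # detector pixel coordinates
        # Reassign each pixel to the point a fraction alpha of the way from the
        # excitation focus toward the detection pixel.
        yr = np.rint(upsample * (ys + alpha * (yd - ys))).astype(int)
        xr = np.rint(upsample * (xs + alpha * (xd - xs))).astype(int)
        ok = (yr >= 0) & (yr < out.shape[0]) & (xr >= 0) & (xr < out.shape[1])
        np.add.at(out, (yr[ok], xr[ok]), frame[ok])
    return out
```

The summed image has a PSF narrowed by roughly √2; the nearly doubled resolution quoted above is typically reached after an additional deconvolution or Fourier-reweighting step.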

14.
Bioluminescence imaging (BLI) and fluorescence imaging (FI) allow for non-invasive detection of viable microorganisms from within living tissue and are thus ideally suited for in vivo probiotic studies. Highly sensitive optical imaging techniques detect signals from the excitation of fluorescent proteins or from luciferase-catalyzed oxidation reactions. The excellent relation between microbial numbers and photon emission allows for quantification of tagged bacteria in vivo with extreme accuracy. More information is gained over a shorter period compared to traditional pre-clinical animal studies. This review summarizes the latest advances in in vivo bioluminescence and fluorescence imaging and points out the advantages and limitations of different techniques. The practical application of BLI and FI in the tracking of lactic acid bacteria in animal models is addressed.

15.
Multifocal structured illumination microscopy (MSIM) provides a twofold resolution enhancement beyond the diffraction limit at sample depths up to 50 µm, but scattered and out-of-focus light in thick samples degrades MSIM performance. Here we implement MSIM with a microlens array to enable efficient two-photon excitation. Two-photon MSIM gives resolution-doubled images with better sectioning and contrast in thick scattering samples such as Caenorhabditis elegans embryos, Drosophila melanogaster larval salivary glands, and mouse liver tissue.
Fluorescence microscopy is an invaluable tool for biologists. Protein distributions in cells have an interesting structure down to the nanometer scale, but features smaller than 200–300 nm are blurred by diffraction in widefield and confocal fluorescence microscopes. Superresolution techniques like photoactivated localization microscopy (1), stochastic optical reconstruction microscopy (2), or stimulated emission depletion (STED) (3) microscopy allow the imaging of details beyond the limit imposed by diffraction, but usually trade acquisition speed or straightforward sample preparation. And although STED can provide resolution down to 40 nm, STED-specific fluorophores are recommended and it often requires light intensities that are orders of magnitude above those used in widefield and confocal microscopy. On the other hand, structured illumination microscopy (SIM) (4) gives twice the resolution of a conventional fluorescence microscope with light intensities on the order of those used in widefield microscopes and can be used with most common fluorophores. SIM uses contributions from both the excitation and emission point spread functions (PSFs) to substantially improve the transverse resolution and is generally performed by illuminating the sample with a set of sharp light patterns and collecting fluorescence on a multipixel detector, followed by image processing to recover superresolution detail from the interaction of the light pattern with the sample. A related technique, image scanning microscopy (ISM), uses a scanned diffraction-limited spot as the light pattern (5, 6). Multifocal SIM (MSIM) parallelizes ISM by using many excitation spots (7), and has been shown to produce optically sectioned images with ∼145-nm lateral and ∼400-nm axial resolution at depths up to ∼50 µm and at ∼1 Hz imaging frequency. In MSIM, the sample is excited with a multifocal excitation pattern, and the resulting fluorescence at the multiple foci is pinholed, locally scaled, and summed to generate an image [multifocal-excited, pinholed, scaled, and summed (MPSS)] with √2-improved resolution relative to widefield microscopy, and improved sectioning compared with SIM due to confocal-like pinholing. Deconvolution is applied to recover the final MSIM image, which has a full factor of 2 resolution improvement over the diffraction limit.
MSIM works well in highly transparent samples (such as zebrafish embryos), but performance degrades in light scattering samples (such as the Caenorhabditis elegans embryo). Imaging in scattering samples can be improved by two-photon microscopy (8), and although the longer excitation wavelength reduces the resolution in nondescanned detection configurations, this can be partially offset by descanned detection and the addition of a confocal pinhole into the emission path. Whereas the nondescanned mode collects the most signal, the addition of a pinhole in the emission path of a point-scanning system can improve resolution when the pinhole is closed (9).
In practice, this is seldom done for biological specimens because signal-to-noise decays as the pinhole diameter decreases (9–11).
SIM is an obvious choice for improving resolution without a dramatic loss in signal-to-noise, but the high photon density needed for efficient two-photon excitation is likely difficult to achieve in the typical widefield SIM configuration. This has led to other methods, such as line scanning (12), to achieve better depth penetration than confocal microscopy and up to twofold improvements in axial resolution (but with only ∼20% gain in lateral resolution). Multiphoton Bessel plane illumination (13) achieved an anisotropic lateral resolution of 180 nm (only in one direction) but requires an instrument design with two objectives in an orthogonal configuration. Cells and embryos can be readily imaged, but the multiaxis design may hinder the intravital imaging of larger specimens. Here, a combination of multiphoton excitation with MSIM is shown to improve both lateral and axial resolutions twofold compared with conventional multiphoton imaging while improving the sectioning and contrast of MSIM in thick, scattering samples.

16.
Raman spectroscopy, amplified by surface enhanced Raman scattering (SERS) nanoparticles, is a molecular imaging modality with ultra-high sensitivity and the unique ability to multiplex readouts from different molecular targets using a single wavelength of excitation. This approach holds exciting prospects for a range of applications in medicine, including identification and characterization of malignancy during endoscopy and intraoperative image guidance of surgical resection. The development of Raman molecular imaging with SERS nanoparticles is presently limited by long acquisition times, poor spatial resolution, small field of view, and difficulty in animal handling with existing Raman spectroscopy instruments. Our goal is to overcome these limitations by designing a bespoke instrument for Raman molecular imaging in small animals. Here, we present a unique and dedicated small-animal Raman imaging instrument that enables rapid, high-spatial-resolution spectroscopic imaging over a wide field of view (>6 cm²), with simplified animal handling. Imaging of SERS nanoparticles in small animals demonstrated that this small animal Raman imaging system can detect multiplexed SERS signals in both superficial and deep tissue locations at least an order of magnitude faster than existing systems without compromising sensitivity.
Raman spectroscopy is a powerful bioanalytical tool based on the inelastic scattering of photons by molecular bonds; as each bond has a characteristic vibrational energy, the spectrum of Raman scatter peaks provides a unique fingerprint for a given sample. In vivo applications previously were limited by the relatively weak Raman effect (fewer than one event per 10⁷ elastic scattering events) and poor depth of penetration (<1 mm). Recently, surface enhanced Raman scattering (SERS) was shown to overcome these limitations, enabling the use of Raman spectroscopy for molecular imaging in small living subjects (1–4).
SERS is a plasmonic effect in which molecules adsorbed on a rough metal surface experience a >10⁶-fold increase in Raman scatter intensity (5). The SERS enhancement may be exploited in vivo by coating Raman-active molecules onto gold nanoparticles (6), which can be engineered to target specific disease markers (7). Advantages of this approach, as opposed to other optical spectroscopy techniques, include a high sensitivity of detection, a low intrinsic background signal, the environmental and optical stability of nanoparticles, and the ability to multiplex signals from different biological targets (6). Therefore, Raman molecular imaging is not only attractive for studies in small animals but is also being developed for clinical translation through endoscopy and as an intraoperative imaging approach to guide surgical resection (8, 9). Despite the substantial increase in signal afforded by the SERS approach, reports of in vivo Raman imaging using SERS are relatively rare. Expansion of this promising technique currently is limited by the lack of a dedicated Raman spectroscopy instrument specifically optimized for small animal imaging.
Most SERS studies in vivo use adapted Raman microscopy systems, in which a focused laser spot is scanned in two spatial dimensions (x and y) over the sample and a spectrum is recorded at each (x,y) position on a linear (1D) array CCD.
Although this approach maximizes the power density at the sample and the sensitivity of spectral detection, it imposes several limitations that have severely affected the wider development of Raman molecular imaging. First, raster scanning a focused laser spot over the sample to acquire spectroscopic images requires prohibitively long acquisition times (>5 h) to cover an area of just a few square centimeters, making whole-body imaging impossible within the limits of permissible anesthesia duration (4, 10). Second, the use of a tightly focused laser spot makes quantification challenging because of signal fluctuations with the contours of the animal. Third, as spatial resolution is traded against scanning time, relatively poor-resolution maps (∼0.5–1 mm) usually are acquired. Finally, the animal must be translated under the laser spot, which adds undesirable complexity to the experimental design. Although several microscope manufacturers have developed methods to increase scanning speeds and sample coverage, addressing two of these points (e.g., Renishaw inVia Streamline and Horiba SWIFT DuoScan), these instruments still suffer from long scan times (e.g., ∼1 h for 1 cm²). Furthermore, point-scanning systems that have been reported for wide-area (>20 cm²) Raman spectroscopic studies of artwork require long scan times because of the small spot size and duration of sample exposure (11).
In addition to point scanning, it is also possible to perform direct, or “global,” Raman imaging (10) for in vivo applications (12, 13), in which a wide area (up to 2 cm²) is illuminated and all spatial points of the image are collected simultaneously on a 2D CCD at a single detection wavelength. Global Raman imaging uses a lower power density for illumination, but for in vivo imaging this is not an issue because ultimately we are limited by the American National Standards Institute’s laser safety standards (14). Collecting signal from a narrow spectral band, however, sacrifices the rich spectral fingerprint information, possibly reducing detection sensitivity and making both multiplexing and quantification more challenging (13).
To advance the application of Raman spectroscopy in molecular imaging, we designed a unique and dedicated small animal Raman imaging (SARI) instrument. We used a hybrid of the two approaches described above to maximize acquisition speed and spatial resolution, without sacrificing sensitivity or spectral information. Our SARI instrument is a line-scanning system in which a laser line is raster scanned along the x and y axes. A high-sensitivity 2D electron-multiplying CCD (EMCCD) collects both the spatial information for the y axis, parallel to the entrance slit of the spectrometer, and the Raman spectral fingerprint, dispersed perpendicularly. This hybrid method has also been used in Raman microscopy and was recently demonstrated as an essential tool for live-cell Raman imaging in vitro (15). In our system, the laser line is scanned rapidly over a stationary sample using 2D galvanometric mirrors, meaning that a wide area can be covered without the need for sample translation.
Furthermore, our optical design was optimized specifically to maintain a consistent line profile over a wide area (>6 cm²) to enable small animal imaging.
We demonstrate here that the SARI system enables rapid, high-sensitivity, multiplexed nanoparticle detection in vivo, mapping a large sample area (e.g., a full mouse torso) at least an order of magnitude faster than existing microscopy systems, while maintaining the sensitivity, multiplexing capability, and spectral/spatial resolution necessary for this application.
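As a small illustration of the multiplexing step mentioned throughout this entry, the per-pixel weights of different SERS nanoparticle "flavors" can be recovered from a measured spectrum by linear unmixing against known reference spectra; the direct least-squares sketch below (with an optional non-negativity clamp) is a generic approach, not the instrument's actual processing chain.

```python
import numpy as np

def unmix_sers(spectra, references):
    """Estimate per-pixel SERS nanoparticle weights by linear spectral unmixing.

    spectra    : array (n_pixels, n_wavenumbers) of measured Raman spectra
    references : array (n_components, n_wavenumbers) of reference spectra, ideally
                 including a background/autofluorescence component as an extra row
    returns    : array (n_pixels, n_components) of non-negative weights
    """
    # Solve references.T @ w = spectrum for every pixel in one least-squares call.
    weights, _, _, _ = np.linalg.lstsq(references.T, spectra.T, rcond=None)
    return np.clip(weights.T, 0.0, None)   # clamp small negative fit values
```

Reshaping the returned weights back onto the scan grid gives one map per nanoparticle flavor, which is what allows several molecular targets to be read out from a single excitation wavelength.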

17.
Visualization of three-dimensional (3D) morphological changes in the subcellular structures of a biological specimen is a major challenge in life science. Here, we present integrated chip-based optical nanoscopy combined with quantitative phase microscopy (QPM) to obtain the 3D morphology of liver sinusoidal endothelial cells (LSEC). LSEC have a unique morphology with small nanopores (50–300 nm in diameter) in the plasma membrane, called fenestrations. The fenestrations are grouped in discrete clusters, which are around 100 to 200 nm thick. Thus, imaging and quantification of fenestrations and sieve plate thickness require resolution and sensitivity of sub-100 nm along both the lateral and the axial directions, respectively. In chip-based nanoscopy, the optical waveguides are used both for hosting and illuminating the sample. The fluorescence signal is captured by an upright microscope, which is converted into a Linnik-type interferometer to sequentially acquire both superresolved images and phase information of the sample. The multimodal microscope provided an estimate of the fenestration diameter of 119 ± 53 nm and an average sieve-plate thickness of 136.6 ± 42.4 nm, assuming a constant cell-membrane refractive index of 1.38. Further, LSEC were treated with cytochalasin B to demonstrate the possibility of precisely detecting changes in cell height. The mean phase value of the fenestrated area in normal and treated cells was found to be 161 ± 50 mrad and 109 ± 49 mrad, respectively. The proposed multimodal technique offers nanoscale visualization of both the lateral size and the thickness map, which would be of broader interest in the fields of cell biology and bioimaging.

Far-field optical nanoscopy techniques are frequently used to visualize subcellular structures in biological specimens by surpassing the diffraction limit. Optical nanoscopy encompasses a plethora of techniques, including stimulated emission depletion microscopy (1), structured illumination microscopy (SIM) (2), different variants of single-molecule localization microscopy (SMLM), such as photo-activated localization microscopy (3) and direct stochastic optical reconstruction microscopy (dSTORM) (4), and intensity fluctuation–based techniques such as superresolution optical fluctuation imaging (5). These techniques can detect subcellular structures (<200 nm) of biological specimens such as lipids, proteins, membrane structures, microtubules, and nucleic acids by specific fluorescence tagging (6). Each technique has its respective advantages and disadvantages; for example, SIM has gained popularity for live-cell imaging due to its fast image acquisition, but at limited spatial resolution (7). dSTORM, on the other hand, is slower but offers higher resolution, for example for the characterization of viral proteins (8) and the imaging of actin filaments in mammalian cells (9, 10). To reduce the complexity of the typical SMLM setup using a total internal reflection fluorescence (TIRF) configuration, a photonic chip-based optical nanoscopy system was recently proposed (11–13). In the chip-based system, a photonic integrated circuit replaces the usual free-space optics for excitation, while collection is still performed through free-space optics. The main advantages of this configuration are the decoupling of the excitation and collection pathways and the miniaturization of the excitation light path. In chip-based nanoscopy, the TIRF illumination is generated through the evanescent field of waveguides rather than with a conventional high-magnification, high-numerical-aperture (N.A.) TIRF lens. The evanescent field of a waveguide can be generated over extraordinarily large areas, as it is defined only by the waveguide geometry. This makes it possible to use any imaging objective lens to image arbitrarily large areas, in contrast to traditional TIRF-based dSTORM (12), which is limited by the field of view (FOV) of the TIRF lens.

Quantitative phase microscopy (QPM) is a label-free optical microscopy technique that facilitates sensitive measurement of both the refractive index and the thickness of biological specimens (14). Various QPM methods have been proposed for extracting the optical phase and dynamics of biological cells (15–17). These techniques offer high spatial and temporal phase sensitivity, high transverse resolution, and high imaging speed (15). The spatial and temporal phase sensitivity of a QPM system depends mainly on the illumination source and the type of interferometric geometry, respectively (17–19). For example, common-path QPM techniques offer better temporal phase sensitivity, which can be used to measure membrane fluctuations of cells (20). In addition, the spatial phase sensitivity of the system can be improved by using low-coherence light sources (halogen lamps and light-emitting diodes [LED]), but this requires phase-shifting techniques to utilize the whole FOV of the camera (21). Recent advances have extended QPM to superior resolution using structured illumination (22, 23) and to three-dimensional (3D) information by measuring the phase across multiple angles of illumination.
This technique facilitates tomography of various biological specimens such as red blood cells, HT29 cells, and bovine embryos (17, 24). Since the lateral resolution of the QPM technique depends on the N.A. of the objective lens, imaging beyond the diffraction limit (<200 nm) is still challenging and limits the study of subcellular structures. Therefore, it is useful to develop multimodal routes in which different microscopy methods are utilized to provide complementary information about biological specimens such as liver sinusoidal endothelial cells (LSEC).

Fig. 1 depicts LSEC, which contain large numbers of fenestrations. These transcellular nanopores vary in diameter from 50 to 300 nm, which is just below the diffraction limit of optical microscopy (25–27). Fenestrations are typically clustered in groups of 5 to 100 within areas called sieve plates (28). The porous morphology of LSEC acts as an ultrafilter between blood and the underlying hepatocytes, facilitating the bidirectional exchange of substrates between the interior of the liver and the blood. For example, smaller viruses and drugs can pass this barrier, while blood cells are retained within the sinusoidal vessel lumen (25, 29). The typical thickness of sieve plates is around 100 to 150 nm (30), so fenestrations are nanoscale sized in all three dimensions. As shown in Fig. 1, the fenestrations in sieve plates form openings through the entire LSEC cell body, and therefore TIRF illumination is ideally suited for imaging these structures. Determining the diameter and number of fenestrations, as well as the height of sieve plate regions, is important, as these can be affected by several drugs and conditions (31, 32). The loss of LSEC porous morphology, a process called defenestration, compromises the filtration properties of the liver, which may lead to atherosclerosis (33). Moreover, aging results in “pseudocapillarization,” whereby LSEC simultaneously lose fenestrations and become thicker (34) (Fig. 1). This is believed to be a main factor contributing to the age-related need to increase doses of drugs targeting hepatocytes (e.g., statins) that have to pass through the fenestrations (35). The number of fenestrations in vitro can be increased using actin-disrupting agents such as cytochalasin B (27). This treatment decreases the height of LSEC outside of the nuclear area, which contributes to the formation of new fenestrations (36).

Fig. 1. Top view (A) and cross-sectional view (B) of LSEC. LSEC have a unique morphology in which nanoscopic fenestrations are grouped in thin sieve plates. The diameter of fenestrations and the thickness of sieve plates are below the diffraction limit of conventional optical microscopes. The number and size of fenestrations, as well as LSEC thickness, can be affected by aging and in liver diseases. In vitro, the number of fenestrations can be increased using actin-disrupting agents such as cytochalasin B (27).

Here, we have developed a multimodal chip-based optical nanoscopy and highly sensitive QPM system to visualize 3D morphological changes in LSEC. The proposed system decouples the illumination path from the collection path and thus enables a straightforward integration of dSTORM and QPM. The nanoscale phase sensitivity of the QPM technique is utilized to extract the optical thickness of sieve plates. Moreover, chip-based dSTORM supports superresolution imaging down to 50 nm over an extraordinarily large FOV up to the millimeter scale (12).
Therefore, the integration of dSTORM and QPM allows superresolution imaging in the lateral dimension (with dSTORM) and nanometric sensitivity in the axial direction (with QPM). In this work, we demonstrate the capabilities of the system by imaging LSEC with both diffraction-limited TIRF microscopy and dSTORM. The fenestrations and sieve plates are observable with dSTORM, and the average optical thickness of the sieve plate region is obtained using diffraction-limited QPM. Furthermore, we investigated the change in the interior morphology of sieve plates by treating the cells with cytochalasin B (10 μg/mL). The limited lateral resolution of QPM was compensated for by dSTORM, which enabled us to localize the sieve plate regions containing subdiffraction-sized fenestrations. Therefore, in the cell membrane regions distal from the nucleus, the 3D morphology of LSEC can be reconstructed reliably using our multimodal approach. The integrated system offers simultaneous functional and quantitative imaging of cells over a large FOV, providing a compact imaging platform with potential for high-throughput morphological and nanometric imaging in specific biological applications.  相似文献
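To make the phase-to-thickness relation used in such QPM measurements concrete, the sketch below converts a measured phase delay into a physical thickness via φ = (2π/λ)·Δn·t. The 1.38 cell refractive index matches the assumption stated above, while the wavelength and medium index are illustrative placeholders rather than values reported for this system.

```python
# Minimal sketch (assumptions flagged): converting a measured QPM phase value to
# a thickness via  phi = (2*pi/lambda) * (n_cell - n_medium) * t.
# The study assumes n_cell = 1.38; the wavelength and medium index below are
# illustrative placeholders, not values taken from the paper.
import math

def phase_to_thickness(phi_rad, wavelength_m, n_cell=1.38, n_medium=1.33):
    """Return physical thickness (m) from a measured phase delay (rad)."""
    delta_n = n_cell - n_medium
    return phi_rad * wavelength_m / (2.0 * math.pi * delta_n)

# Example with the reported mean phase of the fenestrated area (161 mrad) and an
# assumed 600 nm illumination; the absolute number scales with both assumptions.
t = phase_to_thickness(0.161, 600e-9)
print(f"estimated thickness ≈ {t * 1e9:.0f} nm")
```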

18.
We present a lens-free optical tomographic microscope, which enables imaging a large volume of approximately 15 mm³ on a chip, with a spatial resolution of <1 μm × <1 μm × <3 μm in the x, y, and z dimensions, respectively. In this lens-free tomography modality, the sample is placed directly on a digital sensor array with, e.g., ≤4 mm distance to its active area. A partially coherent light source placed approximately 70 mm away from the sensor is employed to record lens-free in-line holograms of the sample from different viewing angles. At each illumination angle, multiple subpixel-shifted holograms are also recorded, which are digitally processed using a pixel superresolution technique to create a single high-resolution hologram of each angular projection of the object. These superresolved holograms are digitally reconstructed for an angular range of ±50°, which are then back-projected to compute tomograms of the sample. In order to minimize the artifacts due to the limited angular range of tilted illumination, a dual-axis tomography scheme is adopted, where the light source is rotated along two orthogonal axes. Tomographic imaging performance is quantified using microbeads of different dimensions, as well as by imaging wild-type Caenorhabditis elegans. Probing a large volume with a decent 3D spatial resolution, this lens-free optical tomography platform on a chip could provide a powerful tool for high-throughput imaging applications in, e.g., cell and developmental biology.  相似文献
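As a rough illustration of the pixel superresolution step described above, the sketch below fuses a set of subpixel-shifted low-resolution frames onto a finer grid by simple shift-and-add. The frame sizes, shift values, and upsampling factor are assumptions chosen for illustration; the published reconstruction pipeline is considerably more sophisticated.

```python
# Minimal sketch (not the published algorithm): a basic shift-and-add pixel
# super-resolution step that fuses subpixel-shifted low-resolution holograms
# onto a finer grid. Shifts and upsampling factor are illustrative.
import numpy as np

def shift_and_add(lr_frames, shifts_px, up=4):
    """lr_frames: list of (H, W) arrays; shifts_px: matching (dy, dx) subpixel shifts."""
    h, w = lr_frames[0].shape
    hr_sum = np.zeros((h * up, w * up))
    hr_cnt = np.zeros_like(hr_sum)
    yy, xx = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(lr_frames, shifts_px):
        # Map each low-res pixel centre to the nearest high-res grid point.
        hy = np.clip(np.round((yy + dy) * up).astype(int), 0, h * up - 1)
        hx = np.clip(np.round((xx + dx) * up).astype(int), 0, w * up - 1)
        np.add.at(hr_sum, (hy, hx), frame)
        np.add.at(hr_cnt, (hy, hx), 1.0)
    hr_cnt[hr_cnt == 0] = 1.0
    return hr_sum / hr_cnt   # unfilled grid points stay zero; a real pipeline interpolates

lr = [np.random.rand(64, 64) for _ in range(16)]
shifts = [(0.25 * i, 0.25 * j) for i in range(4) for j in range(4)]
hr = shift_and_add(lr, shifts)
print(hr.shape)              # (256, 256) super-resolved hologram for one illumination angle
```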

19.
We develop a high-throughput technique to relate positions of individual cells to their three-dimensional (3D) imaging features with single-cell resolution. The technique is particularly suitable for nonadherent cells, where existing spatial biology methodologies relating cell properties to their positions in a solid tissue do not apply. Our design consists of two parts, as follows: recording 3D cell images at high throughput (500 to 1,000 cells/s) using a custom 3D imaging flow cytometer (3D-IFC) and dispensing cells in a first-in–first-out (FIFO) manner using a robotic cell placement platform (CPP). To prevent errors due to violations of the FIFO principle, we invented a method that uses marker beads and DNA sequencing software to detect errors. Experiments with human cancer cell lines demonstrate the feasibility of mapping 3D side-scattering and fluorescence images, as well as two-dimensional (2D) transmission images, of cells to their locations on the membrane filter for around 100,000 cells in less than 10 min. While the current work uses our specially designed 3D imaging flow cytometer to produce 3D cell images, our methodology can support other imaging modalities. The technology and method form a bridge between single-cell image analysis and single-cell molecular analysis.

The isolation and analysis of single cells from a heterogeneous cell population have impacted biomedical research profoundly (1, 2). Single-cell analysis can be broadly divided into two areas, namely, single-cell genomics and single-cell high-content microscopy (3–5). The former deciphers genomic and phenotypic information by detecting gene expression, mutations, and genetic aberrations in individual cells (6–9). The latter provides high-resolution spatial and morphological information and reveals cell-cell interactions (10). While the toolsets for both approaches have advanced significantly, one remaining technology gap is the lack of effective tools that can connect the two types of single-cell information. That is, no effective tool directly relates the morphological properties to the genomic properties of the very same cell. The emerging field of spatial biology aims to solve this issue via DNA barcoding technologies (11–15). However, current methods such as the 10× Visium platform are unable to resolve single cells (16, 17). Microlaser dissectors coupled with a high-resolution microscope can provide single-cell resolution, but the throughput is far too low to be incorporated into the single-cell workflow in most practical applications (18–20). Above all, current spatial biology techniques cannot address the problem for nonadherent cells such as lymphocytes, for which the connection between the phenotype, morphology, and immune response of the same T cell can be particularly insightful.

Recent advances in image-guided cell sorters, or image-guided fluorescence-activated cell sorting (FACS) systems, have made good strides toward this goal (21–24). By isolating cells with the same image features based on a predefined gating criterion, one can perform downstream genomic analysis on cells with similar imaging characteristics (21, 22, 24, 25). However, today’s image-guided FACS produces only two-dimensional (2D) images, lacking the high information content of three-dimensional (3D) imaging modalities such as confocal and light-sheet microscopes. Few imaging flow cytometers can produce high-content 3D images of single cells (26–29), and none of them, to our knowledge, is able to isolate cells based on 3D image features, owing to the technological incompatibility between 3D imaging optics and cell sorting devices and to the great challenge of the real-time 3D image processing required for 3D image-guided cell sorting.

To address the above technology gap, we propose and demonstrate an approach that bypasses the requirements for real-time 3D image processing and cell sorting. Our approach combines two key hardware components, namely, a 3D imaging flow cytometer (3D-IFC) and a cell placement robot. We introduced suspended cells into the 3D-IFC to record multiparameter 3D cell images at a throughput as high as 1,000 cells/s. The cells exiting the 3D-IFC system were directed to a cell dispensing robot that placed them onto a transparent filter plate in such a manner that the cells on the filter plate follow the same order in which they were imaged by the 3D-IFC. In other words, there is a one-to-one correspondence between the 3D cell image recorded by the 3D-IFC and the position of the cell on the filter plate.
However, two obstacles must be overcome to realize this concept: 1) matching the sequence of hundreds of thousands of cell images from the 3D-IFC to the sequence of (ideally) the same number of cells deposited on the plate and 2) detecting any errors between the two long sequences to prevent error propagation and accumulation.

To address the first challenge, we introduced three types of marker beads with distinctive features that can be recognized by an off-the-shelf imager on the cell placement robot, allowing their sequence to be matched to the readout from the 3D-IFC. By assigning each marker bead a symbol (A, T, C) used in DNA sequencing, we were able to use existing bioinformatics toolboxes to sequence and match the two long sequences from the 3D-IFC and the cell plate. The marker beads serve analogously to introns, and the cells of interest between the marker beads can be regarded as exons. Essentially, we used the marker bead sequence (i.e., the introns) to align the two long sequences and then interrogated the regions between the marker beads to analyze the cells based on their 3D images. In this approach, we can fully leverage established bioinformatics tools to support data streams of essentially any length. To address the second challenge, we developed a robust and efficient error detection methodology to identify the two major types of errors that can occur in our operating scenario: deletion errors and misplacement errors. The detailed algorithm and its results are presented in Results.

Overall, we have developed an approach that bridges the technology gap between single-cell molecular analyses and single-cell imaging of nonadherent cells, in which imaged cells are available not only for genomic analysis but also for other applications, including the formation of single cell-derived microcolonies, drug response studies, and metabolic and cell secretion analyses. It is worth mentioning that although we used the 3D-IFC as a high-throughput imaging tool to acquire cell images here, the methodology is general and can readily be applied to other imaging devices, such as 2D imaging cytometers and optical microscopes, that can capture images of moving objects and be interfaced with a dispensing system following the first-in–first-out (FIFO) rule. Compared to image-guided FACS systems, our design keeps all cells entering the system on a cell-friendly plate to support various downstream analyses and cell processing, and it allows researchers to retrieve any cell in the system at different times and for different purposes. Finally, we image cells before placement rather than placing cells before imaging because the former is compatible with high-throughput flow cytometers, the mainstay of nonadherent single-cell analysis. Our work demonstrates a workflow and technology that enrich the fields of single-cell research and spatial biology.  相似文献
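As a toy illustration of the intron/exon-style matching described above, the sketch below encodes marker beads as symbols and the cells between them as a filler character, then uses a generic sequence aligner (Python's difflib, standing in for the DNA sequencing tools the authors used) to flag deletion and misplacement errors between the imaged order and the plate order. The sequences themselves are made up for the example.

```python
# Minimal sketch (a generic stand-in for the bioinformatics alignment the authors
# used): marker beads are encoded as symbols and aligned between the order recorded
# by the imaging cytometer and the order observed on the plate, so that gaps reveal
# deletion errors and mismatches reveal misplacements. Sequences are illustrative.
import difflib

imaged_order = list("AxxTxxxCxxAxxT")   # beads A/T/C with cells ('x') in between
plate_order  = list("AxxTxxCxxAxxT")    # one cell missing after the second bead

matcher = difflib.SequenceMatcher(a=imaged_order, b=plate_order, autojunk=False)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag == "delete":
        print(f"deletion: items {i1}:{i2} imaged but never deposited")
    elif tag == "replace":
        print(f"misplacement: imaged {i1}:{i2} does not match plate {j1}:{j2}")
    elif tag == "insert":
        print(f"unexpected extra item on plate at {j1}:{j2}")
# Between matched marker beads, the 'x' runs can then be indexed one-to-one,
# linking each 3D image to a physical plate position.
```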

20.
Four-dimensional (4D) fluorescence microscopy, which records 3D image information as a function of time, provides an unbiased way of tracking the dynamic behavior of subcellular components in living samples and capturing key events in complex macromolecular processes. Unfortunately, the combination of phototoxicity and photobleaching can severely limit the density or duration of sampling, thereby limiting the biological information that can be obtained. Although widefield microscopy provides a very light-efficient way of imaging, obtaining high-quality reconstructions requires deconvolution to remove optical aberrations. Unfortunately, most deconvolution methods perform very poorly at low signal-to-noise ratios, thereby requiring moderate photon doses to obtain acceptable resolution. We present a unique deconvolution method that combines an entropy-based regularization function with kernels that exploit general spatial characteristics of the fluorescence image to push the required dose to extremely low levels, resulting in an enabling technology for high-resolution in vivo biological imaging.

The study of dynamic processes is an important facet of cell biology research. Fluorescently tagged proteins combined with 4D fluorescence microscopy provide a powerful framework for studying the dynamics of molecular processes in vivo. One of the most crucial challenges in 4D fluorescence microscopy is to ensure that normal biological function is not significantly perturbed by the high doses of illumination (phototoxicity) incurred during 4D imaging. Recent work indicates that the maximal photon dose that avoids biological perturbation is 100- to 1,000-fold lower than that typically used for in vivo imaging (1). Dose limitations are even more challenging given the desire to sample densely in time or to record over extended periods, especially when analyzing multiple subcellular components via multiwavelength imaging.

Under normal imaging conditions, widefield microscopy combined with image restoration using deconvolution provides an excellent modality for multiwavelength 4D imaging, as it makes very efficient use of the illuminating photons. However, its effectiveness, in particular its ability to resolve subcellular detail sufficiently in the presence of noise, is limited by the performance of the deconvolution method. Such limitations can seriously degrade image quality at the low signal levels required for unperturbed in vivo imaging. The noise behavior of a deconvolution algorithm is determined by the efficiency of its noise-stabilization term, known as the regularization functional. In particular, the functional’s ability to discriminate noise-related high frequencies from weak high frequencies in the signal ultimately determines the final resolution of the deconvolution. Currently used noise-stabilization techniques are largely based on ad hoc formulations and perform poorly, leading to a serious loss of resolution at the low signal-to-noise ratios required to maintain the illumination at safe levels during multiwavelength 4D imaging.
Surmounting this problem would dramatically increase the amount of biological data that could be safely acquired, paving the way for a much deeper understanding of the dynamics of biological processes.

We propose a unique deconvolution method that uses a regularization functional constructed with an entropy-based formalism tailored to exploit general spatial characteristics of fluorescence images, combined with the more robust use of second-order derivatives in the regularization functional. This entropy-based regularization suppresses large amounts of noise while preserving the essential details. Hence, the method brings out details that are nearly invisible in the raw, extremely noisy images and yields substantially improved resolution. Using several datasets of fixed samples recorded at high and low doses, we quantitatively study the performance of our method using Fourier shell correlation and demonstrate that entropy-regularized deconvolution (ER-Decon) reveals considerably more detail of the underlying structure than existing methods.  相似文献
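For orientation, the display below gives the generic form of the regularized inverse problem that deconvolution methods of this kind minimize. The specific entropy functional S and the way the second-order derivatives enter it are defined in the original paper, so this expression is only a schematic, not the authors' exact formulation.

```latex
% Schematic objective for regularized deconvolution (not the exact ER-Decon functional).
\begin{equation}
  \hat{f} \;=\; \arg\min_{f \ge 0}\;
  \underbrace{\lVert H f - g \rVert^{2}}_{\text{data fidelity}}
  \;+\;
  \lambda\, S\!\bigl(\nabla^{2} f\bigr)
\end{equation}
% g: recorded low-dose image stack;  H: blurring operator given by the point-spread
% function;  \nabla^{2} f: second-order spatial derivatives of the estimate;
% S(\cdot): entropy-based regularization functional;  \lambda: weight balancing
% noise suppression against resolution.
```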

