Similar Documents
20 similar documents found (search time: 31 ms)
1.
A technique using Linnik-based optical coherence microscopy (OCM), with built-in fluorescence microscopy (FM), is demonstrated here to describe cellular-level morphology for fresh porcine and biobank tissue specimens. The proposed method utilizes color-coding to generate digital pseudo-H&E (p-H&E) images. Using the same camera, colocalized FM images are merged with corresponding morphological OCM images using a 24-bit RGB composition process to generate position-matched p-H&E images. From receipt of a dissected fresh tissue piece to generation of stitched images, the total processing time is <15 min for a 1-cm2 specimen, which is on average two times faster than the frozen-section H&E process for fatty or water-rich fresh tissue specimens. This technique was successfully used to scan human and animal fresh tissue pieces, demonstrating its applicability for both biobank and veterinary purposes. We provide an in-depth comparison between p-H&E and human frozen-section H&E images acquired from the same metastatic sentinel lymph node slice (∼10 µm thick), and highlight the differences, such as the elastic fibers of a small blood vessel and the cytoplasm of tumor cells. This optical sectioning technique provides histopathologists with a convenient assessment method that outputs large-field H&E-like images of fresh tissue pieces without requiring any physical embedding.
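The color-coding step above can be sketched in a few lines. This is a minimal illustration, not the authors' exact 24-bit RGB composition, which is not specified here: it assumes a Beer-Lambert-style virtual stain in which the FM (nuclear) channel absorbs like hematoxylin and the OCM (scattering) channel like eosin; the color vectors and weights are hypothetical.

```python
import numpy as np

# Assumed optical-density color vectors for the two virtual stains.
HEMATOXYLIN = np.array([0.65, 0.70, 0.29])
EOSIN       = np.array([0.07, 0.99, 0.11])

def pseudo_he(fm, ocm, k_h=1.8, k_e=1.0):
    """Merge normalized FM and OCM channels (values in [0, 1]) into an
    8-bit RGB pseudo-H&E image via a Beer-Lambert transmission model."""
    od = k_h * fm[..., None] * HEMATOXYLIN + k_e * ocm[..., None] * EOSIN
    rgb = np.exp(-od)  # transmitted light: more "stain" -> darker pixel
    return (255 * rgb).astype(np.uint8)

fm  = np.zeros((4, 4)); fm[1, 1] = 1.0   # one bright "nucleus" in the FM channel
ocm = np.full((4, 4), 0.3)               # uniform scattering background
img = pseudo_he(fm, ocm)                 # nucleus renders dark blue-purple
```

The nuclear pixel ends up dark and blue-shifted, mimicking hematoxylin, while the background takes the pink eosin tint.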

2.
In the present study, we propose a novel case-based similar image retrieval (SIR) method for hematoxylin and eosin (H&E) stained histopathological images of malignant lymphoma. When a whole slide image (WSI) is used as an input query, it is desirable to be able to retrieve similar cases by focusing on image patches in pathologically important regions such as tumor cells. To address this problem, we employ attention-based multiple instance learning, which enables us to focus on tumor-specific regions when the similarity between cases is computed. Moreover, we employ contrastive distance metric learning to incorporate immunohistochemical (IHC) staining patterns as useful supervised information for defining appropriate similarity between heterogeneous malignant lymphoma cases. In the experiment with 249 malignant lymphoma patients, we confirmed that the proposed method exhibited higher evaluation measures than the baseline case-based SIR methods. Furthermore, the subjective evaluation by pathologists revealed that our similarity measure using IHC staining patterns is appropriate for representing the similarity of H&E stained tissue images for malignant lymphoma.
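The attention-based multiple instance learning pooling mentioned above can be sketched as follows: patch embeddings are weighted by learned attention scores and summed into one case-level embedding. This is a minimal NumPy version assuming a tanh attention head; the dimensions and random weights are illustrative, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(H, V, w):
    """Pool patch embeddings H (n_patches, d) into one (d,) embedding.
    V (d, k) and w (k,) are the attention parameters."""
    scores = np.tanh(H @ V) @ w          # one scalar score per patch
    a = np.exp(scores - scores.max())
    a /= a.sum()                         # softmax -> attention weights
    return a @ H, a                      # weighted sum, plus the weights

H = rng.normal(size=(5, 8))              # 5 patches, 8-dim embeddings
V = rng.normal(size=(8, 4))
w = rng.normal(size=4)
z, a = attention_pool(H, V, w)           # z: case-level embedding
```

Case-to-case similarity is then computed between the pooled embeddings `z`, so patches with high attention (e.g., tumor regions) dominate the comparison.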

3.
Histopathological examination of tissue sections is the gold standard for disease diagnosis. However, the conventional histopathology workflow requires lengthy and laborious sample preparation to obtain thin tissue slices, causing about a one-week delay to generate an accurate diagnostic report. Recently, microscopy with ultraviolet surface excitation (MUSE), a rapid and slide-free imaging technique, has been developed to image fresh and thick tissues with specific molecular contrast. Here, we propose to apply an unsupervised generative adversarial network framework to translate colorful MUSE images into Deep-MUSE images that closely resemble hematoxylin and eosin staining, allowing easy adoption by pathologists. By eliminating the need for all sample processing steps (except staining), a MUSE image with subcellular resolution for a typical brain biopsy (5 mm × 5 mm) can be acquired in 5 minutes, which is further translated into a Deep-MUSE image in 40 seconds, dramatically simplifying the standard histopathology workflow and providing histological images intraoperatively.
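Unpaired translation of this kind (CycleGAN-style) hinges on a cycle-consistency loss: mapping MUSE to H&E-like and back should recover the input. A toy sketch of that loss, with trivial linear "generators" standing in for the actual networks:

```python
import numpy as np

def cycle_loss(x, y, G, F):
    """L1 cycle-consistency: F(G(x)) should equal x, G(F(y)) should equal y.
    G: source -> target generator, F: target -> source generator."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

G = lambda x: 2.0 * x + 1.0      # toy forward "generator"
F = lambda y: (y - 1.0) / 2.0    # its exact inverse -> near-zero cycle loss
x = np.linspace(0, 1, 16)        # stand-in "MUSE" intensities
y = np.linspace(1, 3, 16)        # stand-in "H&E-like" intensities
loss = cycle_loss(x, y, G, F)
```

In training, this loss is minimized jointly with adversarial losses on both domains, which is what lets the framework learn without paired MUSE/H&E images.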

4.
Photoacoustic tomography (PAT) is an emerging biomedical imaging technology that can achieve high-contrast imaging at acoustic penetration depths. Recently, deep learning (DL) methods have also been successfully applied to PAT to improve image reconstruction quality. However, current DL-based PAT methods are implemented with a supervised learning strategy, and their imaging performance depends on the available ground-truth data. To overcome this limitation, this work introduces a new image-domain transformation method based on a cyclic generative adversarial network (CycleGAN), termed PA-GAN, which removes artifacts in PAT images caused by the use of limited-view measurement data in an unsupervised manner. A series of data from phantom and in vivo experiments are used to evaluate the performance of the proposed PA-GAN. The experimental results show that PA-GAN performs well in removing artifacts from photoacoustic tomographic images. In particular, when dealing with extremely sparse measurement data (e.g., 8 projections in circle phantom experiments), higher imaging performance is achieved by the proposed unsupervised PA-GAN, with an improvement of ∼14% in structural similarity (SSIM) and ∼66% in peak signal-to-noise ratio (PSNR), compared with the supervised-learning U-Net method. With an increasing number of projections (e.g., 128 projections), U-Net, and especially FD U-Net, shows a slight improvement in artifact removal capability in terms of SSIM and PSNR. Furthermore, the computation time of PA-GAN and U-Net is similar (∼60 ms/frame) once the network is trained. More importantly, PA-GAN is more flexible than U-Net in that it allows the model to be effectively trained with unpaired data. As a result, PA-GAN makes it possible to implement PAT with higher flexibility without compromising imaging performance.
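For reference, one of the two reported metrics, PSNR, reduces to a few lines; this sketch assumes images normalized to [0, 1] (data_range = 1).

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 0.1            # constant error of 0.1 -> MSE = 0.01
val = psnr(ref, noisy)       # 10 * log10(1 / 0.01) = 20 dB
```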

5.
Translating images generated by label-free microscopy imaging, such as Coherent Anti-Stokes Raman Scattering (CARS), into more familiar clinical presentations of histopathological images will help the adoption of real-time, spectrally resolved label-free imaging in clinical diagnosis. Generative adversarial networks (GANs) have made great progress in image generation and translation, but have been criticized for lacking precision. In particular, GANs have often misinterpreted image information and identified incorrect content categories during image translation of microscopy scans. To alleviate this problem, we developed a new Pix2pix GAN model that simultaneously learns to classify the contents of images from a segmentation dataset during image translation training. Our model integrates UNet+ with seg-cGAN, a conditional generative adversarial network with partial regularization of segmentation. Technical innovations of the UNet+/seg-cGAN model include: (1) replacing UNet with UNet+ as the Pix2pix cGAN’s generator to enhance pattern extraction and richness of the gradient, and (2) applying a partial regularization strategy to train part of the generator network as a segmentation sub-model on a separate segmentation dataset, thus enabling the model to identify correct content categories during image translation. The quality of histopathology-like images generated from label-free CARS images has been improved significantly.

6.
Quantitative phase imaging with off-axis digital holography in a microscopic configuration provides insight into cells’ intracellular content and morphology. This imaging is conventionally achieved by numerical reconstruction of the recorded hologram, which requires precise setting of the reconstruction parameters, including the reconstruction distance, a proper phase-unwrapping algorithm, and the wave-vector components. This paper shows that deep learning can perform the complex light-propagation task independent of the reconstruction parameters. We also show that the superimposed twin-image elimination technique is not required to retrieve the quantitative phase image. The hologram at the single-cell level is fed into a trained image generator (part of a conditional generative adversarial network model), which produces the phase image. The model’s generalization is demonstrated by training it with holograms of size 512×512 pixels, and the resulting quantitative analysis is shown.

7.
Second harmonic generation (SHG) microscopy has emerged over the past two decades as a powerful tool for tissue characterization and diagnostics. Its main applications in medicine are related to mapping the collagen architecture of in vivo, ex vivo, and fixed tissues based on endogenous contrast. In this work we present how H&E staining of excised and fixed tissues influences the extraction and use of image parameters specific to polarization-resolved SHG (PSHG) microscopy, which are known to provide quantitative information on collagen structure and organization. We employ a theoretical collagen model for fitting the experimental PSHG datasets to obtain the second-order susceptibility tensor element ratios and the fitting efficiency. Furthermore, the second harmonic intensity acquired under circular polarization is investigated. The evolution of these parameters in both forward- and backward-collected SHG is computed for both H&E-stained and unstained tissue sections. Consistent differences are observed between the two cases in terms of the fitting efficiency and the second harmonic intensity. This suggests that similar quantitative analysis workflows applied to PSHG images collected on stained and unstained tissues could yield different results, and hence affect diagnostic accuracy.

8.
Because of the rapid spread and wide range of the clinical manifestations of the coronavirus disease 2019 (COVID-19), fast and accurate estimation of the disease progression and mortality is vital for the management of the patients. Currently available image-based prognostic predictors for patients with COVID-19 are largely limited to semi-automated schemes with manually designed features and supervised learning, and the survival analysis is largely limited to logistic regression. We developed a weakly unsupervised conditional generative adversarial network, called pix2surv, which can be trained to estimate the time-to-event information for survival analysis directly from the chest computed tomography (CT) images of a patient. We show that the performance of pix2surv based on CT images significantly outperforms those of existing laboratory tests and image-based visual and quantitative predictors in estimating the disease progression and mortality of COVID-19 patients. Thus, pix2surv is a promising approach for performing image-based prognostic predictions.

9.
High-resolution (HR), isotropic cardiac Magnetic Resonance (MR) cine imaging is challenging since it requires long acquisition and patient breath-hold times. Instead, the 2D balanced steady-state free precession (SSFP) sequence is widely used in clinical routine. However, it produces highly anisotropic image stacks with large through-plane spacing that can hinder subsequent image analysis. To resolve this, we propose a novel, robust adversarial-learning super-resolution (SR) algorithm based on conditional generative adversarial networks (GANs) that incorporates a state-of-the-art optical flow component to generate an auxiliary image to guide image synthesis. The approach is designed for real-world clinical scenarios, requires neither multiple low-resolution (LR) scans with multiple views nor the corresponding HR scans, and is trained in an end-to-end unsupervised transfer-learning fashion. The designed framework effectively incorporates visual properties and relevant structures of the input images and can synthesise 3D isotropic, anatomically plausible cardiac MR images consistent with the acquired slices. Experimental results show that the proposed SR method outperforms several state-of-the-art methods both qualitatively and quantitatively. We show that subsequent image analyses, including ventricle segmentation, cardiac quantification, and non-rigid registration, can benefit from the super-resolved, isotropic cardiac MR images to produce more accurate quantitative results without increasing the acquisition time. The average Dice similarity coefficients (DSC) for the left ventricular (LV) cavity and myocardium are 0.95 and 0.81, respectively, between real and synthesised slice segmentations. For non-rigid registration and motion tracking through the cardiac cycle, the proposed method improves the average DSC from 0.75 to 0.86, compared with the original-resolution images.
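The segmentation overlap is reported as the Dice similarity coefficient (DSC); a minimal binary-mask implementation for reference:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

a = np.zeros((4, 4), dtype=int); a[:2, :] = 1   # 8 foreground pixels
b = np.zeros((4, 4), dtype=int); b[:3, :] = 1   # 12 pixels, overlap of 8
d = dice(a, b)                                   # 2*8 / (8+12) = 0.8
```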

10.
Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data. Since conditional models are trained with knowledge of the imaging operator, they can show poor generalization across variable operators. Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator. Recent diffusion models are particularly promising given their high sample fidelity. Nevertheless, inference with a static image prior can be suboptimal. Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts. AdaDiff leverages an efficient diffusion prior trained via adversarial mapping over large reverse diffusion steps. A two-phase reconstruction is executed after training: a rapid-diffusion phase that produces an initial reconstruction with the trained prior, and an adaptation phase that further refines the result by updating the prior to minimize the data-consistency loss. Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff outperforms competing conditional and unconditional methods under domain shifts, and achieves superior or on-par within-domain performance.
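The data-consistency constraint that the adaptation phase minimizes can be illustrated with a hard projection step: the sampled k-space frequencies of the current estimate are replaced with the measured values. This is a generic sketch of MRI data consistency, not AdaDiff's learned prior update, and the sampling mask is illustrative.

```python
import numpy as np

def data_consistency(x, y_kspace, mask):
    """x: current image estimate; y_kspace: measured (masked) k-space;
    mask: True where a k-space sample was acquired."""
    k = np.fft.fft2(x)
    k = np.where(mask, y_kspace, k)      # keep measurements where sampled
    return np.fft.ifft2(k).real

rng = np.random.default_rng(1)
truth = rng.normal(size=(8, 8))                       # ground-truth image
mask = np.zeros((8, 8), dtype=bool); mask[::2, :] = True  # every other k-line
y = np.fft.fft2(truth) * mask                         # undersampled acquisition
x0 = np.zeros((8, 8))                                 # trivial initial estimate
x1 = data_consistency(x0, y, mask)                    # now matches y exactly
```

After the projection, the estimate agrees with every acquired measurement; a reconstruction method only has freedom over the unsampled frequencies.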

11.
Background: Mismatch repair deficiency (dMMR) status induced by MLH1 protein deficiency plays a pivotal role in therapeutic decision-making for cancer patients. Appropriate quality control (QC) materials are necessary for monitoring the accuracy of MLH1 protein deficiency assays used in clinical laboratories. Methods: CRISPR/Cas9 technology was used to edit the MLH1 gene of GM12878Cas9 cells to establish MLH1 protein-deficient cell lines. The positive cell lines were screened and validated by Sanger sequencing, Western blot (WB), and next-generation sequencing (NGS) and were then used to prepare formalin-fixed, paraffin-embedded (FFPE) samples through xenografting. These FFPE samples were tested by hematoxylin and eosin (H&E) staining and immunohistochemistry (IHC) for suitability as novel QC materials for MLH1 protein deficiency testing. Results: We successfully cultured 358 monoclonal cells, with a survival rate of 37.3% (358/960) of the sorted monoclonal cells. Through Sanger sequencing, cell lines with MLH1 gene mutations were identified. Subsequently, two cell lines with MLH1 protein deficiency were identified by WB and named GM12878Cas9_6 and GM12878Cas9_10. The NGS results further confirmed that the MLH1 gene mutations in these two cell lines would create stop codons and terminate expression of the MLH1 protein. The H&E staining and IHC results also verified the deficiency of the MLH1 protein, and FFPE samples from xenografts proved their similarity and consistency with clinical samples. Conclusions: We successfully established MLH1 protein-deficient cell lines. Followed by xenografting, we developed novel FFPE QC materials with the advantages of homogeneity, sustainability, and typical histological structure, suitable for the standardization of clinical IHC methods.

12.
Digital holography can provide quantitative phase images related to the morphology and content of biological samples. After numerical image reconstruction, the phase values are limited to between −π and π; thus, discontinuities may occur due to the modulo-2π operation. We propose a new deep learning model that can automatically reconstruct unwrapped, focused phase images by combining digital holography with a Pix2Pix generative adversarial network (GAN) for image-to-image translation. Compared with numerical phase-unwrapping methods, the proposed GAN model overcomes the difficulty of accurate phase unwrapping due to abrupt phase changes and performs phase unwrapping twice as fast. We show that the proposed model generalizes well to different types of cell images and performs better than recent U-net models. The proposed method can be useful for observing the morphology and movement of biological cells in real-time applications.
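The modulo-2π wrapping described above, and the classical numerical baseline the GAN is compared against, can be demonstrated in a few lines: wherever successive samples jump by more than π, the right multiple of 2π is added back. `numpy.unwrap` implements exactly this 1-D rule.

```python
import numpy as np

true_phase = np.linspace(0, 6 * np.pi, 100)      # phase ramps through 3 cycles
wrapped = np.angle(np.exp(1j * true_phase))      # values folded into (-pi, pi]
recovered = np.unwrap(wrapped)                   # restore continuity
err = np.max(np.abs(recovered - true_phase))     # exact up to float rounding
```

This succeeds because adjacent samples differ by less than π; real holographic phase maps violate that assumption near abrupt changes, which is the failure mode the learned model targets.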

13.
Structured illumination microscopy (SIM) reconstructs optically sectioned images of a sample from multiple spatially patterned wide-field images, whereas traditional single non-patterned wide-field images are cheaper to obtain since they do not require the generation of specialized illumination patterns. In this work, we translated wide-field fluorescence microscopy images into optically sectioned SIM images with a Pix2Pix conditional generative adversarial network (cGAN). Our model demonstrates 2D cross-modality image translation from wide-field images to optical sections, and further shows potential to recover 3D optically sectioned volumes from wide-field image stacks. The utility of the model was tested on a variety of samples, including fluorescent beads and fresh human tissue samples.

14.
15.
Cervical cytopathology image refocusing is important for addressing defocus blur in whole slide images. However, most current deblurring methods are developed for global motion blur rather than local defocus blur, and need extensive supervised re-training for unseen domains. In this paper, we propose a refocusing method for cervical cytopathology images via multi-scale attention features and domain normalization. Our method consists of a domain normalization net (DNN) and a refocusing net (RFN). In DNN, we adopt a registration-free cycle scheme to normalize unseen unsupervised domains into the seen supervised domain, and introduce a gray mask loss and a hue-encoding mask loss to ensure the consistency of cell structure and basic hue. In RFN, combining the locality and sparseness characteristics of defocus blur, we design a multi-scale refocusing network to enhance the reconstruction of cell nucleus and cytoplasm, and introduce a defocus intensity estimation mask to strengthen the reconstruction of local blur. We integrate a hybrid learning strategy over the supervised and unsupervised domains to enable RFN to refocus well on the unsupervised domain. We build a cervical cytopathology image refocusing dataset and conduct extensive experiments to demonstrate the superiority of our method over state-of-the-art deblurring models. Furthermore, we show that the refocused images help improve the performance of subsequent high-level analysis tasks. We release the refocusing dataset and source code to promote the development of this field.

16.
Remote Sensing Letters, 2013, 4(10): 745–754
Object recognition has been one of the hottest issues in the field of remote sensing image analysis. In this letter, a new pixel-wise learning method based on deep belief networks (DBNs) for object recognition is proposed. The method is divided into two stages, the unsupervised pre-training stage and the supervised fine-tuning stage. Given a training set of images, a pixel-wise unsupervised feature learning algorithm is utilized to train a mixed structural sparse restricted Boltzmann machine (RBM). After that, the outputs of this RBM are put into the next RBM as inputs. By stacking several layers of RBM, the deep generative model of DBNs is built. At the fine-tuning stage, a supervised layer is attached to the top of the DBN and labels of the data are put into this layer. The whole network is then trained using the back-propagation (BP) algorithm with sparse penalty. Finally, the deep model generates good joint distribution of images and their labels. Comparative experiments are conducted on our dataset acquired by QuickBird with 60 cm resolution and the recognition results demonstrate the accuracy and efficiency of our proposed method.
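The RBM pre-training step that gets stacked into the DBN can be sketched as one contrastive-divergence (CD-1) weight update. This is a bare-bones illustration with biases omitted and without the structural sparsity term the letter describes; sizes and the Bernoulli data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, lr=0.1):
    """One CD-1 update for an RBM with weights W (n_visible, n_hidden),
    given a batch of binary visible vectors v0 (n_samples, n_visible)."""
    h0 = sigmoid(v0 @ W)                        # hidden probabilities (positive phase)
    h_sample = (rng.random(h0.shape) < h0) * 1.0
    v1 = sigmoid(h_sample @ W.T)                # one-step reconstruction
    h1 = sigmoid(v1 @ W)                        # negative phase
    return W + lr * (v0.T @ h0 - v1.T @ h1) / len(v0)

W = rng.normal(scale=0.1, size=(6, 3))
v = (rng.random((20, 6)) < 0.5) * 1.0           # toy binary "pixel" data
W_new = cd1_step(W, v)
```

Stacking works by training one such RBM, then feeding its hidden activations `sigmoid(v @ W)` as the visible data for the next layer's RBM.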

17.
Synthetic medical image generation has huge potential for improving healthcare through many applications, from data augmentation for training machine learning systems to preserving patient privacy. Conditional Generative Adversarial Networks (cGANs) use a conditioning factor to generate images and have shown great success in recent years. Intuitively, the information in an image can be divided into two parts: 1) content, which is presented through the conditioning vector, and 2) style, which is the undiscovered information missing from the conditioning vector. Current practices in using cGANs for medical image generation use only a single variable for image generation (i.e., content) and therefore do not provide much flexibility or control over the generated image. In this work we propose DRAI, a dual adversarial inference framework with augmented disentanglement constraints, to learn disentangled representations of style and content from the image itself, and to use this information to impose control over the generation process. In this framework, style is learned in a fully unsupervised manner, while content is learned through both supervised learning (using the conditioning vector) and unsupervised learning (with the inference mechanism). We apply two novel regularization steps to ensure content-style disentanglement. First, we minimize the shared information between content and style by introducing a novel application of the gradient reverse layer (GRL); second, we introduce a self-supervised regularization method to further separate information in the content and style variables. For evaluation, we consider two types of baselines: single latent variable models that infer a single variable, and double latent variable models that infer two variables (style and content).
We conduct extensive qualitative and quantitative assessments on two publicly available medical imaging datasets (LIDC and HAM10000) and test for conditional image generation, image retrieval, and style-content disentanglement. We show that, in general, two-latent-variable models achieve better performance and give more control over the generated image. We also show that our proposed model (DRAI) achieves the best disentanglement score and has the best overall performance.

18.
Retinopathy of prematurity (ROP) is an eye disease that affects prematurely born infants with low birth weight and is one of the main causes of childhood blindness globally. In recent years, there have been many studies on automatic ROP diagnosis, mainly focusing on ROP screening such as “Yes/No ROP” or “Mild/Severe ROP” and presence/absence detection of “plus disease”. Due to the lack of corresponding high-quality annotations, there are few studies on ROP zoning, which is one of the important indicators for evaluating the severity of ROP. Moreover, how to effectively utilize unlabeled data to train a model is also worth studying. Therefore, we propose a novel semi-supervised feature calibration adversarial learning network (SSFC-ALN) for 3-level ROP zoning, which consists of two subnetworks: a generative network and a compound network. The generative network is a U-shaped network for producing reconstructed images, and its output is taken as one of the inputs of the compound network. The compound network is obtained by extending a common classification network with a discriminator, introducing an adversarial mechanism into the whole training process. The definition of ROP indicates where and what to focus on in fundus images, which is similar to an attention mechanism. Therefore, to further improve classification performance, a new attention-mechanism-based feature calibration module (FCM) is designed and embedded in the compound network. The proposed method was evaluated on 1013 fundus images of 108 patients with a 3-fold cross-validation strategy. Compared with other state-of-the-art classification methods, the proposed method achieves high classification performance.

19.
Diagnostic errors in an accident and emergency department
Objectives—To describe the diagnostic errors occurring in a busy district general hospital accident and emergency (A&E) department over four years.

Method—All diagnostic errors discovered by or notified to one A&E consultant were noted on a computerised database.

Results—953 diagnostic errors were noted in 934 patients. Altogether 79.7% were missed fractures. The most common reasons for error were misreading radiographs (77.8%) and failure to perform radiography (13.4%). The majority of errors were made by senior house officers (SHOs). Twenty-two diagnostic errors resulted in complaints and legal actions, and three patients in whom a diagnostic error was made later died.

Conclusions—Good clinical skills are essential. Most abnormalities missed on radiographs were not difficult to diagnose. Junior doctors in A&E should receive specific training and be tested on their ability to interpret radiographs correctly before being allowed to work unsupervised.


20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号