Similar Documents
20 similar documents found (search time: 15 ms)
3.
Recently, deep learning-based denoising methods have gradually been adopted for PET image denoising and have shown great promise. Among these methods, one interesting framework is the conditional deep image prior (CDIP), an unsupervised method that requires neither prior training nor a large number of training pairs. In this work, we combined CDIP with Logan parametric image estimation to generate high-quality parametric images. In our method, the kinetic model is the Logan reference tissue model, which avoids arterial sampling. A neural network was used to represent the images of the Logan slope and intercept. The patient’s computed tomography (CT) image or magnetic resonance (MR) image was used as the network input to provide anatomical information. The optimization function was constructed and solved by the alternating direction method of multipliers (ADMM) algorithm. Both simulation and clinical patient datasets demonstrated that the proposed method could generate parametric images with more detailed structures. Quantification results showed that the proposed method achieved higher contrast-to-noise ratio (CNR) improvement ratios (PET/CT datasets: 62.25%±29.93%; striatum of brain PET datasets: 129.51%±32.13%; thalamus of brain PET datasets: 128.24%±31.18%) than Gaussian-filtered results (PET/CT datasets: 23.33%±18.63%; striatum of brain PET datasets: 74.71%±8.71%; thalamus of brain PET datasets: 73.02%±9.34%) and nonlocal means (NLM) denoised results (PET/CT datasets: 37.55%±26.56%; striatum of brain PET datasets: 100.89%±16.13%; thalamus of brain PET datasets: 103.59%±16.37%).
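
To make the kinetic step concrete, here is a minimal sketch of voxel-wise Logan reference-tissue graphical analysis, the model whose slope and intercept images the network represents. The array names (`ct` for tissue time-activity curves, `cref` for the reference-region curve, `tmid` for frame mid-times) and the plain least-squares fit are illustrative assumptions, not the authors' implementation, which embeds this estimation inside an ADMM loop.

```python
# Hypothetical sketch of voxel-wise Logan reference-tissue analysis.
import numpy as np

def logan_slope_intercept(ct, cref, tmid, t_star_idx):
    """ct: (frames, voxels) tissue TACs; cref: (frames,) reference TAC;
    tmid: (frames,) frame mid-times; fit uses frames >= t_star_idx."""
    # Cumulative integrals of the tissue and reference TACs (trapezoidal rule).
    int_ct = np.array([np.trapz(ct[:k + 1], tmid[:k + 1], axis=0)
                       for k in range(len(tmid))])
    int_cref = np.array([np.trapz(cref[:k + 1], tmid[:k + 1])
                         for k in range(len(tmid))])
    eps = 1e-8
    y = int_ct[t_star_idx:] / (ct[t_star_idx:] + eps)          # (frames', voxels)
    x = int_cref[t_star_idx:, None] / (ct[t_star_idx:] + eps)  # (frames', voxels)
    # Ordinary least squares per voxel: slope image (DVR) and intercept image.
    xm, ym = x.mean(axis=0), y.mean(axis=0)
    slope = ((x - xm) * (y - ym)).sum(axis=0) / (((x - xm) ** 2).sum(axis=0) + eps)
    intercept = ym - slope * xm
    return slope, intercept
```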

4.
Variation between stains in histopathology is commonplace across different medical centers. This can have a significant effect on the reliability of machine learning algorithms. In this paper, we propose to reduce performance variability by using cycle-consistent generative adversarial networks (CycleGANs) to remove staining variation. We improve upon the regular CycleGAN by incorporating residual learning. We comprehensively evaluate the performance of our stain transformation method and compare its usefulness, in addition to extensive data augmentation, for enhancing the robustness of tissue segmentation algorithms. Our steps are as follows: first, we train a model to perform segmentation on tissue slides from a single source center, while heavily applying augmentations to increase robustness to unseen data. Second, we evaluate and compare the segmentation performance on data from other centers, both with and without applying our CycleGAN stain transformation. We compare segmentation performance on a colon tissue segmentation task and a kidney tissue segmentation task, covering data from six different centers. We show that our transformation method improves the overall Dice coefficient by 9% over the non-normalized target data and by 4% over traditional stain transformation in the colon tissue segmentation task. For kidney segmentation, our residual CycleGAN increases performance by 10% over no transformation and by around 2% compared to the non-residual CycleGAN.
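
As an illustration of the residual-learning idea, the sketch below shows a generator that predicts a stain residual and adds it back to its input, so the network only has to model the stain difference rather than resynthesize the whole tissue image. The layer layout and sizes are placeholders, not the paper's exact architecture.

```python
# Minimal sketch of a residual generator for stain transformation (illustrative).
import torch
import torch.nn as nn

class ResidualGenerator(nn.Module):
    def __init__(self, channels=3, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        # Identity skip connection: the body only models the stain difference,
        # which tends to preserve tissue morphology in the output.
        return torch.tanh(x + self.body(x))
```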

7.
Background and objective: Surgical tool detection, segmentation, and 3D pose estimation are crucial components in Computer-Assisted Laparoscopy (CAL). Existing frameworks have two main limitations. First, they do not integrate all three components. Integration is critical; for instance, one should not attempt to compute the pose if detection is negative. Second, they have highly specific requirements, such as the availability of a CAD model. We propose an integrated and generic framework whose sole requirement for the 3D pose is that the tool shaft is cylindrical. Our framework makes the most of deep learning and geometric 3D vision by combining a proposed Convolutional Neural Network (CNN) with algebraic geometry. We show two applications of our framework in CAL: tool-aware rendering in Augmented Reality (AR) and tool-based 3D measurement.

Methods: We name our CNN ART-Net (Augmented Reality Tool Network). It has a Single Input Multiple Output (SIMO) architecture with one encoder and multiple decoders to achieve detection, segmentation, and geometric primitive extraction. These primitives are the tool edge-lines, mid-line, and tip, and they allow the tool’s 3D pose to be estimated by a fast algebraic procedure. The framework only proceeds if a tool is detected. The accuracy of segmentation and geometric primitive extraction is boosted by a new Full resolution feature map Generator (FrG). We extensively evaluate the proposed framework on the EndoVis dataset and on newly proposed datasets. We compare the segmentation results against several variants of the Fully Convolutional Network (FCN) and U-Net. Several ablation studies are provided for detection, segmentation, and geometric primitive extraction. The proposed datasets are surgery videos of different patients.

Results: In detection, ART-Net achieves 100.0% in both average precision and accuracy. In segmentation, it achieves 81.0% in mean Intersection over Union (mIoU) on the robotic EndoVis dataset (articulated tool), where it outperforms both FCN and U-Net by 4.5pp and 2.9pp, respectively. It achieves 88.2% in mIoU on the remaining datasets (non-articulated tool). In geometric primitive extraction, ART-Net achieves mean Arc Length (mAL) errors of 2.45 and 2.23 for the edge-lines and mid-line, respectively, and a mean Euclidean distance error of 9.3 pixels for the tool-tip. Finally, in terms of 3D pose evaluated on animal data, our framework achieves mean absolute errors of 1.87 mm, 0.70 mm, and 4.80 mm on the X, Y, and Z coordinates, respectively, and a 5.94° angular error on the shaft orientation. It achieves 2.59 mm and 1.99 mm in mean and median location error of the tool head evaluated on patient data.

Conclusions: The proposed framework outperforms existing ones in detection and segmentation. Compared to separate networks, integrating the tasks in a single network preserves accuracy in detection and segmentation but substantially improves accuracy in geometric primitive extraction. Overall, our framework has similar or better accuracy in 3D pose estimation while largely improving robustness against the very challenging imaging conditions of laparoscopy. The source code of our framework and our annotated dataset will be made publicly available at https://github.com/kamruleee51/ART-Net.
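
As a rough illustration of the SIMO layout described above, the sketch below wires one shared encoder to a detection head plus separate decoders for segmentation and the geometric primitives (edge-lines, mid-line, tip), gating the dense outputs on a positive detection. All layer sizes and the gating threshold are hypothetical; this is not the published ART-Net.

```python
# Illustrative Single Input Multiple Output (SIMO) skeleton (not ART-Net itself).
import torch
import torch.nn as nn

class SIMONet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder (placeholder depth/widths).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Detection head: is a tool present in the frame?
        self.detect = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                    nn.Linear(64, 1))

        def decoder():  # one dense decoder per output map
            return nn.Sequential(
                nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, 1, 2, stride=2))

        self.seg, self.edges, self.midline, self.tip = (decoder() for _ in range(4))

    def forward(self, x):
        f = self.encoder(x)
        out = {"detection": torch.sigmoid(self.detect(f))}
        # Dense outputs (and the downstream pose step) only proceed
        # when the detection is positive, mirroring the framework's gating.
        if out["detection"].max() > 0.5:
            out.update(segmentation=self.seg(f), edge_lines=self.edges(f),
                       mid_line=self.midline(f), tip=self.tip(f))
        return out
```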

10.
Segmentation of lumen and vessel contours in intravascular ultrasound (IVUS) pullbacks is an arduous and time-consuming task that demands adequately trained human resources. In the present study, we propose a machine learning approach to automatically extract lumen and vessel boundaries from IVUS datasets. The proposed approach relies on the concatenation of a deep neural network, which delivers a preliminary segmentation, and a Gaussian process (GP) regressor, which constructs the final lumen and vessel contours. A multi-frame convolutional neural network (MFCNN) exploits adjacency information present in longitudinally neighboring IVUS frames, while the GP regression method filters high-dimensional noise, delivering a consistent representation of the contours. Overall, 160 IVUS pullbacks (63 patients) from the IBIS-4 study (Integrated Biomarkers and Imaging Study-4, Trial NCT00962416) were used in the present work. The MFCNN algorithm was trained with 100 IVUS pullbacks (8427 manually segmented frames), validated with 30 IVUS pullbacks (2583 manually segmented frames), and blindly tested with 30 IVUS pullbacks (2425 manually segmented frames). Image and contour metrics were used to characterize model performance by comparing ground truth (GT) and machine learning (ML) contours. Median values (interquartile range, IQR) of the Jaccard index for lumen and vessel were 0.913 [0.882, 0.935] and 0.940 [0.917, 0.957], respectively. Median values (IQR) of the Hausdorff distance for lumen and vessel were 0.196 mm [0.146, 0.275] mm and 0.163 mm [0.122, 0.234] mm, respectively. Also, the mean difference and limits of agreement for lumen area predictions were 0.19 mm² and [−1.1, 1.5] mm², while those for plaque burden were −0.0022 and [−0.082, 0.078]. The results obtained with the model developed in this work allow us to conclude that the proposed machine learning approach delivers accurate segmentations in terms of image metrics, contour metrics and clinically relevant variables, enabling its use in clinical routine by mitigating the costs involved in the manual management of IVUS datasets.
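
To illustrate the second stage, the sketch below smooths a noisy contour with a GP regressor over polar angle, using a periodic kernel so the contour stays closed. The function and variable names (`smooth_contour`, `theta`, `radius`) and the kernel choice are assumptions for illustration, not the study's exact GP configuration.

```python
# Hypothetical sketch: GP smoothing of a noisy lumen/vessel contour in polar form.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

def smooth_contour(theta, radius):
    """theta: (N,) polar angles; radius: (N,) noisy radial distances
    extracted from the preliminary CNN segmentation."""
    # Periodic kernel (period 2*pi) keeps the contour closed; the white
    # kernel absorbs high-dimensional measurement noise.
    kernel = ExpSineSquared(length_scale=1.0, periodicity=2 * np.pi) \
             + WhiteKernel(noise_level=1e-2)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(theta[:, None], radius)
    theta_dense = np.linspace(0.0, 2 * np.pi, 360)
    return theta_dense, gp.predict(theta_dense[:, None])
```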

14.
Recent developments in artificial intelligence have generated increasing interest in deploying automated image analysis for diagnostic imaging and large-scale clinical applications. However, inaccuracy from automated methods could lead to incorrect conclusions, diagnoses or even harm to patients. Manual inspection for potential inaccuracies is labor-intensive and time-consuming, hampering progress towards fast and accurate clinical reporting in high volumes. To promote reliable fully-automated image analysis, we propose a quality control-driven (QCD) segmentation framework. It is an ensemble of neural networks that integrates image analysis and quality control. The novelty of this framework is the on-the-fly selection of the optimal segmentation based on predicted segmentation accuracy. Additionally, the framework visualizes segmentation agreement to provide traceability of the quality control process. In this work, we demonstrated the utility of the framework in cardiovascular magnetic resonance T1-mapping, a quantitative technique for myocardial tissue characterization. The framework achieved near-perfect agreement with expert image analysts in estimating myocardial T1 values (r=0.987, p<0.0005; mean absolute error (MAE)=11.3 ms), with accurate segmentation quality prediction (Dice coefficient prediction MAE=0.0339) and classification (accuracy=0.99), and a fast average processing time of 0.39 seconds per image. In summary, the QCD framework can deliver high-throughput automated image analysis with the speed and accuracy that are highly desirable for large-scale clinical applications.
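
The selection step can be sketched as follows: each ensemble member proposes a segmentation, a quality-control network predicts each candidate's accuracy (e.g., its Dice coefficient), and the best-scoring mask is kept together with an agreement measure for traceability. The handles `models` and `qc_net` are hypothetical placeholders for trained networks, not the paper's API.

```python
# Illustrative quality control-driven (QCD) selection over an ensemble.
import numpy as np

def qcd_select(image, models, qc_net):
    """models: callables returning candidate masks; qc_net: callable
    predicting a quality score (e.g. Dice) for an (image, mask) pair."""
    candidates = [m(image) for m in models]                 # candidate masks
    scores = [qc_net(image, mask) for mask in candidates]   # predicted accuracy
    best = int(np.argmax(scores))                           # on-the-fly selection
    # Inter-candidate agreement makes the QC decision traceable/visualizable.
    agreement = np.mean([np.mean(candidates[best] == c) for c in candidates])
    return candidates[best], scores[best], agreement
```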

17.
Convolutional neural networks (CNNs) are state-of-the-art computer vision techniques for various tasks, particularly for image classification. However, there are domains, such as histopathology image analysis, where training classification models that generalize across several datasets is still an open challenge because of the highly heterogeneous data and the lack of large datasets with local annotations of the regions of interest. Histopathology concerns the microscopic analysis of tissue specimens processed on glass slides to identify diseases such as cancer. Digital pathology concerns the acquisition, management and automatic analysis of digitized histopathology images, which are large, on the order of 100,000² pixels per image. Digital histopathology images are highly heterogeneous due to the variability of the image acquisition procedures. Creating locally labeled regions (required for training) is time-consuming and often expensive in the medical field, as physicians usually have to annotate the data. Despite the advances in deep learning, leveraging strongly and weakly annotated datasets to train classification models is still an unsolved problem, mainly when data are very heterogeneous. Large amounts of data are needed to create models that generalize well. This paper presents a novel approach to training CNNs that generalize to heterogeneous datasets originating from various sources and without local annotations. The data analysis pipeline targets Gleason grading on prostate images and includes two models in sequence, following a teacher/student training paradigm. The teacher model (a high-capacity neural network) automatically annotates a set of pseudo-labeled patches used to train the student model (a smaller network). The two models are trained with two different teacher/student approaches: semi-supervised learning and semi-weakly supervised learning. For each of the two approaches, three student training variants are presented. The baseline is provided by training the student model only with the strongly annotated data. Classification performance is evaluated on the student model at the patch level (using the local annotations of the Tissue Micro-Arrays Zurich dataset) and at the global level (using the TCGA-PRAD, The Cancer Genome Atlas-PRostate ADenocarcinoma, whole-slide image Gleason score). The teacher/student paradigm allows the models to generalize better on both datasets, despite the inter-dataset heterogeneity and the small number of local annotations used. The classification performance is improved at the patch level (up to κ=0.6127±0.0133 from κ=0.5667±0.0285), at the TMA core level (Gleason score) (up to κ=0.7645±0.0231 from κ=0.7186±0.0306) and at the WSI level (Gleason score) (up to κ=0.4529±0.0512 from κ=0.2293±0.1350). The results show that with the teacher/student paradigm, it is possible to train models that generalize on datasets from entirely different sources, despite the inter-dataset heterogeneity and the lack of large datasets with local annotations.
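
A minimal sketch of the teacher step, under the assumption of a confidence-thresholded pseudo-labeling scheme: the teacher labels unannotated patches, and only confident predictions are kept for student training. The threshold value and data handles are illustrative; the paper's semi-supervised and semi-weakly supervised variants differ in the details.

```python
# Hedged sketch of teacher pseudo-labeling for student training.
import torch

@torch.no_grad()
def pseudo_label(teacher, unlabeled_loader, threshold=0.9):
    """teacher: trained high-capacity classifier; unlabeled_loader yields
    batches of patches without local annotations."""
    teacher.eval()
    pairs = []
    for patches in unlabeled_loader:
        probs = torch.softmax(teacher(patches), dim=1)
        conf, labels = probs.max(dim=1)
        keep = conf >= threshold          # discard uncertain pseudo-labels
        pairs.append((patches[keep], labels[keep]))
    # These (patch, pseudo-label) pairs are added to the strongly annotated
    # set to train the smaller student network.
    return pairs
```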

18.
In silico tissue models (viz. numerical phantoms) provide a mechanism for evaluating quantitative models of magnetic resonance imaging. This includes the validation and sensitivity analysis of imaging biomarkers and tissue microstructure parameters. This study proposes a novel method to generate a realistic numerical phantom of myocardial microstructure. The proposed method extends previous studies by accounting for the variability of the cardiomyocyte shape, water exchange between the cardiomyocytes (intercalated discs), the disorder class of myocardial microstructure, and four sheetlet orientations. In the first stage of the method, cardiomyocytes and sheetlets are generated by considering the shape variability and the intercalated discs in cardiomyocyte-to-cardiomyocyte connections. Sheetlets are then aggregated and oriented in the directions of interest. A morphometric study demonstrates no significant difference (p>0.01) between the distributions of the volume, length, and primary and secondary axes of the numerical and real (literature) cardiomyocyte data. Moreover, structural correlation analysis validates that the in silico tissue is in the same class of disorderliness as the real tissue. Additionally, the absolute angle differences between the simulated helical angle (HA) and the input HA (reference value) of the cardiomyocytes (4.3°±3.1°) demonstrate good agreement with the absolute angle differences between the HA measured using experimental cardiac diffusion tensor imaging (cDTI) and histology (reference value) reported by Holmes et al. (2000) (3.7°±6.4°) and Scollan et al. (1998) (4.9°±14.6°). Furthermore, the angular distance between the eigenvectors and sheetlet angles of the input and simulated cDTI is much smaller than that between the angles measured using structural tensor imaging (as a gold standard) and experimental cDTI. Combined with the qualitative results, these results confirm that the proposed method can generate richer numerical phantoms of the myocardium than previous studies.
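
One ingredient of such phantoms can be sketched directly: assigning each cardiomyocyte a helical angle (HA) that varies linearly with transmural depth, which serves as the input (reference) HA the simulated cDTI is compared against. The linear rule and the ±60° endocardium/epicardium limits are common literature assumptions, not necessarily the paper's exact inputs.

```python
# Illustrative helical-angle assignment for a myocardial numerical phantom.
import numpy as np

def helical_angle(depth, ha_endo=60.0, ha_epi=-60.0):
    """depth: transmural position in [0, 1] (0 = endocardium, 1 = epicardium).
    Returns the helical angle in degrees under a linear transmural rule."""
    return ha_endo + depth * (ha_epi - ha_endo)

def long_axis_direction(ha_deg, circ, longit):
    """Rotate the circumferential unit vector toward the longitudinal unit
    vector by the helical angle to get the cardiomyocyte long axis."""
    ha = np.deg2rad(ha_deg)
    return np.cos(ha) * circ + np.sin(ha) * longit
```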
