Similar Documents
20 similar documents found.
1.
Automatic and accurate esophageal lesion classification and segmentation is of great significance for clinically estimating the status of esophageal diseases and planning suitable diagnostic schemes. Because lesions vary between individuals yet look similar in shape, color, and texture, current clinical methods remain subject to potential high-risk and time-consumption issues. In this paper, we propose an Esophageal Lesion Network (ELNet) for automatic esophageal lesion classification and segmentation using deep convolutional neural networks (DCNNs). The method automatically integrates dual-view contextual lesion information to extract global and local features for esophageal lesion classification, and a lesion-specific segmentation network is proposed for automatic pixel-level esophageal lesion annotation. On an established large-scale clinical database of 1051 white-light endoscopic images, the method is validated with ten-fold cross-validation. Experimental results show that the proposed framework achieves classification with a sensitivity of 0.9034, specificity of 0.9718, and accuracy of 0.9628, and segmentation with a sensitivity of 0.8018, specificity of 0.9655, and accuracy of 0.9462. These results indicate that our method enables efficient, accurate, and reliable esophageal lesion diagnosis in clinics.
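For readers reproducing this kind of evaluation, the sensitivity, specificity and accuracy figures above follow the usual confusion-matrix definitions. The following is a minimal Python sketch with illustrative labels only, not the ELNet evaluation code:

```python
import numpy as np

def binary_classification_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy from binary label arrays."""
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Toy example: 1 = lesion present, 0 = normal (placeholder labels)
print(binary_classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))
```

At pixel level the same formulas apply with flattened segmentation masks in place of image-level labels.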

2.

Purpose

Segmentation of the liver from abdominal computed tomography (CT) images is an essential step in some computer-assisted clinical interventions, such as surgery planning for living donor liver transplant, radiotherapy and volume measurement. In this work, we develop a deep learning algorithm with graph cut refinement to automatically segment the liver in CT scans.

Methods

The proposed method consists of two main steps: (i) simultaneous liver detection and probabilistic segmentation using a 3D convolutional neural network; (ii) refinement of the initial segmentation with graph cut and the previously learned probability map.

Results

The proposed approach was validated on forty CT volumes taken from two public databases, MICCAI-Sliver07 and 3Dircadb1. For the MICCAI-Sliver07 test dataset, the mean volumetric overlap error (VOE), relative volume difference (RVD), average symmetric surface distance (ASD), root-mean-square symmetric surface distance (RMSD) and maximum symmetric surface distance (MSD) are 5.9%, 2.7%, 0.91 mm, 1.88 mm and 18.94 mm, respectively. For the 3Dircadb1 dataset, the mean VOE, RVD, ASD, RMSD and MSD are 9.36%, 0.97%, 1.89 mm, 4.15 mm and 33.14 mm, respectively.
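The overlap and volume metrics quoted here follow the standard SLIVER07 definitions. As a rough illustration (not the authors' evaluation code), VOE and RVD can be computed from binary masks as below; the surface-distance metrics additionally require extracting the segmentation surfaces.

```python
import numpy as np

def voe_rvd(seg, ref):
    """Volumetric overlap error and relative volume difference, both in %,
    between a binary segmentation and a binary reference volume."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    voe = 100.0 * (1.0 - inter / union)                 # VOE, %
    rvd = 100.0 * (seg.sum() - ref.sum()) / ref.sum()   # RVD, % (signed)
    return voe, rvd
```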

Conclusions

The proposed method is fully automatic without any user interaction. Quantitative results reveal that the proposed approach is efficient and accurate for hepatic volume estimation in a clinical setup. The high correlation between the automatic and manual references shows that the proposed method can be good enough to replace the time-consuming and nonreproducible manual segmentation method.

3.
Accurate delineation of multiple organs is a critical process for various medical procedures, and manual delineation can be operator-dependent and time-consuming. Existing organ segmentation methods, mainly inspired by natural image analysis techniques, may not fully exploit the traits of the multi-organ segmentation task and cannot accurately segment organs with various shapes and sizes simultaneously. In this work, the characteristics of multi-organ segmentation are considered: the global count, position and scale of organs are generally predictable, while their local shape and appearance are volatile. Thus, we supplement the region segmentation backbone with a contour localization task to increase the certainty along delicate boundaries. Meanwhile, each organ has exclusive anatomical traits, which motivates us to deal with class variability using class-wise convolutions that highlight organ-specific features and suppress irrelevant responses at different fields of view. To validate our method with adequate numbers of patients and organs, we constructed a multi-center dataset containing 110 3D CT scans with 24,528 axial slices, and provided voxel-level manual segmentations of 14 abdominal organs, adding up to 1,532 3D structures in total. Extensive ablation and visualization studies on this dataset validate the effectiveness of the proposed method. Quantitative analysis shows that we achieve state-of-the-art performance for most abdominal organs, obtaining a 3.63 mm 95% Hausdorff Distance and 83.32% Dice Similarity Coefficient on average.
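The two summary metrics reported above can be computed from binary masks. The sketch below is an illustration rather than the authors' code: it extracts boundary voxels with a morphological erosion and uses a KD-tree for the 95th-percentile symmetric Hausdorff distance.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def surface_voxels(mask):
    """Boundary voxels of a binary mask (mask minus its erosion)."""
    return mask & ~binary_erosion(mask)

def dice_and_hd95(seg, ref, spacing=(1.0, 1.0, 1.0)):
    """Dice coefficient and 95th-percentile symmetric Hausdorff distance (in mm)."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    dice = 2.0 * (seg & ref).sum() / (seg.sum() + ref.sum())

    pts_seg = np.argwhere(surface_voxels(seg)) * np.asarray(spacing)
    pts_ref = np.argwhere(surface_voxels(ref)) * np.asarray(spacing)
    d_sr = cKDTree(pts_ref).query(pts_seg)[0]   # seg surface -> ref surface
    d_rs = cKDTree(pts_seg).query(pts_ref)[0]   # ref surface -> seg surface
    hd95 = np.percentile(np.concatenate([d_sr, d_rs]), 95)
    return dice, hd95
```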

4.
The accurate diagnosis of various esophageal diseases at different stages is crucial for providing precision therapy planning and improving the 5-year survival rate of esophageal cancer patients. Automatic classification of esophageal diseases in gastroscopic images can help doctors improve diagnostic efficiency and accuracy. Existing deep learning-based classification methods can only classify a few categories of esophageal diseases at a time. Hence, we propose a novel efficient channel attention deep dense convolutional neural network (ECA-DDCNN), which classifies esophageal gastroscopic images into four main categories: normal esophagus (NE), precancerous esophageal diseases (PEDs), early esophageal cancer (EEC) and advanced esophageal cancer (AEC), covering six common sub-categories of esophageal diseases and the normal esophagus (seven sub-categories in total). In total, 20,965 gastroscopic images were collected from 4,077 patients and used to train and test the proposed method. Extensive experimental results demonstrate convincingly that the proposed ECA-DDCNN outperforms other state-of-the-art methods. The classification accuracy (Acc) of our method is 90.63% and the averaged area under the curve (AUC) is 0.9877. Compared with other state-of-the-art methods, our method shows better performance in classifying various esophageal diseases, and it also achieves higher true positive (TP) rates for esophageal diseases with similar mucosal features. In conclusion, the proposed classification method has confirmed its potential for the diagnosis of a wide variety of esophageal diseases.

5.
Objective: To investigate the accuracy and feasibility of automatic detection and classification of rib fractures on non-contrast chest CT based on a convolutional neural network (CNN). Methods: 974 adult patients with rib fractures treated at Hospital A between January 2011 and January 2019 were retrospectively collected; an additional 25 patients from Hospital B and 25 patients from Hospital C (January 2019) were collected as a multi-center test set for robustness validation. CT images of three fracture types (fresh, healing and old fractures) were automatically detected and output as structured reports. Precision, recall and F1-score were used to measure the diagnostic performance of the CNN model. Detection/diagnosis time, precision, sensitivity and fROC curves were used to compare the structured reports of the CNN model with the diagnoses of attending radiologists. Results: The CNN model was robust on all test sets (mean precision, mean recall and mean F1-score all ≥0.8). Detection performance on fresh and healing fractures was slightly higher than on old fractures (mean precision: 0.829 and 0.867 vs. 0.814; mean recall: 0.875 and 0.870 vs. 0.827; mean F1-score: 0.851 and 0.868 vs. 0.821). The structured reports output by the CNN model reached the diagnostic level of attending radiologists, and the CNN model shortened detection time by 132.07 s on average. Conclusion: The CNN model can automatically detect and classify rib fractures in a shorter time, reaching the diagnostic level of attending radiologists, with good robustness and feasibility.
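The precision, recall and F1 values quoted above follow the usual detection definitions; a minimal sketch, with illustrative counts that are not taken from the study:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 from detection counts
    (true positives, false positives, missed fractures)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts for one fracture type (placeholder values)
print(precision_recall_f1(tp=83, fp=17, fn=12))
```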

6.
This study proposes a fully automated approach for left atrial segmentation from routine cine long-axis cardiac magnetic resonance image sequences using deep convolutional neural networks and Bayesian filtering. The proposed approach consists of a classification network that automatically detects the type of long-axis sequence, and three different convolutional neural network models followed by unscented Kalman filtering (UKF) that delineate the left atrium. Instead of training and predicting on all long-axis sequence types together, the proposed approach first identifies the image sequence type as a 2-, 3- or 4-chamber view and then performs prediction with the network trained for that particular sequence type. The datasets were acquired retrospectively and ground-truth manual segmentation was provided by an expert radiologist. In addition to the classification and segmentation networks, another neural network is trained and used to select image sequences for further processing by UKF, which imposes temporal consistency over the cardiac cycle. A cyclic dynamic model with time-varying angular frequency is introduced in the UKF to characterize the variations in cardiac motion during image scanning. The proposed approach was trained and evaluated separately with varying amounts of training data, using images acquired from 20, 40, 60 and 80 patients. Evaluations over 1515 images, with equal numbers of images from each chamber group acquired from an additional 20 patients, demonstrated that the proposed model outperformed the state of the art and yielded mean Dice coefficients of 94.1%, 93.7% and 90.1% for 2-, 3- and 4-chamber sequences, respectively, when trained with datasets from 80 patients.

7.
Deep convolutional neural networks (DCNN) achieve very high accuracy in segmenting various anatomical structures in medical images but often suffer from relatively poor generalizability. Multi-atlas segmentation (MAS), while less accurate than DCNN in many applications, tends to generalize well to unseen datasets with characteristics different from the training dataset. Several groups have attempted to integrate the power of DCNN to learn complex data representations with the robustness of MAS to changes in image characteristics. However, these studies primarily focused on replacing individual components of MAS with DCNN models and reported marginal improvements in accuracy. In this study we describe and evaluate a 3D end-to-end hybrid MAS and DCNN segmentation pipeline, called Deep Label Fusion (DLF). The DLF pipeline consists of two main components with learnable weights: a weighted-voting subnet that mimics the MAS algorithm, and a fine-tuning subnet that corrects residual segmentation errors to improve final segmentation accuracy. We evaluate DLF on five datasets that represent a diversity of anatomical structures (medial temporal lobe subregions and lumbar vertebrae) and imaging modalities (multi-modality, multi-field-strength MRI and computed tomography). These experiments show that DLF achieves segmentation accuracy comparable to nnU-Net (Isensee et al., 2020), the state-of-the-art DCNN pipeline, when evaluated on a dataset with characteristics similar to the training datasets, while outperforming nnU-Net on tasks that involve generalization to datasets with different characteristics (different MRI field strength or different patient population). DLF is also shown to consistently improve upon conventional MAS methods. In addition, a modality augmentation strategy tailored for multimodal imaging is proposed and demonstrated to be beneficial for the segmentation accuracy of learning-based methods, including DLF and DCNN, in missing-data scenarios at test time, as well as for the interpretability of the contribution of each individual modality.
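The weighted-voting subnet in DLF plays the role of the classical multi-atlas voting step. As a point of reference only, a plain (non-learned) spatially weighted voting over candidate segmentations might look like the numpy sketch below; roughly speaking, DLF makes these weights learnable and adds a fine-tuning subnet on top.

```python
import numpy as np

def weighted_vote(candidate_labels, weights, num_classes):
    """Fuse atlas candidate segmentations by spatially varying weighted voting.

    candidate_labels : (A, D, H, W) int array, one label map per atlas
    weights          : (A, D, H, W) float array, per-atlas, per-voxel weights
    """
    votes = np.zeros((num_classes,) + candidate_labels.shape[1:], dtype=np.float32)
    for a in range(candidate_labels.shape[0]):
        for c in range(num_classes):
            votes[c] += weights[a] * (candidate_labels[a] == c)
    return votes.argmax(axis=0)   # fused label map, shape (D, H, W)
```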

8.
Investigations have been made to explore the applicability of an off-the-shelf deep convolutional neural network (DCNN) architecture, the residual neural network (ResNet), to the classification of the crystal structure of materials from electron diffraction patterns without prior knowledge of the material systems under consideration. The dataset required for training and validating the ResNet architectures was obtained by computer simulation of selected area electron diffraction (SAD) in transmission electron microscopy. Acceleration voltages, zone axes, and camera lengths were used as variables, and crystallographic information files (CIF) obtained from open crystal data repositories were used as inputs. The cubic crystal system was chosen as a model system, and five space groups in the cubic system (213, 221, 225, 227, and 229) were selected for testing and validation, based on the distinguishability of the SAD patterns. The simulated diffraction patterns were regrouped and labeled from the viewpoint of computer vision, i.e., the way the neural network recognizes the two-dimensional representation of the three-dimensional lattice structure of crystals, for improved training and classification efficiency. Comparison of ResNet architectures with varying numbers of layers demonstrated that the ResNet101 architecture could classify the space groups with a validation accuracy of 92.607%.

The off-the-shelf deep convolutional neural network architecture, ResNet, could classify the space group of materials with cubic crystal structures with a prediction accuracy of 92.607%, using selected area electron diffraction patterns.
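As a rough illustration of how an off-the-shelf ResNet101 can be repurposed for this five-class space-group task, the PyTorch/torchvision sketch below swaps the input stem to single-channel diffraction patterns and the final layer to five outputs. The single-channel input, image size and weight initialization are assumptions for the sketch, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Five candidate cubic space groups: 213, 221, 225, 227, 229
NUM_CLASSES = 5

# torchvision >= 0.13 API; pass pretrained ImageNet weights here if desired
model = models.resnet101(weights=None)
# Accept 1-channel diffraction patterns instead of RGB images (assumption)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
# Replace the 1000-way ImageNet head with a 5-way space-group head
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

patterns = torch.randn(8, 1, 224, 224)   # a batch of simulated SAD patterns (placeholder)
logits = model(patterns)                 # shape (8, 5)
```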

9.
Ischemic stroke lesions and white matter hyperintensity (WMH) lesions appear as regions of abnormal signal intensity on magnetic resonance imaging (MRI) sequences. Ischemic stroke is a frequent cause of death and disability, while WMH is a risk factor for stroke. Accurate segmentation and quantification of ischemic stroke and WMH lesions are important for diagnosis and prognosis; however, radiologists have a difficult time distinguishing these two similar types of lesion. A novel deep residual attention convolutional neural network (DRANet) is proposed to accurately and simultaneously segment and quantify ischemic stroke and WMH lesions in MRI images. DRANet inherits the advantages of the U-net design and applies a novel attention module that extracts high-quality features from the input images. Moreover, the Dice loss function is used to train DRANet to address data imbalance in the training dataset. DRANet is trained and evaluated on 742 2D MRI images drawn from the sub-acute ischemic stroke lesion segmentation (SISS) challenge. Empirical tests demonstrate that DRANet outperforms several other state-of-the-art segmentation methods and accurately segments and quantifies both ischemic stroke and WMH lesions. Ablation experiments reveal that the attention modules improve the predictive performance of DRANet.
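The Dice loss mentioned above is commonly implemented as a soft (differentiable) Dice coefficient subtracted from one; a minimal PyTorch sketch for the binary case, not the DRANet implementation, is:

```python
import torch

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss for binary lesion segmentation.

    logits : (N, 1, H, W) raw network outputs
    target : (N, 1, H, W) binary ground-truth masks
    """
    probs = torch.sigmoid(logits)
    dims = (1, 2, 3)
    intersection = (probs * target).sum(dims)
    union = probs.sum(dims) + target.sum(dims)
    dice = (2.0 * intersection + eps) / (union + eps)
    return 1.0 - dice.mean()
```

Because the loss is driven by the overlap ratio rather than per-pixel counts, it is less dominated by the abundant background pixels than plain cross-entropy, which is why it helps with class imbalance.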

10.
The interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis applications. Diagnosis also relies on the comprehensive analysis of multiple organs and quantitative measures of soft tissue. An automated method optimized for medical image data is presented for the simultaneous segmentation of four abdominal organs from 4D CT data using graph cuts. Contrast-enhanced CT scans were obtained at two phases: non-contrast and portal venous. Intra-patient data were spatially normalized by non-linear registration. Then 4D convolution using population training information of contrast-enhanced liver, spleen and kidneys was applied to multiphase data to initialize the 4D graph and adapt to patient-specific data. CT enhancement information and constraints on shape, from Parzen windows, and location, from a probabilistic atlas, were input into a new formulation of a 4D graph. Comparative results demonstrate the effects of appearance, enhancement, shape and location on organ segmentation. All four abdominal organs were segmented robustly and accurately with volume overlaps over 93.6% and average surface distances below 1.1 mm.

11.
12.
Purpose

Improved segmentation of soft objects was sought using a new method that combines level set segmentation with statistical deformation models, using prior knowledge of the shape of an object as well as information derived from the input image.

Methods

Statistical deformation models were created using Euclidean distance functions of binary data and a multi-hierarchical registration approach based on a mutual information metric and demons deformable registration. This approach is motivated by the fact that models based on signed distance maps, traditionally combined with level set segmentation, can result in irregular shapes and do not establish explicit correspondences. By using statistical deformation models as the shape representation and a maximum a posteriori (MAP) estimation model to estimate the MAP shape of the object to be segmented, a robust segmentation algorithm using accurate shape models could be developed.

Results

The accuracy and correctness of the synthesized models were evaluated on different 3D objects (cardiac MRI and a spinal CT vertebral segment), and the segmentation algorithm was validated by performing different segmentation tasks on various image modalities. The results of this evaluation are very promising and show the potential utility of the approach.

Conclusion

Initial results demonstrate that the approach is feasible and may be advantageous over alternative segmentation methods. Extensions of the model, which also incorporate prior knowledge about the spatial distribution of grey values, are currently under development.

13.
Variety identification of seeds is critical for assessing variety purity and ensuring crop yield. In this paper, a novel method based on hyperspectral imaging (HSI) and a deep convolutional neural network (DCNN) is proposed to discriminate the varieties of oat seeds, and the representation ability of the DCNN is investigated. The hyperspectral images, with a spectral range of 874–1734 nm, were first processed by principal component analysis (PCA) for exploratory visual discrimination. A DCNN trained in an end-to-end manner was then developed. The deep spectral features automatically learnt by the DCNN were extracted and combined with traditional classifiers (logistic regression (LR) and support vector machines with an RBF kernel (RBF_SVM) and a linear kernel (LINEAR_SVM)) to construct discriminant models. Baseline models were built with the traditional classifiers using the full set of wavelengths and optimal wavelengths selected by the second-derivative (2nd derivative) method. The comparison showed that all DCNN-based models outperformed the baseline models. The DCNN trained in an end-to-end manner achieved the highest accuracy of 99.19% on the testing set and was finally employed to visualize the variety classification. The results demonstrate that the deep spectral features, with their outstanding representation ability, enable HSI together with a DCNN to serve as a reliable tool for rapid and accurate variety identification, which would help to develop an on-line system for quality detection of oat seeds as well as other grain seeds.

The excellent representation ability of deep spectral features enables hyperspectral imaging combined with a deep convolutional neural network to be a powerful tool for large-scale seed inspection in the modern seed industry.
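The "deep features plus traditional classifier" pattern described above can be prototyped in a few lines with scikit-learn once the DCNN features have been exported; in the sketch below the arrays are random placeholders, not the oat-seed data, and the classifier settings are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# deep_features: (n_samples, n_features) array exported from a trained DCNN (placeholder)
deep_features = np.random.rand(200, 128)
labels = np.random.randint(0, 4, size=200)     # placeholder variety labels

X_tr, X_te, y_tr, y_te = train_test_split(
    deep_features, labels, test_size=0.3, random_state=0)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("RBF_SVM", SVC(kernel="rbf")),
                  ("LINEAR_SVM", SVC(kernel="linear"))]:
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))          # test-set accuracy per classifier
```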

14.
Phase Contrast Microscopy (PCM) is an important tool for the long term study of living cells. Unlike fluorescence methods which suffer from photobleaching of fluorophore or dye molecules, PCM image contrast is generated by the natural variations in optical index of refraction. Unfortunately, the same physical principles which allow for these studies give rise to complex artifacts in the raw PCM imagery. Of particular interest in this paper are neuron images where these image imperfections manifest in very different ways for the two structures of specific interest: cell bodies (somas) and dendrites. To address these challenges, we introduce a novel parametric image model using the level set framework and an associated variational approach which simultaneously restores and segments this class of images. Using this technique as the basis for an automated image analysis pipeline, results for both the synthetic and real images validate and demonstrate the advantages of our approach. OCIS codes: (100.0100) Image processing, (100.2960) Image analysis, (100.1830) Deconvolution, (100.3020) Image reconstruction-restoration, (100.3190) Inverse problems

15.
In this article, a novel dual-channel convolutional neural network (DC-CNN) framework is proposed for accurate spectral-spatial classification of hyperspectral images (HSI). In this framework, a one-dimensional CNN is utilized to automatically extract hierarchical spectral features and a two-dimensional CNN is applied to extract hierarchical space-related features; a softmax regression classifier then combines the spectral and spatial features and predicts the final classification results. To overcome the problem of the limited number of available training samples in HSIs, we propose a simple data augmentation method which is efficient and effective for improving HSI classification accuracy. For comparison and validation, we test the proposed method along with three other deep-learning-based HSI classification methods on two real-world HSI data sets. Experimental results demonstrate that our DC-CNN-based method outperforms the state-of-the-art methods by a considerable margin.
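A minimal PyTorch sketch of the dual-channel idea, one 1-D branch for a pixel's spectrum and one 2-D branch for its spatial neighbourhood, with the two feature vectors concatenated before the classifier, is given below. Layer sizes and input shapes are illustrative assumptions, not the DC-CNN configuration from the article.

```python
import torch
import torch.nn as nn

class DualChannelCNN(nn.Module):
    """1-D branch for a pixel's spectrum, 2-D branch for its spatial patch."""
    def __init__(self, n_classes=9):
        super().__init__()
        self.spectral = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())       # -> 16 spectral features
        self.spatial = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())       # -> 16 spatial features
        self.classifier = nn.Linear(16 + 16, n_classes)  # softmax is applied inside the loss

    def forward(self, spectrum, patch):
        feats = torch.cat([self.spectral(spectrum), self.spatial(patch)], dim=1)
        return self.classifier(feats)

net = DualChannelCNN()
# Placeholder inputs: 103-band spectra and 9x9 spatial patches
logits = net(torch.randn(4, 1, 103), torch.randn(4, 1, 9, 9))   # -> (4, 9) class scores
```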

16.

Purpose

Nonalcoholic fatty liver disease is the most common liver abnormality. To date, liver biopsy is the reference standard for direct quantification of liver steatosis in hepatic tissue samples. In this paper we propose a neural network-based approach for nonalcoholic fatty liver disease assessment in ultrasound.

Methods

We used the Inception-ResNet-v2 deep convolutional neural network, pre-trained on the ImageNet dataset, to extract high-level features from liver B-mode ultrasound image sequences. The steatosis level of each liver was graded by wedge biopsy. The proposed approach was compared with the hepatorenal index technique and the gray-level co-occurrence matrix algorithm. After feature extraction, we applied the support vector machine algorithm to classify images containing fatty liver. Based on liver biopsy, fatty liver was defined as more than 5% of hepatocytes exhibiting steatosis. Next, we used the extracted features and the Lasso regression method to assess the steatosis level.
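A rough sketch of this kind of pipeline, a pre-trained Inception-ResNet-v2 used as a fixed feature extractor feeding an SVM classifier and a Lasso regressor, is shown below using Keras and scikit-learn. The arrays are placeholders, and the image sizes, pre-processing and hyperparameters are assumptions rather than the study's settings.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.linear_model import Lasso

# Pre-trained backbone used purely as a fixed, global-average-pooled feature extractor
backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", pooling="avg")

frames = np.random.rand(16, 299, 299, 3).astype("float32")    # placeholder B-mode frames
x = tf.keras.applications.inception_resnet_v2.preprocess_input(frames * 255.0)
features = backbone.predict(x, verbose=0)                      # one feature vector per frame

labels = np.random.randint(0, 2, 16)        # fatty liver yes/no (placeholder)
steatosis = np.random.rand(16) * 40.0       # steatosis level in % (placeholder)

SVC(kernel="rbf").fit(features, labels)     # classification: fatty liver vs. normal
Lasso(alpha=0.1).fit(features, steatosis)   # regression: steatosis level
```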

Results

The area under the receiver operating characteristic curve obtained using the proposed approach was 0.977, higher than that obtained with the hepatorenal index method (0.959) and much higher than with the gray-level co-occurrence matrix algorithm (0.893). For regression, the Spearman correlation coefficients between the biopsy-graded steatosis level and the proposed approach, the hepatorenal index and the gray-level co-occurrence matrix algorithm were 0.78, 0.80 and 0.39, respectively.

Conclusions

The proposed approach may help sonographers automatically assess the amount of fat in the liver. The presented approach is efficient and, in comparison with the other methods, does not require the sonographer to select a region of interest.

17.
Objective: To build a deep learning (DL) model based on shear wave elastography (SWE) quantitative parameters and a convolutional neural network to predict kidney disease. Methods: Renal ultrasound SWE quantitative parameters were collected from 94 patients with kidney disease (case group) and 109 healthy subjects (control group). A DL model was built with a convolutional neural network, and its sensitivity, specificity, accuracy and area under the curve (AUC) for predicting kidney disease were compared with those of support vector machine and random forest models. Results: The DL model achieved a sensitivity of 90.48%, specificity of 100%, accuracy of 95.12% and AUC of 0.93; the corresponding values were 80.74%, 80.71%, 80.98% and 0.90 for the support vector machine model, and 82.22%, 77.87%, 80.33% and 0.88 for the random forest model. The sensitivity, specificity, accuracy and AUC of the DL model were all higher than those of the support vector machine and random forest models, and the differences in predicting kidney disease were statistically significant (all P<0.05). Conclusion: The DL model based on SWE quantitative parameters and a convolutional neural network predicts kidney disease well and has clinical value.

18.
Objective: Automatic recognition and classification of different planes in two-dimensional fetal ultrasound images is of great significance for improving physicians' work efficiency. Methods: To address the problems of traditional automatic classification methods, which require fine image segmentation before feature extraction and classification and are slow, this paper proposes an automatic recognition method for the transverse thalamic plane in two-dimensional fetal ultrasound images based on a deep convolutional neural network (CNN). After preprocessing the collected transverse thalamic plane images with image enhancement, an improved convolutional neural network algorithm is proposed. Results: The algorithm avoids complex preprocessing of the two-dimensional ultrasound images, can take the raw images directly as input, and shows strong adaptability and generalization ability. Experimental results show that the recognition accuracy reaches 94.81%. Conclusion: The proposed model provides a new reference for automatic recognition in medical imaging.

19.
ABSTRACT

In this study, Sentinel-2 optical satellite imagery was acquired over the Peace Athabasca Delta and assessed for its open-water classification capabilities using an object-oriented deep learning algorithm. The workflow involved segmenting the satellite data into meaningful image objects, building a Convolutional Neural Network (CNN), training the CNN, and lastly applying the CNN, resulting in probability heat maps of open water (with score values ranging from 0 to 1). Using the vector segmentation, the heat maps were then iteratively assigned final class labels ('open water' or 'other') based on various probability thresholds. The resulting open-water classifications were assessed against a large validation dataset, and a highest overall accuracy of 96.2% (kappa coefficient of 0.912) was achieved, with an open-water producer's accuracy of 98.1%. These results were then compared against a Random Forest (RF) classification, and the comparison indicated that the CNN algorithm outperforms RF in this study site. Additionally, an important component of this study was the optimization of several CNN configurations, including patch size and learning rate, the latter of which plays a critical role in model adaptation. The optimized object-oriented CNN and associated results can provide resource managers with accurate surface-water extent maps at 10 m resolution.
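The overall accuracy, kappa coefficient and producer's accuracy reported above follow standard confusion-matrix definitions. A small sketch of thresholding a probability heat map and scoring it against reference labels is shown below; the arrays are placeholders and this is not the study's workflow.

```python
import numpy as np

def classify_and_score(heat_map, truth, threshold=0.5):
    """Label open-water probabilities by thresholding, then score against truth labels."""
    pred = (heat_map >= threshold).astype(int)            # 1 = open water, 0 = other
    cm = np.zeros((2, 2), dtype=float)                    # rows: truth, cols: prediction
    for t, p in zip(truth.ravel(), pred.ravel()):
        cm[t, p] += 1
    overall = np.trace(cm) / cm.sum()                     # overall accuracy
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / cm.sum() ** 2
    kappa = (overall - expected) / (1.0 - expected)       # Cohen's kappa
    producers_water = cm[1, 1] / cm[1, :].sum()           # producer's accuracy, open water
    return overall, kappa, producers_water

# Placeholder heat map and reference labels
heat_map = np.random.rand(100, 100)
truth = np.random.randint(0, 2, size=(100, 100))
print(classify_and_score(heat_map, truth, threshold=0.5))
```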

20.
Objective: To establish a method combining surface-enhanced Raman scattering (SERS) and a convolutional neural network (CNN) for accurate identification of methicillin-resistant Staphylococcus aureus (MRSA) and methicillin-susceptible Staphylococcus aureus (MSSA). Methods: Positively charged silver nanoparticles (AgNPs+) were synthesized as the SERS substrate; SERS fingerprint spectra of MRSA and MSSA were measured to build a dataset, and a shallow one-dimensional CNN was constructed and trained on these data to produce an MRSA/MSSA binary classification model. Results: The synthesized AgNPs+ adhered tightly to the bacterial surface through electrostatic attraction and produced a strong SERS effect, with clear Raman peaks enhanced in seven bands including 654 and 731 cm⁻¹. The proposed SERS-CNN method achieved an accuracy above 94.5% for MRSA and MSSA detection, a reproducibility error below 5%, and a detection sensitivity of 10² cells/mL. Conclusion: The combined SERS and CNN method can be used for accurate detection of MRSA.
