Similar Articles
20 similar articles found (search time: 31 ms)
1.
Accurate detection of calcifications in breast tumors is of great significance in assisting doctors' diagnoses and improving the accuracy of early breast cancer detection. In this article, a multi-scale superpixel saliency detection algorithm based on simple linear iterative clustering (SLIC) is used to segment calcifications in breast tumor ultrasound images. First, a multi-scale saliency segmentation algorithm divides the tumor region at different scales, and weak calcifications are extracted according to the uneven gray distribution and texture contrast between regions. Second, based on single-scale superpixel segmentation of the original image, a strong-calcification extraction map is computed by measuring gray-value differences and calcification gray-distance features. Finally, the final calcification extraction map is obtained by combining the strong and weak calcification extraction maps. The proposed algorithm effectively detects calcifications in breast ultrasound images.
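The regional-contrast idea behind the weak-calcification step can be sketched in a few lines. This is a simplified stand-in (mean-intensity contrast only, uniform weighting) for the paper's multi-scale gray-distribution and texture measures; `region_saliency` is an illustrative name, not from the paper:

```python
import numpy as np

def region_saliency(image, labels):
    """Per-superpixel saliency as the average absolute mean-intensity
    contrast against every other region (a toy stand-in for the paper's
    gray-distribution and texture-contrast measures)."""
    regions = np.unique(labels)
    means = np.array([image[labels == r].mean() for r in regions])
    saliency = np.array([np.abs(means[i] - np.delete(means, i)).mean()
                         for i in range(len(regions))])
    return dict(zip(regions.tolist(), saliency.tolist()))

# Toy 3x3 image with three hand-made "superpixels"
img = np.array([[0.1, 0.1, 0.9],
                [0.1, 0.1, 0.9],
                [0.5, 0.5, 0.5]])
lab = np.array([[0, 0, 1],
                [0, 0, 1],
                [2, 2, 2]])
sal = region_saliency(img, lab)
```

The mid-gray region (label 2) scores lowest because it contrasts least with its neighbors, which is the intuition used to separate calcified from ordinary tissue regions.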

2.
Deep learning methods provide state-of-the-art performance for supervised medical image analysis. However, it is essential that trained models extract clinically relevant features for downstream tasks; otherwise, shortcut learning and generalization issues can occur. Furthermore, in the medical field, trustworthiness and transparency of deep learning systems are much-desired properties. In this paper, we propose an interpretability-guided inductive bias approach that enforces learned features to yield more distinctive and spatially consistent saliency maps for different class labels, leading to improved model performance. We achieve this by incorporating a class-distinctiveness loss and a spatial-consistency regularization term. Experimental results on medical image classification and segmentation tasks show that our approach outperforms conventional methods while yielding saliency maps in higher agreement with clinical experts. Additionally, we show how information from unlabeled images can be used to further boost performance. In summary, the proposed approach is modular, applicable to existing network architectures used for medical imaging, and yields improved learning rates, model robustness, and model interpretability.

3.
Segmentation of lung pathologies in Computed Tomography (CT) images is of great importance for lung disease screening. However, the presence of different types of lung pathologies with a wide range of heterogeneity in size, shape, location, and texture, together with their visual similarity to surrounding tissues, makes reliable automatic lesion segmentation challenging. To improve segmentation performance, we propose a deep learning framework comprising a Normal Appearance Autoencoder (NAA) model that learns the distribution of healthy lung regions and reconstructs pathology-free images from the corresponding pathological inputs by replacing the pathological regions with the characteristics of healthy tissues. The detected regions, which represent prior information on the shape and location of pathologies, are then fed into a segmentation network to guide the model's attention toward more meaningful delineations. The proposed pipeline was tested on three types of lung pathologies, pulmonary nodules, Non-Small Cell Lung Cancer (NSCLC), and COVID-19 lesions, using five comprehensive datasets. The results show the superiority of the proposed prior model, which outperformed the baseline segmentation models in all cases by significant margins. On average, adding the prior model improved the Dice coefficient by 0.038 for lung nodule segmentation, 0.101 for NSCLC, and 0.041 for COVID-19 lesions. We conclude that the proposed NAA model produces reliable prior knowledge of lung pathologies, and integrating this knowledge into a prior segmentation network leads to more accurate delineations.
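The Dice gains reported above use the standard Dice similarity coefficient; for binary masks it can be sketched as follows (a minimal NumPy version, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), with eps guarding empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
b = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
d = dice_coefficient(a, b)  # 2*2 / (3+2) = 0.8
```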

4.
Automated segmentation of pancreatic cancer is vital for clinical diagnosis and treatment. However, the small size and inconspicuous boundaries of the tumor limit segmentation performance, a problem further exacerbated for deep learning techniques by the scarcity of training samples due to the high cost of image acquisition and annotation. To alleviate this small-dataset issue, we collect idle multi-parametric MRIs of pancreatic cancer from different studies to construct a relatively large dataset for enhancing CT pancreatic cancer segmentation, and propose a deep learning segmentation model with a dual meta-learning framework. It integrates common tumor knowledge obtained from the idle MRIs with salient knowledge from CT images, making high-level features more discriminative. Specifically, random intermediate modalities between MRI and CT are first generated to smoothly fill the gap in visual appearance and provide rich intermediate representations for the ensuing meta-learning scheme. Subsequently, we employ intermediate-modality-based model-agnostic meta-learning to capture and transfer commonalities. Finally, a meta-optimizer adaptively learns the salient features within the CT data, alleviating interference from internal differences. Comprehensive experiments demonstrate that our method achieves promising segmentation performance, with a maximum Dice score of 64.94% on our private dataset, and outperforms state-of-the-art methods on a public pancreatic cancer CT dataset. The proposed framework can be easily integrated into other segmentation networks and thus promises to be a paradigm for alleviating data scarcity using idle data.

5.
Breast cancer is one of the most common causes of death among women worldwide. Early signs of breast cancer can appear as abnormalities on breast images (e.g., mammography or breast ultrasonography), but reliable interpretation of breast images requires intensive labor and physicians with extensive experience. Deep learning is transforming breast imaging diagnosis by offering physicians a second opinion. However, most deep learning-based breast cancer analysis algorithms lack interpretability because of their black-box nature, meaning that domain experts cannot understand why the algorithms predict a given label. In addition, most deep learning algorithms are formulated as single-task models that ignore correlations between different tasks (e.g., tumor classification and segmentation). In this paper, we propose an interpretable multitask information bottleneck network (MIB-Net) that performs breast tumor classification and segmentation simultaneously. MIB-Net maximizes the mutual information between the latent representations and the class labels while minimizing the information shared by the latent representations and the inputs. In contrast to existing models, MIB-Net generates a contribution score map that offers an interpretable aid for physicians to understand the model's decision-making process. In addition, MIB-Net adopts multitask learning and a dual prior-knowledge guidance strategy to strengthen the correlation between the tasks. Evaluations on three breast image datasets of different modalities show that the proposed framework not only helps physicians better understand the model's decisions but also improves breast tumor classification and segmentation accuracy over representative state-of-the-art models. Our code is available at https://github.com/jxw0810/MIB-Net.
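The mutual-information trade-off described above is the standard information bottleneck objective; in its usual Lagrangian form (the paper's exact weighting and estimators may differ):

```latex
\max_{\theta}\; I(Z; Y) \;-\; \beta\, I(Z; X)
```

where \(X\) is the input image, \(Y\) the class label, \(Z\) the latent representation, and \(\beta > 0\) controls how strongly information about the input is compressed away while label-relevant information is retained.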

6.
Breast tumor segmentation is an important step in the diagnostic procedures of physicians and computer-aided diagnosis systems. We propose a two-step deep learning framework for breast tumor segmentation in breast ultrasound (BUS) images that requires only a few manual labels. The first step, breast anatomy decomposition, is handled by a semi-supervised semantic segmentation technique: the input BUS image is decomposed into four breast anatomical structures, namely the fat, mammary gland, muscle, and thorax layers. The fat and mammary gland layers serve as a constrained region that reduces the search space for breast tumor segmentation. The second step, breast tumor segmentation, is performed in a weakly supervised scenario where only image-level labels are available: breast tumors are first recognized by a classification network and then segmented by the proposed class activation mapping and deep level set (CAM-DLS) method. For breast anatomy decomposition, the framework achieves Dice similarity coefficients (DSC) of 83.0 ± 11.8%, 84.3 ± 10.0%, 80.7 ± 15.4%, and 91.0 ± 11.4% for the fat, mammary gland, muscle, and thorax layers, respectively. For breast tumor recognition, it achieves a sensitivity of 95.8%, precision of 92.4%, specificity of 93.9%, accuracy of 94.8%, and F1-score of 0.941. For breast tumor segmentation, it achieves a DSC of 77.3% and an intersection-over-union (IoU) of 66.0%. In conclusion, the proposed framework can efficiently perform breast tumor recognition and segmentation simultaneously in a weakly supervised setting with anatomical constraints.
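The CAM half of CAM-DLS builds on the standard class activation mapping formulation: a weighted sum of the final convolutional feature maps using the classifier weights for the target class. A minimal sketch with toy tensors, not the authors' implementation:

```python
import numpy as np

def class_activation_map(features, weights):
    """Standard CAM: weight each channel of the last conv feature maps by
    the classifier weight for the target class, sum over channels, keep
    positive evidence (ReLU), and min-max normalize to [0, 1]."""
    # features: (C, H, W); weights: (C,) classifier weights for one class
    cam = np.tensordot(weights, features, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0)
    if cam.max() > 0:
        cam = (cam - cam.min()) / (cam.max() - cam.min())
    return cam

feats = np.stack([np.eye(3), np.ones((3, 3))])  # two toy channels
w = np.array([1.0, 0.5])
cam = class_activation_map(feats, w)
```

In the paper's pipeline such a map seeds the deep level set refinement; here the diagonal (where the first channel fires) comes out at 1.0 and the rest at 0.0.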

7.
Although deep learning models such as CNNs have achieved great success in medical image analysis, the small size of medical datasets remains a major bottleneck in this area. To address this problem, researchers have started looking for external information beyond the currently available medical datasets. Traditional approaches generally leverage information from natural images via transfer learning. More recent works utilize domain knowledge from medical doctors to create networks that resemble how medical doctors are trained, mimic their diagnostic patterns, or focus on the features or areas to which they pay particular attention. In this survey, we summarize the current progress on integrating medical domain knowledge into deep learning models for various tasks, such as disease diagnosis; lesion, organ, and abnormality detection; and lesion and organ segmentation. For each task, we systematically categorize the kinds of medical domain knowledge that have been utilized and the corresponding integration methods. We also discuss current challenges and directions for future research.

8.
Objective: To extract the edges of tumors in breast ultrasound images. Methods: Because medical ultrasound images have a low signal-to-noise ratio, classical edge-detection algorithms cannot produce good results; the Snake model, an effective contour-extraction algorithm based on high-level information, has therefore attracted wide attention. Building on the original Snake model, we made several improvements tailored to the characteristics of ultrasound images. Results: Fifteen breast ultrasound images were collected from Shanghai Sixth People's Hospital. After a series of preprocessing steps, including gray-level segmentation and morphological filtering, the improved Snake model was applied to edge extraction and produced good segmentation results on all fifteen images. Conclusion: The improved Snake model can extract tumor edges from breast ultrasound images well, providing an important basis for computer-aided diagnosis of breast tumors.

9.
The interpretation of medical images is a complex cognitive procedure requiring careful observation, precise understanding and parsing of normal body anatomy, and knowledge of physiology and pathology. Interpreting chest X-ray (CXR) images is particularly challenging, since 2D CXR images show internal organs and tissues superimposed, with low resolution and poor boundaries. Unlike previous CXR computer-aided diagnosis works that focused on disease diagnosis and classification, we first propose a deep disentangled generative model (DGM) that simultaneously generates abnormal disease residue maps and "radiorealistic" normal CXR images from an input abnormal CXR image. Our method is based on the assumption that disease regions usually superimpose upon or replace the pixels of normal tissues in an abnormal CXR; disease regions can thus be disentangled from the abnormal CXR by comparing it with a generated patient-specific normal CXR. DGM consists of three encoder-decoder branches: one for radiorealistic normal CXR image synthesis using adversarial learning, one for disease separation by generating a residue map that delineates the underlying abnormal region, and a third for facilitating the training process and enhancing the model's robustness to noisy data. A self-reconstruction loss is adopted in the first two branches to enforce that the generated normal CXR image preserves visual structures similar to the original CXR. We evaluated our model on a large-scale chest X-ray dataset. The results show that our model can generate disease residue/saliency maps coherent with radiologist annotations, along with radiorealistic and patient-specific normal CXR images. The disease residue/saliency map can help radiologists improve CXR reading efficiency in clinical practice, and the synthesized normal CXR can be used for data augmentation and as a normal control in personalized longitudinal disease studies. Furthermore, DGM quantitatively boosts diagnosis performance on several important clinical applications, including normal/abnormal CXR classification and lung opacity classification/detection.

10.
Deep learning-based breast lesion detection in ultrasound images has demonstrated great potential to provide objective suggestions for radiologists and improve their accuracy in diagnosing breast diseases. However, the lack of an effective feature-enhancement approach limits the performance of deep learning models. Therefore, in this study, we propose a novel dual global attention neural network (DGANet) to improve the accuracy of breast lesion detection in ultrasound images. Specifically, we designed a bilateral spatial attention module and a global channel attention module to enhance features in the spatial and channel dimensions, respectively. The bilateral spatial attention module enhances features by capturing supporting information in regions neighboring breast lesions and reducing the integration of noise signals. The global channel attention module enhances the features of important channels by weighted calculation, where the weights are determined by the learned interdependencies among all channels. To verify the performance of DGANet, we conduct breast lesion detection experiments on our collected dataset of 7040 ultrasound images and a public breast ultrasound image dataset, using YOLOv3, RetinaNet, Faster R-CNN, YOLOv5, and YOLOX as comparison models. The results indicate that DGANet outperforms the comparison methods by 0.2%–5.9% in total mean average precision.
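A global channel attention module of the kind described (per-channel weights derived from learned inter-channel dependencies) is commonly realized in squeeze-and-excitation style. A minimal NumPy sketch under that assumption, with random matrices standing in for learned parameters:

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation style channel attention: global average
    pooling -> two-layer bottleneck (ReLU) -> sigmoid gates that rescale
    each channel. w1, w2 stand in for learned weight matrices."""
    # features: (C, H, W)
    squeeze = features.mean(axis=(1, 2))             # (C,) global context
    hidden = np.maximum(w1 @ squeeze, 0)             # bottleneck, ReLU
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # (C,) in (0, 1)
    return features * gates[:, None, None]

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4))   # reduce C=4 to C/2=2
w2 = rng.standard_normal((4, 2))   # expand back to C=4
out = channel_attention(feats, w1, w2)
```

Because the gates are sigmoids, each channel is attenuated by a factor in (0, 1); important channels (as judged by the learned weights) are suppressed the least.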

11.
Automatic segmentation of pathology images is an important component of computer-aided diagnosis; it can reduce pathologists' workload and improve diagnostic efficiency and accuracy. This paper presents a semi-supervised pathology image segmentation method combined with multi-task learning. The method performs cancer-region segmentation and classification simultaneously in a semi-supervised manner: the segmentation network is first trained on a very small number of pixel-level annotated images, and image-level annotated images are then incorporated to carry out segmentation and classification jointly. During training, the two tasks alternate iteratively to optimize the network parameters, reducing the deep learning model's dependence on image annotations. On this basis, the model introduces a dynamically weighted cross-entropy loss that uses the classification prediction probabilities to automatically assign a weight to each pixel, increasing the segmentation network's attention to target regions with low predicted probabilities. This strategy effectively preserves the details of cancer regions and, as validated experimentally, yields good cancer-region segmentation results on breast cancer pathology images even when pixel-level annotations are scarce.
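One plausible instantiation of the dynamically weighted cross-entropy described above is to weight each pixel by how unconfident the model is about its true class; the paper's exact weighting scheme may differ, and `dynamic_weighted_ce` is an illustrative name:

```python
import numpy as np

def dynamic_weighted_ce(probs, targets, eps=1e-7):
    """Per-pixel cross-entropy weighted by (1 - p_true): pixels whose true
    class receives a low predicted probability contribute more to the
    loss (a sketch of the paper's dynamic weighting idea)."""
    # probs: (H, W) predicted foreground probability; targets: (H, W) in {0, 1}
    p_true = np.where(targets == 1, probs, 1.0 - probs)
    weights = 1.0 - p_true              # low confidence -> high weight
    ce = -np.log(p_true + eps)
    return float((weights * ce).mean())

probs = np.array([[0.9, 0.2], [0.7, 0.6]])
targets = np.array([[1, 1], [0, 0]])
loss = dynamic_weighted_ce(probs, targets)
```

A nearly perfect prediction yields a loss close to zero, while poorly predicted pixels (like the 0.2 foreground pixel here) dominate the average, which is the intended focusing effect.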

12.
In recent years, deep learning technology has shown superior performance in many areas of medical image analysis, and several deep learning architectures have been proposed for computational pathology classification, segmentation, and detection tasks. Due to their simple, modular structure, most downstream applications still use ResNet and its variants as the backbone network. This paper proposes a modular group attention block that captures feature dependencies in medical images along two independent dimensions: channel and space. By stacking these group attention blocks in ResNet style, we obtain a new ResNet variant called ResGANet. The stacked ResGANet architecture has 1.51–3.47 times fewer parameters than the original ResNet and can be used directly for downstream medical image segmentation tasks. Extensive experiments show that ResGANet is superior to state-of-the-art backbone models in medical image classification tasks, and applying it to different segmentation networks improves the baseline models in medical image segmentation tasks without changing the network architecture. We hope this work provides a promising method for enhancing the feature representation of convolutional neural networks (CNNs) in the future.

13.
Automated whole-breast ultrasound (AWBUS) is a new breast imaging technique that can depict the whole breast anatomy. To facilitate the reading of AWBUS images and support breast density estimation, an automatic breast anatomy segmentation method for AWBUS images is proposed in this study. The problem is quite challenging, as it must address low image quality, ill-defined boundaries, large anatomical variation, and so on. To address these issues, a new deep learning encoder-decoder segmentation method based on a self-co-attention mechanism is developed. The self-attention mechanism comprises a spatial and channel attention module (SC) embedded in the ResNeXt (i.e., Res-SC) block in the encoder path, and a non-local context block (NCB) is further incorporated to augment the learning of high-level contextual cues. The decoder path is equipped with a weighted up-sampling block (WUB) to attain a better class-specific up-sampling effect. Meanwhile, a co-attention mechanism is developed to improve segmentation coherence between consecutive slices. Extensive experiments with comparisons to several state-of-the-art deep learning segmentation methods corroborate the effectiveness of the proposed method on the difficult problem of breast anatomy segmentation in AWBUS images.

14.
Accurate 3D segmentation of calf muscle compartments in volumetric MR images is essential to diagnose as well as assess progression of muscular diseases. Recently, good segmentation performance was achieved using state-of-the-art deep learning approaches, which, however, require large amounts of annotated data for training. Considering that obtaining sufficiently large medical image annotation datasets is often difficult, time-consuming, and requires expert knowledge, minimizing the necessary sizes of expert-annotated training datasets is of great importance. This paper reports CMC-Net, a new deep learning framework for calf muscle compartment segmentation in 3D MR images that selects an effective small subset of 2D slices from the 3D images to be labelled, while also utilizing unannotated slices to facilitate proper generalization of the subsequent training steps. Our model consists of three parts: (1) an unsupervised method to select the most representative 2D slices on which expert annotation is performed; (2) ensemble model training employing these annotated as well as additional unannotated 2D slices; (3) a model-tuning method using pseudo-labels generated by the ensemble model that results in a trained deep network capable of accurate 3D segmentations. Experiments on segmentation of calf muscle compartments in 3D MR images show that our new approach achieves good performance with very small annotation ratios, and when utilizing full annotation, it outperforms state-of-the-art full annotation segmentation methods. Additional experiments on a 3D MR thigh dataset further verify the ability of our method in segmenting leg muscle groups with sparse annotation.
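The pseudo-labeling step (part 3) can be sketched as confidence-thresholded ensemble averaging: average the class-probability maps of the ensemble members and keep only pixels the ensemble is confident about. This is a generic sketch, not the authors' exact procedure, and the 0.9 threshold is an assumption:

```python
import numpy as np

def ensemble_pseudo_labels(prob_maps, threshold=0.9):
    """Average softmax maps from an ensemble of models and keep only
    confident pixels as pseudo-labels; unconfident pixels are marked -1
    (an 'ignore' index for subsequent training)."""
    # prob_maps: (M, K, H, W) = M models, K classes
    mean_probs = prob_maps.mean(axis=0)       # (K, H, W)
    labels = mean_probs.argmax(axis=0)        # (H, W)
    confidence = mean_probs.max(axis=0)
    labels[confidence < threshold] = -1
    return labels

# Two toy models, two classes, one row of two pixels
m1 = np.array([[[0.95, 0.40]], [[0.05, 0.60]]])   # (K, H, W)
m2 = np.array([[[0.97, 0.60]], [[0.03, 0.40]]])
pl = ensemble_pseudo_labels(np.stack([m1, m2]))
```

The first pixel (ensemble mean 0.96 for class 0) becomes a pseudo-label; the second, where the models disagree, is ignored.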

15.
Because of its complicated structure, low signal-to-noise ratio, low contrast, and blurry boundaries, fully automatic segmentation of a breast ultrasound (BUS) image is a difficult task. In this paper, a novel segmentation method for BUS images without human intervention is proposed. Unlike most published approaches, the proposed method handles the segmentation problem with a two-step strategy: ROI generation and ROI segmentation. First, a well-trained texture classifier categorizes the tissues into different classes, and background knowledge rules are used to select the regions of interest (ROIs). Second, a novel probability-distance-based active contour model is applied to segment the ROIs and find the accurate positions of the breast tumors. The active contour model combines global statistical information and local edge information using a level set approach. The proposed method was evaluated on 103 BUS images (48 benign and 55 malignant), and the results were compared with the corresponding tumor regions marked by an experienced radiologist. Three error metrics were used to measure performance: the true-positive ratio (TP), false-negative ratio (FN), and false-positive ratio (FP). The final results (TP = 91.31%, FN = 8.69%, and FP = 7.26%) demonstrate that the proposed method can segment BUS images efficiently, quickly, and automatically.
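The three ratios reported above are commonly defined in the BUS segmentation literature relative to the ground-truth region, so TP + FN = 1 (the exact definitions in the paper may vary slightly); a minimal NumPy sketch:

```python
import numpy as np

def overlap_ratios(pred, gt):
    """TP, FN, FP ratios relative to the ground-truth region:
    TP = |A ∩ G| / |G|, FN = 1 - TP, FP = |A \\ G| / |G|."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    g = gt.sum()
    tp = np.logical_and(pred, gt).sum() / g
    fn = 1.0 - tp
    fp = np.logical_and(pred, ~gt).sum() / g
    return tp, fn, fp

pred = np.array([[1, 1, 0], [1, 0, 0]])
gt = np.array([[1, 1, 1], [0, 0, 0]])
tp, fn, fp = overlap_ratios(pred, gt)
```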

16.
Medical images are an important basis for radiologists' diagnoses. With the rapid development of medical imaging technology, however, the growing number of images and their increasingly complex information pose an enormous challenge to physicians. Deep learning, the most active area of artificial intelligence research, has advantages in handling big data and extracting useful information, and has therefore become the method of choice for analyzing medical images. This article explains the concept of deep learning and briefly summarizes the models commonly used in medical imaging, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), deep belief networks (DBNs), and autoencoders. The basic structure of a CNN comprises convolutional, pooling, and fully connected layers; an RNN consists of input, hidden, and output layers; a DBN is built on Boltzmann machines; and an autoencoder contains encoding, hidden, and decoding layers. Classification of lung nodules on CT and brain diseases on MRI shows that deep learning currently achieves high accuracy in automatic disease classification; segmentation of the left ventricle, paraspinal muscles, and liver shows that deep learning segmentation is consistent with manual segmentation; and deep learning detection of lung nodules and breast cancer is already relatively mature. To date, however, the problems of small annotated sample sizes and overfitting remain, which shared image databases may help to solve. In summary, deep learning has broad prospects in medical imaging and is of great significance to clinicians' work.

17.
Automatic breast lesion segmentation in ultrasound helps to diagnose breast cancer, one of the most dreadful diseases affecting women globally. Accurately segmenting breast lesions from ultrasound images is challenging due to inherent speckle artifacts, blurry lesion boundaries, and inhomogeneous intensity distributions inside the lesion regions. Recently, convolutional neural networks (CNNs) have demonstrated remarkable results in medical image segmentation tasks. However, the convolutional operations in a CNN focus on local regions and thus have limited ability to capture long-range dependencies of the input ultrasound image, degrading breast lesion segmentation accuracy. In this paper, we develop a deep convolutional neural network equipped with a global guidance block (GGB) and breast lesion boundary detection (BD) modules to boost breast ultrasound lesion segmentation. The GGB utilizes the multi-layer integrated feature map as guidance information to learn long-range non-local dependencies in both the spatial and channel domains. The BD modules learn an additional breast lesion boundary map to refine the boundary quality of the segmentation results. Experimental results on a public dataset and a collected dataset show that our network outperforms other medical image segmentation methods and recent semantic segmentation methods on breast ultrasound lesion segmentation. Moreover, when applied to ultrasound prostate segmentation, our method identifies prostate regions better than state-of-the-art networks.

18.
Automated medical report generation in spine radiology, i.e., taking spinal medical images and directly creating radiologist-level diagnosis reports to support clinical decision making, is a novel yet fundamental problem for artificial intelligence in healthcare. It is incredibly challenging because it is an extremely complicated task involving visual perception and high-level reasoning. In this paper, we propose a neural-symbolic learning (NSL) framework that performs human-like learning by unifying deep neural learning and symbolic logical reasoning for spinal medical report generation. The NSL framework first employs deep neural learning to imitate human visual perception for detecting abnormalities of target spinal structures: we design an adversarial graph network that interpolates a symbolic graph reasoning module into a generative adversarial network by embedding prior domain knowledge, achieving semantic segmentation of spinal structures with high complexity and variability. NSL then conducts human-like symbolic logical reasoning, realizing unsupervised causal-effect analysis of the detected abnormality entities through meta-interpretive learning. Finally, NSL fills these findings into a unified template, achieving comprehensive medical report generation. A series of empirical studies on a real-world clinical dataset demonstrates its capacity for spinal medical report generation and shows that our algorithm remarkably exceeds existing methods in the detection of spinal structures. These results indicate its potential as a clinical tool for computer-aided diagnosis.

19.
Automatic fetal brain tissue segmentation can enhance the quantitative assessment of brain development at this critical stage. Deep learning methods represent the state of the art in medical image segmentation and have also achieved impressive results in brain segmentation. However, effectively training a deep learning model for this task requires a large number of training images to represent the rapid development of the transient fetal brain structures, while manual multi-label segmentation of a large number of 3D images is prohibitive. To address this challenge, we segmented 272 training images, covering 19–39 gestational weeks, using an automatic multi-atlas segmentation strategy based on deformable registration and probabilistic atlas fusion, and manually corrected large errors in those segmentations. Since this process generated a large training dataset with noisy segmentations, we developed a novel label-smoothing procedure and a loss function to train a deep learning model with smoothed noisy segmentations. Our proposed methods properly account for the uncertainty in tissue boundaries. We evaluated our method on 23 manually segmented test images from a separate set of fetuses. Results show that our method achieves average Dice similarity coefficients of 0.893 and 0.916 for the transient structures of younger and older fetuses, respectively. Our method generated results that were significantly more accurate than several state-of-the-art methods, including nnU-Net, which achieved the closest results to ours. Our trained model can serve as a valuable tool to enhance the accuracy and reproducibility of fetal brain analysis in MRI.
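For reference, the plain uniform form of label smoothing on per-pixel one-hot targets looks like this (the paper's procedure is boundary-aware and more elaborate; this sketch shows only the uniform variant):

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Uniform label smoothing: replace each one-hot target with
    (1 - eps) * onehot + eps / K, so hard 0/1 targets become soft
    distributions that tolerate annotation noise."""
    onehot = np.eye(num_classes)[labels]      # (..., K)
    return (1.0 - eps) * onehot + eps / num_classes

seg = np.array([[0, 1], [2, 1]])              # toy 2x2 label map, K = 3
soft = smooth_labels(seg, num_classes=3)
```

Each pixel's target remains a valid probability distribution (rows sum to 1), with the true class at 1 − eps + eps/K and every other class at eps/K.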

20.
Objective: To explore a method for building a deep learning model based on dynamic continuous breast ultrasound imaging and to preliminarily validate its performance. Methods: Breast ultrasound scans were performed on 506 women, and the real-time dynamic images were stored and imported into the Deepwise intelligent image-analysis platform. An end-to-end deep learning mass-detection network analyzed the original dynamic image sequences, an optimized deep learning model was trained, and its performance was tested and validated; statistical analysis was performed with Python 3.6. Results: The mass-detection sensitivity on single-frame breast ultrasound images was 76.6%, 84.2%, and 86.0% at 0.1, 0.2, and 0.5 false positives per scan, respectively, versus 77.3%, 91.8%, and 95.3% on sequential images. The difference between single-frame and sequential detection was not statistically significant at 0.1 false positives per scan (P > 0.05) but was statistically significant at 0.2 and 0.5 false positives per scan (P < 0.05). Conclusion: A deep learning model based on dynamic continuous breast ultrasound imaging can improve the mass detection rate in breast ultrasound images.
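Sensitivity at a fixed false-positive-per-scan budget (the FROC-style operating points quoted above) can be sketched by sweeping the detection score threshold downward until the false-positive budget is exhausted. This is a simplified illustration of the metric, not the study's evaluation code:

```python
def sensitivity_at_fp_rate(detections, n_lesions, n_scans, fp_per_scan):
    """Fraction of lesions detected at a given false-positive budget.
    detections: list of (score, is_true_positive) pooled over all scans;
    the threshold is lowered until one more FP would exceed the budget."""
    budget = fp_per_scan * n_scans
    fps, tps_found = 0, 0
    for score, is_tp in sorted(detections, reverse=True):
        if is_tp:
            tps_found += 1
        else:
            fps += 1
            if fps > budget:
                break
    return tps_found / n_lesions

dets = [(0.9, True), (0.8, False), (0.7, True), (0.6, False), (0.5, True)]
sens = sensitivity_at_fp_rate(dets, n_lesions=4, n_scans=10, fp_per_scan=0.1)
```

With a budget of 0.1 FP per scan over 10 scans (1 FP total), the sweep stops before the second false positive, having found 2 of 4 lesions.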
