Similar Documents
 19 similar documents retrieved (search time: 62 ms)
1.
Automatic Segmentation of Weak-Edge Medical Ultrasound Images Based on Threshold Segmentation and the Snake Model   Cited by: 1 (self-citations: 1; other citations: 0)
Medical ultrasound image segmentation is a key technique in image processing. Taking gallstone ultrasound images as an example, this paper presents a new automatic segmentation algorithm for ultrasound images with weak edges. First, a thresholding method based on histogram concavity analysis determines the initial contour for the Snake model; the target is then segmented with the Snake model combined with the greedy algorithm. Experimental results show that when segmenting targets in medical ultrasound images with pronounced weak edges, the algorithm localizes accurately and segments well, providing a fully automatic segmentation method for medical ultrasound images.
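The greedy-algorithm Snake update mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a precomputed edge-strength map, and the function name and weights `alpha`, `beta`, `gamma` are illustrative.

```python
def greedy_snake_step(points, edge, alpha=1.0, beta=1.0, gamma=1.0):
    """One greedy iteration: move each control point (y, x) to the
    3x3 neighbour minimising
    E = alpha*continuity + beta*curvature - gamma*edge_strength."""
    n = len(points)
    # mean spacing, used by the continuity term to keep points evenly spread
    d_bar = sum(
        ((points[i][0] - points[i - 1][0]) ** 2 +
         (points[i][1] - points[i - 1][1]) ** 2) ** 0.5
        for i in range(n)
    ) / n
    new_points = []
    for i in range(n):
        prev, nxt = points[i - 1], points[(i + 1) % n]  # closed contour
        best, best_e = points[i], float("inf")
        y0, x0 = points[i]
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                y, x = y0 + dy, x0 + dx
                if not (0 <= y < len(edge) and 0 <= x < len(edge[0])):
                    continue
                dist = ((y - prev[0]) ** 2 + (x - prev[1]) ** 2) ** 0.5
                cont = (d_bar - dist) ** 2                    # continuity
                curv = ((prev[0] - 2 * y + nxt[0]) ** 2 +
                        (prev[1] - 2 * x + nxt[1]) ** 2)      # curvature
                e = alpha * cont + beta * curv - gamma * edge[y][x]
                if e < best_e:
                    best_e, best = e, (y, x)
        new_points.append(best)
    return new_points
```

In the full method, the initial `points` would come from the histogram-concavity thresholding step rather than being chosen by hand.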

2.
Research Methods and Applications of Image Segmentation in Medical Images   Cited by: 6 (self-citations: 1; other citations: 6)
Image segmentation decomposes an image into a set of non-overlapping regions and is one of the fundamental problems in image processing and machine vision. Medical image segmentation is an important application area of image segmentation and a classic hard problem. From an application-oriented perspective, this paper surveys new and improved medical image segmentation algorithms from recent years and briefly discusses the characteristics and applications of each class of methods.

3.
Segmentation of the uterus and uterine fibroids is key to fibroid treatment. At present most of this segmentation is still done manually by physicians; to improve efficiency and segmentation accuracy, automatic segmentation techniques are increasingly studied to reduce physicians' workload. To survey the state of this research, this paper reviews recent image segmentation methods for the uterus and uterine fibroids, covering traditional approaches such as clustering and level sets as well as the latest deep learning methods, and concludes with an outlook on the prospects of automatic segmentation of the uterus and uterine fibroids.

4.
A Brightness-Preserving Contrast Enhancement Method for Medical Ultrasound Images   Cited by: 4 (self-citations: 5; other citations: 4)
Objective: To develop a brightness-preserving contrast enhancement method for medical ultrasound images. Methods: Building on an improved conventional histogram equalization algorithm and on minimum-mean-brightness-error bi-histogram equalization, a second effective split point is introduced. Results: A local histogram equalization algorithm, brightness-preserving tri-level bidirectional histogram equalization, is realized. Conclusion: Compared with other histogram enhancement methods, the algorithm better suppresses over-enhancement of the black background in medical ultrasound images, retains the detail of the original image, preserves its mean brightness, and improves contrast.
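The bi-histogram equalization that the tri-level method above extends can be sketched as follows. This is a simplified illustration of the classical brightness-preserving idea (split the histogram at the mean grey level and equalize each half independently), with illustrative ranges and names, not the paper's tri-level bidirectional algorithm.

```python
def bbhe(pixels, levels=256):
    """Brightness-preserving Bi-Histogram Equalization sketch: split at
    the mean grey level, then equalize the two sub-histograms onto
    disjoint output ranges so the overall brightness stays close to
    the input mean."""
    n = len(pixels)
    mean = sum(pixels) // n  # split point between the two sub-histograms
    lower = [p for p in pixels if p <= mean]
    upper = [p for p in pixels if p > mean]

    def equalize(sub, lo, hi):
        # map the sub-histogram onto [lo, hi] via its own CDF
        hist = [0] * levels
        for p in sub:
            hist[p] += 1
        cdf, total, lut = 0, len(sub), {}
        for g in range(levels):
            cdf += hist[g]
            if hist[g]:
                lut[g] = lo + (hi - lo) * cdf // total
        return lut

    lut = equalize(lower, 0, mean)
    lut.update(equalize(upper, mean + 1, levels - 1))
    return [lut[p] for p in pixels]
```

The tri-level variant in the abstract would add a second split point and a third sub-histogram; the per-segment equalization step stays the same.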

5.
Background: The complexity of human anatomy, the irregular shapes of tissues and organs, and inter-individual differences make medical images well suited to multifractal analysis. Objective: To segment medical images using multifractal theory. Methods: Multifractal spectra were computed with both a capacity-measure-based method and a probability-measure-based method. Each test image was segmented in four ways: traditional region growing, max capacity-measure segmentation, sum capacity-measure segmentation, and probability-measure segmentation; the same segmentations were repeated after adding noise for comparison. Results and conclusion: For the capacity-measure method the key is defining a suitable measure μα; for the probability-measure method the key is defining a suitable normalized probability Pi. Different measures (probabilities) and thresholds have a considerable effect on the results. The probability-measure method is sensitive to noise, but after noise filtering it segments complex images with large variations in pixel values relatively well. Experiments show that multifractal-spectrum-based medical image segmentation is feasible when a suitable measure (probability) and threshold are chosen; it is particularly advantageous for distinguishing texture from edges in complex images, preserving more detail while segmenting accurately, which is of practical significance. Multifractals can also serve as an image feature, providing additional useful data for feature extraction.
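The capacity-measure step of such a method can be sketched as follows: for each pixel, take the maximum intensity over boxes of growing size (a "max" capacity measure) and estimate a local Hölder exponent as the log-log slope. This is a minimal sketch; the scale choices are illustrative, and as the abstract notes, the actual measure and threshold are the critical design choices.

```python
import math

def holder_exponents(image, scales=(1, 3, 5)):
    """For each pixel, mu(s) = max intensity in the (s x s) box around
    it; the local Holder exponent alpha is the least-squares slope of
    log mu versus log s. Pixels would then be classified (e.g. edge
    vs. smooth region) by thresholding alpha."""
    h, w = len(image), len(image[0])
    alphas = [[0.0] * w for _ in range(h)]
    xs = [math.log(s) for s in scales]
    for y in range(h):
        for x in range(w):
            ys = []
            for s in scales:
                r = s // 2
                mu = max(
                    image[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))
                )
                ys.append(math.log(max(mu, 1e-12)))  # guard against log(0)
            # least-squares slope of log mu against log s
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            alphas[y][x] = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
                            / sum((a - mx) ** 2 for a in xs))
    return alphas
```

On a flat region the measure does not grow with box size, so alpha is 0; near a bright structure the max-measure grows with the box, giving a positive exponent.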

6.
Background: Three-dimensional reconstruction of medical images plays an increasingly important role in diagnosis and experimental analysis. It is a complex task in which segmentation of the target image is the first and most important step. Objective: To explore segmentation and 3D reconstruction of carotid MR images and the application of the 3D model to locating carotid plaques. Methods: 3D TOF sequence images were segmented by thresholding based on the maximum-entropy principle and compared with conventional methods; the carotid artery was then extracted by mathematical-morphology segmentation, followed by 3D reconstruction and preliminary plaque localization using the 3D model. Results and conclusion: Maximum-entropy thresholding is well suited to segmenting carotid 3D TOF sequence images, and subsequent mathematical-morphology segmentation yields the target image. The reconstructed 3D model assists plaque localization.
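In its classic form (Kapur's method), the maximum-entropy thresholding used above picks the threshold that maximizes the combined entropy of the background and object grey-level distributions. A minimal sketch:

```python
import math

def max_entropy_threshold(hist):
    """Kapur-style maximum-entropy threshold: choose t maximising the
    sum of the entropies of the background (<= t) and object (> t)
    grey-level distributions."""
    total = sum(hist)
    probs = [h / total for h in hist]
    best_t, best_h = 0, -1.0
    for t in range(len(hist) - 1):
        p_bg = sum(probs[: t + 1])
        p_fg = 1.0 - p_bg
        if p_bg <= 0 or p_fg <= 0:
            continue  # one class would be empty
        h_bg = -sum(p / p_bg * math.log(p / p_bg)
                    for p in probs[: t + 1] if p > 0)
        h_fg = -sum(p / p_fg * math.log(p / p_fg)
                    for p in probs[t + 1:] if p > 0)
        if h_bg + h_fg > best_h:
            best_h, best_t = h_bg + h_fg, t
    return best_t
```

For a bimodal histogram the maximizer falls between the two modes, which is what makes the method attractive for TOF images with bright vessels on a darker background.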

7.
Objective: With medical image data growing rapidly, to build an algorithm that automatically segments specified anatomical structures from medical images. Methods: First, an acquired brain image volume is registered to a reference volume so that corresponding slices contain anatomy similar to the reference data; a trained statistical shape model then automatically locates and segments the specified structure. Results: Experiments show the algorithm achieves good segmentation results. Conclusion: The proposed algorithm, combining mutual-information-based registration with a statistical shape model, can automatically locate the image position of an anatomical structure within a volume and segment the target structure.
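The mutual-information similarity driving the registration step can be sketched as follows: build a joint grey-level histogram of the two images and compute MI = Σ p(i,j) log(p(i,j) / (p(i) p(j))). The bin count and intensity range here are illustrative choices.

```python
import math

def mutual_information(a, b, bins=8, lo=0, hi=256):
    """Mutual information between two equal-length intensity lists via
    a joint grey-level histogram; higher MI means better alignment."""
    step = (hi - lo) / bins
    joint = [[0] * bins for _ in range(bins)]
    n = len(a)
    for va, vb in zip(a, b):
        i = min(int((va - lo) / step), bins - 1)
        j = min(int((vb - lo) / step), bins - 1)
        joint[i][j] += 1
    pa = [sum(row) / n for row in joint]                      # marginal of a
    pb = [sum(joint[i][j] for i in range(bins)) / n           # marginal of b
          for j in range(bins)]
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            p = joint[i][j] / n
            if p > 0:
                mi += p * math.log(p / (pa[i] * pb[j]))
    return mi
```

A registration loop would transform one volume, recompute MI against the reference, and keep the transform that maximizes it.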

8.
Background: Markov-random-field-based algorithms have become an important approach to medical image segmentation, and the value of the Gibbs prior parameter strongly affects segmentation accuracy. Objective: To estimate the Gibbs prior parameter from the imaging characteristics of brain MR images and thereby improve segmentation accuracy. Methods: Statistical analysis of brain MR images yields the relationship between the variance of the Gaussian image noise and the Gibbs prior parameter. During the iterations of the MRF segmentation algorithm, the Gibbs prior parameter is then estimated by interpolation from the estimated Gaussian variance. Results and conclusion: Segmentation experiments on simulated and clinical brain MR images show that the method segments more accurately than the traditional practice of fixing the Gibbs prior parameter at a constant, achieves adaptive segmentation, and is simple, fast, and robust.
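One sweep of an MRF segmentation with a Gaussian likelihood and a Potts-style Gibbs prior can be sketched as follows; here `beta` plays the role of the Gibbs prior parameter whose estimation the abstract addresses, and the class means, `sigma`, and `beta` are illustrative inputs rather than the paper's estimated values.

```python
def icm_step(labels, image, means, sigma, beta):
    """One Iterated Conditional Modes sweep: each pixel takes the label
    minimising the Gaussian data term plus beta times the number of
    disagreeing 4-neighbours (Potts clique energy)."""
    h, w = len(image), len(image[0])
    new = [row[:] for row in labels]  # synchronous update
    for y in range(h):
        for x in range(w):
            best_l, best_e = labels[y][x], float("inf")
            for l, mu in enumerate(means):
                data = (image[y][x] - mu) ** 2 / (2 * sigma ** 2)
                clique = sum(
                    1
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < h and 0 <= x + dx < w
                    and labels[y + dy][x + dx] != l
                )
                e = data + beta * clique
                if e < best_e:
                    best_e, best_l = e, l
            new[y][x] = best_l
    return new
```

In the adaptive scheme described above, `beta` would be re-estimated each iteration from the current noise-variance estimate instead of being held constant.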

9.
Medical images, as an essential reference for clinical examination and radiotherapy guidance, currently play a key role in the development of medicine. They mainly include computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound (US). Ultrasound is cheaper than the other three, images soft tissue well, and is essentially harmless to the human body, so it is now used increasingly widely. Ultrasound image segmentation greatly aids subsequent image analysis and can inform clinical diagnosis and radiotherapy positioning. This paper reviews traditional methods for ultrasound image segmentation, segmentation methods based on deformable models, and research incorporating deep learning.

10.
Medical images mainly include CT, MRI, X-ray, and ultrasound. Ultrasound examination is inexpensive, images soft tissue well, and is essentially harmless to the human body, and it is now in wide clinical use. Ultrasound image segmentation greatly aids subsequent image analysis and can inform clinical diagnosis and radiotherapy positioning. This paper reviews traditional methods for ultrasound image segmentation, deformable-model-based methods, and research progress incorporating deep learning.

11.
Supervised deep learning needs a large amount of labeled data to achieve high performance. However, in medical imaging analysis, each site may have only a limited amount of data and labels, which makes learning ineffective. Federated learning (FL) can learn a shared model from decentralized data, but traditional FL requires fully labeled data for training, which is very expensive to obtain. Self-supervised contrastive learning (CL) can learn from unlabeled data for pre-training, followed by fine-tuning with limited annotations. However, when adopting CL in FL, the limited data diversity at each site makes federated contrastive learning (FCL) ineffective. In this work, we propose two federated self-supervised learning frameworks for volumetric medical image segmentation with limited annotations. The first features high accuracy and fits high-performance servers with high-speed connections; the second features lower communication costs, suitable for mobile devices. In the first framework, features are exchanged during FCL to provide diverse contrastive data to each site for effective local CL while keeping raw data private. Global structural matching aligns local and remote features for a unified feature space across sites. In the second framework, to reduce the communication cost of feature exchange, we propose an optimized method, FCLOpt, that does not rely on negative samples. To reduce the communication of model downloads, we propose the predictive target network update (PTNU), which predicts the parameters of the target network. Based on PTNU, we propose distance prediction (DP) to remove most of the uploads of the target network. Experiments on a cardiac MRI dataset show that the two proposed frameworks substantially improve segmentation and generalization performance compared with state-of-the-art techniques.

12.
Semi-supervised learning has greatly advanced medical image segmentation since it effectively alleviates the need to acquire abundant annotations from experts; within it, the mean-teacher model, known as a milestone of perturbed consistency learning, commonly serves as a standard and simple baseline. Inherently, learning from consistency can be regarded as learning from stability under perturbations. Recent improvements lean toward more complex consistency learning frameworks, yet little attention is paid to the selection of consistency targets. Considering that ambiguous regions in unlabeled data contain more informative complementary clues, in this paper we improve the mean-teacher model into a novel ambiguity-consensus mean-teacher (AC-MT) model. In particular, we comprehensively introduce and benchmark a family of plug-and-play strategies for ambiguous target selection from the perspectives of entropy, model uncertainty, and label-noise self-identification, respectively. The estimated ambiguity map is then incorporated into the consistency loss to encourage consensus between the two models' predictions in these informative regions. In essence, our AC-MT aims to find the most worthwhile voxel-wise targets in the unlabeled data, and the model especially learns from the perturbed stability of these informative regions. The proposed methods are extensively evaluated on left atrium segmentation and brain tumor segmentation. Encouragingly, our strategies bring substantial improvement over recent state-of-the-art methods. The ablation study further supports our hypothesis and shows impressive results under various extreme annotation conditions.
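An ambiguity-weighted consistency term in the spirit of AC-MT can be sketched as follows; this is a minimal sketch of the entropy-based variant, and the threshold `tau` and the exact weighting scheme are illustrative assumptions, not values from the paper.

```python
import math

def ambiguity_masked_consistency(student_probs, teacher_probs, tau=0.6):
    """Voxels whose teacher prediction has high normalised entropy
    (ambiguous regions) get full weight in the squared-error
    consistency term; confident voxels are down-weighted."""
    loss, n = 0.0, len(student_probs)
    max_h = math.log(len(student_probs[0]))  # entropy of the uniform dist.
    for s, t in zip(student_probs, teacher_probs):
        h = -sum(p * math.log(p) for p in t if p > 0) / max_h  # in [0, 1]
        w = 1.0 if h >= tau else h  # emphasise ambiguous voxels
        loss += w * sum((a - b) ** 2 for a, b in zip(s, t)) / len(s)
    return loss / n
```

An ambiguous voxel (teacher near 50/50) thus contributes fully to the consistency loss, while a confident voxel contributes little, steering learning toward the informative regions the abstract describes.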

13.
Deep convolutional neural networks (CNNs) have been widely used for medical image segmentation. In most studies, only the output layer is exploited to compute the final segmentation results, and the hidden representations of the deeply learned features have not been well understood. In this paper, we propose a prototype segmentation (ProtoSeg) method to compute a binary segmentation map from deep features. We measure the segmentation ability of the features by computing the Dice between the feature segmentation map and the ground truth, termed the segmentation ability (SA) score. The SA score can quantify the segmentation ability of deep features in different layers and units, helping us understand deep neural networks for segmentation. In addition, our method provides a mean SA score that estimates output performance on test images without ground truth. Finally, we use the proposed ProtoSeg method to compute the segmentation map directly from input images to further understand the segmentation ability of each input image. Results are presented on segmenting tumors in brain MRI, lesions in skin images, COVID-related abnormalities in CT images, the prostate in abdominal MRI, and pancreatic masses in CT images. Our method can provide new insights for interpretable and explainable AI systems for medical image segmentation. Our code is available at: https://github.com/shengfly/ProtoSeg.
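The SA score described above reduces to a Dice overlap between a binarised feature map and the ground-truth mask; a minimal sketch on flattened arrays (the 0.5 threshold is an illustrative choice):

```python
def sa_score(feature_map, ground_truth, threshold=0.5):
    """Segmentation-ability score: Dice between the binarised feature
    map and the ground-truth mask (both flattened to 1-D lists)."""
    pred = [1 if v >= threshold else 0 for v in feature_map]
    inter = sum(1 for p, g in zip(pred, ground_truth) if p and g)
    denom = sum(pred) + sum(ground_truth)
    return 2.0 * inter / denom if denom else 1.0  # empty-vs-empty -> perfect
```

Averaging this score over a layer's feature channels gives the per-layer ability estimate the abstract refers to; the mean SA score over test images then serves as a ground-truth-free performance proxy.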

14.
Despite recent progress in automatic medical image segmentation techniques, fully automatic results usually fail to meet clinically acceptable accuracy and thus typically require further refinement. To this end, we propose a novel Volumetric Memory Network, dubbed VMN, to enable segmentation of 3D medical images in an interactive manner. Given user hints on an arbitrary slice, a 2D interaction network first produces an initial 2D segmentation for the chosen slice. The VMN then propagates the initial segmentation mask bidirectionally to all slices of the entire volume. Subsequent refinement based on additional user guidance on other slices can be incorporated in the same manner. To facilitate smooth human-in-the-loop segmentation, a quality assessment module is introduced to suggest the next slice for interaction based on the segmentation quality of each slice produced in the previous round. Our VMN demonstrates two distinctive features: first, the memory-augmented network design offers our model the ability to quickly encode past segmentation information, which is retrieved later for the segmentation of other slices; second, the quality assessment module enables the model to directly estimate the quality of each segmentation prediction, which allows for an active learning paradigm where users preferentially label the lowest-quality slice for multi-round refinement. The proposed network yields a robust interactive segmentation engine that generalizes well to various types of user annotations (e.g., scribble, bounding box, extreme clicking). Extensive experiments have been conducted on three public medical image segmentation datasets (MSD, KiTS19, and CVC-ClinicDB), and the results clearly confirm the superiority of our approach over state-of-the-art segmentation models. The code is publicly available at https://github.com/0liliulei/Mem3D.

15.
Over the last decade, convolutional neural networks have emerged and advanced the state of the art in various image analysis and computer vision applications. The performance of 2D image classification networks is constantly improving as they are trained on databases of millions of natural images. In medical image analysis, progress is also remarkable but has been slowed by the relative lack of annotated data and by the inherent constraints of the acquisition process; these limitations are even more pronounced given the volumetric nature of medical imaging data. In this paper, we introduce an efficient way to transfer the capability of a 2D classification network trained on natural images to 2D and 3D uni- and multi-modal medical image segmentation applications. To this end, we designed novel architectures based on two key principles: weight transfer, by embedding a 2D pre-trained encoder into a higher-dimensional U-Net, and dimensional transfer, by expanding a 2D segmentation network into a higher-dimensional one. The proposed networks were tested on benchmarks comprising different modalities: MR, CT, and ultrasound images. Our 2D network ranked first on the CAMUS challenge dedicated to echocardiographic data segmentation and surpassed the state of the art. On the 2D/3D MR and CT abdominal images of the CHAOS challenge, our approach largely outperformed the other 2D-based methods described in the challenge paper on the Dice, RAVD, ASSD, and MSSD scores and ranked third on the online evaluation platform. Our 3D network, applied to the BraTS 2022 competition, also achieved promising results, reaching an average Dice score of 91.69% (91.22%) for the whole tumor, 83.23% (84.77%) for the tumor core, and 81.75% (83.88%) for the enhancing tumor with the weight-transfer (dimensional-transfer) approach. Experimental and qualitative results illustrate the effectiveness of our methods for multi-dimensional medical image segmentation.

16.
Traditional deep-learning-based medical image segmentation methods require experts to provide extensive manual delineations for model training. Few-shot learning aims to reduce the dependence on the scale of training data but usually shows poor generalizability to new targets: the trained model tends to favor the training classes rather than being truly class-agnostic. In this work, we propose a novel two-branch segmentation network based on medical prior knowledge to alleviate this problem. Specifically, we explicitly introduce a spatial branch to provide the spatial information of the target. In addition, we build a segmentation branch based on the classical encoder–decoder structure of supervised learning and integrate prototype similarity and spatial information as prior knowledge. To achieve effective information integration, we propose an attention-based fusion module (AF) that enables content interaction between decoder features and prior knowledge. Experiments on an echocardiography dataset and an abdominal MRI dataset show that the proposed model achieves substantial improvements over state-of-the-art methods, and some results are comparable to those of the fully supervised model. The source code is available at github.com/warmestwind/RAPNet.

17.
We present a novel deep multi-task learning method for medical image segmentation. Existing multi-task methods demand ground-truth annotations for both the primary and auxiliary tasks. In contrast, we propose to generate the pseudo-labels of an auxiliary task in an unsupervised manner. To generate the pseudo-labels, we leverage Histograms of Oriented Gradients (HOGs), one of the most widely used and powerful hand-crafted features for detection. Together with the ground-truth semantic segmentation masks for the primary task and pseudo-labels for the auxiliary task, we learn the parameters of the deep network to minimize the losses of the primary and auxiliary tasks jointly. We applied our method to two widely used semantic segmentation networks, UNet and U2Net, trained in a multi-task setup. To validate our hypothesis, we performed experiments on two different medical image segmentation datasets. From extensive quantitative and qualitative results, we observe that our method consistently improves performance compared to the counterpart method. Moreover, our method is the winner of the FetReg Endovis Sub-challenge on Semantic Segmentation organised in conjunction with MICCAI 2021. Code and implementation details are available at: https://github.com/thetna/medical_image_segmentation.
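The HOG pseudo-label idea above can be sketched as follows: an unsigned gradient-orientation histogram for one cell, computed with central differences. This is a minimal sketch (a real HOG descriptor adds block normalization and orientation interpolation); a map of such cell histograms over the image would serve as the unsupervised auxiliary regression target.

```python
import math

def hog_cell_histogram(patch, bins=9):
    """Unsigned (0-180 degree) HOG for a single cell: accumulate each
    interior pixel's gradient magnitude into its orientation bin."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]  # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            hist[int(ang // (180.0 / bins)) % bins] += mag
    return hist
```

For a purely vertical edge, all gradient energy lands in the horizontal-orientation bin, which is what makes the histogram a useful dense structural target.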

18.
The high performance of deep learning models on medical image segmentation relies heavily on large amounts of pixel-wise annotated data, yet annotations are costly to collect. How to obtain high-accuracy segmentation labels for medical images at limited cost (e.g., time) is therefore an urgent problem. Active learning can reduce the annotation cost of image segmentation, but it faces three challenges: the cold-start problem, the need for an effective sample-selection strategy for the segmentation task, and the burden of manual annotation. In this work, we propose a Hybrid Active Learning framework using Interactive Annotation (HAL-IA) for medical image segmentation, which reduces annotation cost both by decreasing the number of annotated images and by simplifying the annotation process. Specifically, we propose a novel hybrid sample-selection strategy that selects the most valuable samples for improving segmentation performance, combining pixel entropy, regional consistency, and image diversity to ensure that the selected samples have high uncertainty and diversity. In addition, we propose a warm-start initialization strategy to build the initial annotated dataset and avoid the cold-start problem. To simplify manual annotation, we propose an interactive annotation module with suggested superpixels that obtains pixel-wise labels with a few clicks. We validate the proposed framework with extensive segmentation experiments on four medical image datasets. Experimental results show that the framework achieves high-accuracy pixel-wise annotations and strong models with less labeled data and fewer interactions, outperforming other state-of-the-art methods. Our method can help physicians efficiently obtain accurate medical image segmentation results for clinical analysis and diagnosis.

19.
Unsupervised domain adaptation (UDA) has been a vital protocol for migrating information learned from a labeled source domain to facilitate deployment in an unlabeled, heterogeneous target domain. Although UDA is typically trained jointly on data from both domains, access to the labeled source-domain data is often restricted over concerns about patient data privacy or intellectual property. To sidestep this, we propose "off-the-shelf (OS)" UDA (OSUDA), aimed at image segmentation, which adapts an OS segmentor trained in a source domain to a target domain without any source-domain data during adaptation. Toward this goal, we develop a novel batch-wise normalization (BN) statistics adaptation framework. In particular, we gradually adapt the domain-specific low-order BN statistics, e.g., mean and variance, through an exponential momentum decay strategy, while explicitly enforcing consistency of the domain-shareable high-order BN statistics, e.g., the scaling and shifting factors, via our optimization objective. We also adaptively quantify channel-wise transferability to gauge the importance of each channel, via both the low-order-statistics divergence and a scaling factor. Furthermore, we incorporate unsupervised self-entropy minimization into our framework to boost performance, alongside a novel queued, memory-consistent self-training strategy that exploits reliable pseudo-labels for stable and efficient unsupervised adaptation. We evaluated our OSUDA-based framework on cross-modality and cross-subtype brain tumor segmentation and on cardiac MR-to-CT segmentation tasks. Our experimental results show that our memory-consistent OSUDA performs better than existing source-relaxed UDA methods and yields performance similar to UDA methods that use source data.
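A minimal sketch of the exponential-momentum-decay idea for the low-order BN statistics: the momentum assigned to each new target-domain batch shrinks over steps, so the running mean/variance drift gradually from the source values toward the target domain and then stabilize. The update form and the values of `m0` and `decay` are illustrative assumptions, not the paper's.

```python
def adapt_bn_stats(src_mean, src_var, batch_stats, m0=0.1, decay=0.9):
    """Adapt a BN layer's running mean/variance from source values
    toward target-domain batch statistics with exponentially decaying
    momentum: early target batches move the statistics most."""
    mean, var = src_mean, src_var
    for step, (bm, bv) in enumerate(batch_stats):
        m = m0 * decay ** step          # exponentially decaying momentum
        mean = (1 - m) * mean + m * bm  # low-order statistic: mean
        var = (1 - m) * var + m * bv    # low-order statistic: variance
    return mean, var
```

The high-order BN parameters (scale and shift) would, by contrast, be kept consistent with the source model through the optimization objective rather than updated this way.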

