Similar documents: 20 results.
1.
In this paper, we introduce a method to automatically produce plausible image segmentation samples from a single expert segmentation. A probability distribution of segmentation boundaries is defined as a Gaussian process, which yields segmentations that are spatially coherent and consistent with the presence of salient borders in the image. The proposed approach is computationally efficient and generates visually plausible samples. The variability between samples is mainly governed by a single parameter, which may be correlated with a simple Dice coefficient or easily set by the user from the definition of probable regions of interest. The method is extended to handle several neighboring structures, as well as under- and over-segmentation and the presence of excluded regions. We also detail a method, based on supervoxels, for sampling segmentations with more general non-stationary covariance functions. Furthermore, we compare the generated segmentation samples with several manual clinical segmentations of a brain tumor. Finally, we show how this approach can be useful for uncertainty quantification, with an illustration in radiotherapy planning, where segmentation sampling is applied to both the clinical target volume and the organs at risk.
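A minimal 2D sketch of the core idea (not the authors' implementation): perturb the signed distance map of the expert mask with a smooth random field and re-threshold. Smoothing white noise with a Gaussian filter is a cheap way to draw a stationary Gaussian-process sample; the length-scale `ell` and amplitude `sigma` are illustrative stand-ins for the paper's variability parameter.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def gp_segmentation_samples(mask, n_samples=5, ell=8.0, sigma=1.5, seed=0):
    """Draw plausible segmentation samples from one expert boolean mask by
    perturbing its signed distance map with a smooth random field."""
    rng = np.random.default_rng(seed)
    # Signed distance map: positive inside the structure, negative outside.
    sdm = distance_transform_edt(mask) - distance_transform_edt(~mask)
    samples = []
    for _ in range(n_samples):
        # Gaussian-smoothed white noise is a stationary GP sample.
        field = gaussian_filter(rng.standard_normal(mask.shape), ell)
        field *= sigma / field.std()          # rescale to target amplitude
        samples.append(sdm + field > 0)       # re-threshold the perturbed map
    return samples
```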

2.
We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject. It serves as a wrapper method around a given host segmentation method: the wrapper attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available, and then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that differ from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively.
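As a sketch of the wrapper idea (with a random forest standing in for whatever learner the released tool actually uses), one can train a voxel-wise classifier to predict where a binary host segmentation disagrees with the manual reference, then flip the predicted error voxels on new images. All names below are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def voxel_features(image, host_seg):
    """Per-voxel intensity, spatial and host-label context features (3-D)."""
    zz, yy, xx = np.indices(image.shape)
    return np.column_stack([image.ravel(), zz.ravel(), yy.ravel(),
                            xx.ravel(), host_seg.ravel()])

def train_error_corrector(images, host_segs, manual_segs):
    """Learn where the host method systematically disagrees with experts."""
    X = np.vstack([voxel_features(im, hs) for im, hs in zip(images, host_segs)])
    y = np.concatenate([(hs != ms).ravel()          # 1 = systematic error
                        for hs, ms in zip(host_segs, manual_segs)])
    return RandomForestClassifier(n_estimators=100).fit(X, y)

def correct(image, host_seg, clf):
    """Flip binary (0/1) host labels wherever an error is predicted."""
    err = clf.predict(voxel_features(image, host_seg)).reshape(host_seg.shape)
    return np.where(err.astype(bool), 1 - host_seg, host_seg)
```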

3.
Quantitative neuroimaging analyses often rely on accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and scale to large databases. Among existing approaches, multi-atlas segmentation has recently been shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists of propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion rely on local patch similarity, probabilistic statistical frameworks, or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum-likelihood atlas confidences are estimated using a supervised approach that explicitly models the relationship between local image appearance and the segmentation errors produced by each atlas. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance feature, based on atlas labelmaps, that is used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and of the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches on the majority of the evaluated brain structures and shows more robustness to registration errors.
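The fusion step itself reduces to confidence-weighted voting once per-atlas, per-voxel confidences are available. A hedged sketch (the paper estimates these confidences by supervised maximum likelihood; here they are simply taken as given):

```python
import numpy as np

def confidence_weighted_fusion(warped_labels, confidences):
    """Fuse warped atlas labelmaps with per-atlas, per-voxel confidences.

    warped_labels : (A, ...) int array of A atlas labelmaps in target space
    confidences   : (A, ...) float array of estimated atlas confidences
    """
    labels = np.unique(warped_labels)
    # Accumulate confidence mass per candidate label at every voxel.
    votes = np.stack([(confidences * (warped_labels == l)).sum(axis=0)
                      for l in labels])
    return labels[np.argmax(votes, axis=0)]
```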

4.
Statistical shape models learned from a population of previously observed training shapes are nowadays widely used in medical image analysis to aid segmentation or classification. However, providing an appropriate and representative training population of (preferably manual) segmentations is typically very labor-intensive, if not impossible. In practice, statistical shape models therefore frequently suffer from the high-dimension-low-sample-size (HDLSS) problem, resulting in models with insufficient expressiveness.

In this paper, a novel approach is presented for learning representative multi-resolution, multi-object statistical shape models from a small number of training samples that adequately model the variability of each individual object as well as their interrelations. The method is based on the assumption of locality: local shape variations have limited effects in distant areas and can therefore be modeled independently. This locality assumption is integrated into the standard statistical shape modeling framework by manipulating the sample covariance matrix (non-zero covariances between distant landmarks are set to zero). To allow for multi-object modeling, a method for computing distances between points located on different object shapes is proposed. Furthermore, different levels of locality are introduced through a multi-resolution scheme, equipped with a method to combine variability information modeled at different levels into a single shape model. This combined representation of global and local variability in a single shape model allows the classical active shape model strategy to be used for model-based image segmentation.

An extensive evaluation based on a public database of 247 chest radiographs demonstrates the modeling and segmentation capabilities of the proposed approach in single- and multi-object HDLSS scenarios. The new approach is compared not only to the classical shape modeling method but also to three state-of-the-art shape modeling approaches specifically designed to cope with the HDLSS problem. The results show that the new approach significantly outperforms all other approaches in terms of generalization ability and model-based segmentation accuracy.
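The covariance manipulation at the heart of the locality assumption is easy to sketch: zero all covariances between landmarks farther apart than a chosen radius, then eigendecompose as usual. A minimal single-level version (the masked matrix may lose positive semi-definiteness, and the paper's multi-resolution combination is omitted):

```python
import numpy as np

def local_shape_model(shapes, landmark_dists, radius):
    """Build a locality-constrained shape model from few samples.

    shapes         : (n_samples, n_landmarks, dim) training shapes
    landmark_dists : (n_landmarks, n_landmarks) inter-landmark distances
                     (the paper defines such distances across object
                     surfaces for multi-object models; any matrix works here)
    radius         : locality level; covariances beyond it are zeroed
    """
    n, m, d = shapes.shape
    X = shapes.reshape(n, m * d)            # landmark-major flattening
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    # Locality assumption: distant landmarks are modeled independently.
    mask = landmark_dists <= radius
    cov *= np.kron(mask, np.ones((d, d)))   # expand mask to coordinate blocks
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    return mean, evals[order], evecs[:, order]
```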

5.
An important challenge and limiting factor in deep learning methods for medical image segmentation is the lack of annotated data available to properly train models. For the specific task of tumor segmentation, the annotation process requires clinicians to label every slice of volumetric scans for every patient, which becomes prohibitive at the scale of the datasets required to train neural networks to optimal performance. To address this, we propose a novel semi-supervised framework that allows training any segmentation (encoder–decoder) model using only information readily available in radiological data, namely the presence of a tumor in the image, in addition to a few annotated images. Specifically, we conjecture that a generative model performing domain translation on this weak label (healthy vs. diseased scans) helps achieve tumor segmentation. The proposed GenSeg method first disentangles tumoral tissue from healthy “background” tissue. The latent representation is separated into (1) the common background information across both domains, and (2) the unique tumoral information. GenSeg then achieves diseased-to-healthy image translation by decoding a healthy version of the image from just the common representation, together with a residual image that allows adding the tumors back. The same decoder that produces this residual tumor image also outputs a tumor segmentation. Implicit data augmentation is achieved by re-using the same framework for healthy-to-diseased image translation, where a residual tumor image is produced from a prior distribution. By performing image translation and segmentation simultaneously, GenSeg allows training on only partially annotated datasets. To test the framework, we trained U-Net-like architectures using GenSeg and evaluated their performance on three variants of a synthetic task, as well as on two benchmark datasets: brain tumor segmentation in MRI (derived from BraTS) and liver metastasis segmentation in CT (derived from LiTS). Our method outperforms the baseline semi-supervised (autoencoder and mean teacher) and supervised segmentation methods, with improvements of 8–14% Dice score on the brain task and 5–8% on the liver task when only 1% of the training images were annotated. These results show that the proposed framework is well suited to training deep segmentation models when a large portion of the available data is unlabeled and unpaired, a common situation in tumor segmentation.
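A toy sketch of the residual decomposition, not the authors' architecture: the latent code is split into common and unique parts, a healthy image is decoded from the common part alone, and the branch that produces the tumor residual also feeds the segmentation head. Layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class GenSegSketch(nn.Module):
    """Toy residual-decomposition sketch: diseased = healthy + residual,
    with the residual branch doubling as the segmentation head."""

    def __init__(self, ch=16):
        super().__init__()
        self.enc_common = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.enc_unique = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.dec_healthy = nn.Conv2d(ch, 1, 3, padding=1)       # background only
        self.dec_residual = nn.Conv2d(2 * ch, 1, 3, padding=1)  # tumor residual
        self.seg_head = nn.Conv2d(2 * ch, 1, 3, padding=1)      # shares features

    def forward(self, x):
        zc, zu = self.enc_common(x), self.enc_unique(x)
        healthy = self.dec_healthy(zc)          # diseased-to-healthy translation
        z = torch.cat([zc, zu], dim=1)
        residual = self.dec_residual(z)
        recon = healthy + residual              # add the tumor back
        seg = torch.sigmoid(self.seg_head(z))   # tumor segmentation
        return healthy, recon, seg
```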

6.
7.
In this paper, we propose and validate a deep learning framework that incorporates both multi-atlas registration and level sets for segmenting the pancreas from CT volume images. The proposed segmentation pipeline consists of three stages: coarse, fine, and refine. First, a coarse segmentation is obtained through multi-atlas-based 3D diffeomorphic registration and fusion. Then, to learn complementary features, a 3D patch-based convolutional neural network (CNN) and three 2D slice-based CNNs are jointly used to predict a fine segmentation within a bounding box determined from the coarse segmentation. Finally, a 3D level-set method, with the fine segmentation as one of its constraints, integrates information from the original image and the CNN-derived probability map to achieve a refined segmentation. In other words, the framework jointly exploits global 3D location information (registration), contextual information (patch-based 3D CNN), shape information (slice-based 2.5D CNN), and edge information (3D level set). These components form our cascaded coarse-fine-refine segmentation framework. We test the proposed framework on three datasets with varying intensity ranges obtained from different sources, containing 36, 82, and 281 CT volume images, respectively. On each dataset we achieve an average Dice score above 82%, superior or comparable to other existing state-of-the-art pancreas segmentation algorithms.
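The hand-off from the coarse to the fine stage can be sketched as a simple bounding-box crop around the coarse multi-atlas result; the margin value is illustrative.

```python
import numpy as np

def bbox_from_coarse(coarse_mask, margin=8):
    """Bounding box (with margin) around a non-empty coarse mask, used to
    crop the region passed to the fine-stage CNNs."""
    idx = np.argwhere(coarse_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, coarse_mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

# crop = ct_volume[bbox_from_coarse(coarse_mask)]   # fed to the fine stage
```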

8.
Medical Image Analysis, 2015, 20(1): 98–109
Multi-atlas segmentation infers the target image segmentation by combining prior anatomical knowledge encoded in multiple atlases. It has been applied quite successfully to medical image segmentation in recent years, producing highly accurate and robust segmentations for many anatomical structures. However, most existing multi-atlas segmentation methods utilise only the intensity information within a small patch to guide label fusion, and may neglect other useful information such as gradient and contextual information (the appearance of surrounding regions). This paper proposes to combine intensity, gradient, and contextual information into an augmented feature vector and incorporate it into multi-atlas segmentation. It also explores an alternative to the K-nearest-neighbour (KNN) classifier for multi-atlas label fusion, using a support vector machine (SVM) instead. Experimental results on a short-axis cardiac MR data set of 83 subjects demonstrate that the accuracy of multi-atlas segmentation can be significantly improved by using the augmented feature vector. The mean Dice metric of the proposed segmentation framework is 0.81 for the left ventricular myocardium on this data set, compared to 0.79 for conventional multi-atlas patch-based segmentation (Coupé et al., 2011; Rousseau et al., 2011). A major contribution of this paper is the demonstration that the performance of non-local patch-based segmentation can be improved by using augmented features.
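A condensed sketch of the augmented-feature idea, with the SVM fit globally rather than patch-wise (the paper operates on local patches; the 2D feature choices below are simplified stand-ins):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel
from sklearn.svm import SVC

def augmented_features(image):
    """Intensity + gradient + contextual (surrounding appearance) features,
    one row per pixel of a 2D image."""
    grad = np.hypot(sobel(image, axis=0), sobel(image, axis=1))
    context = gaussian_filter(image, sigma=5)   # coarse surrounding appearance
    return np.column_stack([image.ravel(), grad.ravel(), context.ravel()])

def svm_label_fusion(target_image, atlas_images, atlas_labels):
    """Train an SVM on warped-atlas pixels, predict the target labelmap."""
    X = np.vstack([augmented_features(im) for im in atlas_images])
    y = np.concatenate([lab.ravel() for lab in atlas_labels])
    clf = SVC(kernel='rbf').fit(X, y)
    pred = clf.predict(augmented_features(target_image))
    return pred.reshape(target_image.shape)
```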

9.
In this paper, we present a new deep convolutional neural network (CNN) dedicated to fully automatic segmentation of high- and low-grade glioblastoma brain tumors. The proposed CNN model is inspired by the occipito-temporal pathway, whose selective-attention function uses different receptive field sizes in successive layers to pick out the crucial objects in a scene. Building the CNN around this selective-attention mechanism helps maximize the extraction of relevant features from MRI images. We also address two further issues: class imbalance, and the spatial relationship among image patches. For the first, we propose two steps: equal sampling of image patches, and an experimental analysis of the effect of a weighted cross-entropy loss function on the segmentation results. For the second, we study the effect of overlapping versus adjacent patches; overlapping patches yield better segmentation results because they introduce global context in addition to the local features of each patch. Our experimental results are reported on the BRATS-2018 dataset, where our end-to-end deep learning model achieved state-of-the-art performance. The median Dice scores of our fully automatic segmentation model are 0.90, 0.83 and 0.83 for the whole tumor, tumor core, and enhancing tumor respectively, compared with radiologists' Dice scores in the range 0.74–0.85. Moreover, the proposed CNN model is computationally efficient at inference time, segmenting a whole brain in about 12 seconds on average. Overall, the proposed deep learning model provides accurate and reliable segmentation results, making it suitable for research use and for integration into different clinical settings.
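The weighted cross-entropy used against class imbalance is standard; a hedged sketch with illustrative class frequencies for four BraTS-style labels:

```python
import torch
import torch.nn as nn

# Class weights inversely proportional to class frequency counter the
# tumor/background imbalance (the frequencies below are illustrative).
freq = torch.tensor([0.98, 0.01, 0.005, 0.005])  # background, edema,
weights = 1.0 / freq                             # necrotic core, enhancing
weights /= weights.sum()

loss_fn = nn.CrossEntropyLoss(weight=weights)
# logits: (N, 4, H, W) network output; target: (N, H, W) integer labels
# loss = loss_fn(logits, target)
```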

10.
A Bayesian model of shape and appearance for subcortical brain segmentation
Automatic segmentation of subcortical structures in human brain MR images is an important but difficult task due to poor and variable intensity contrast. Clear, well-defined intensity features are absent in many places along typical structure boundaries, so extra information is required to achieve successful segmentation. A method is proposed here that uses manually labelled image data to provide anatomical training information. It utilises the principles of Active Shape and Appearance Models but places them within a Bayesian framework, allowing probabilistic relationships between shape and intensity to be fully exploited. The model is trained for 15 different subcortical structures using 336 manually labelled T1-weighted MR images. Using the Bayesian approach, conditional probabilities can be calculated easily and efficiently, avoiding the technical problems of ill-conditioned covariance matrices even with weak priors, and eliminating the need to fit extra empirical scaling parameters, as is required in standard Active Appearance Models. Furthermore, differences in boundary vertex locations provide a direct, purely local measure of geometric change in structure between groups that, unlike voxel-based morphometry, does not depend on tissue classification methods or arbitrary smoothing. In this paper the fully automated segmentation method is presented and assessed both quantitatively, using leave-one-out testing on the 336 training images, and qualitatively, using an independent clinical dataset involving Alzheimer's disease. Median Dice overlaps between 0.7 and 0.9 are obtained with this method, comparable to or better than other automated methods. An implementation of this method, called FIRST, is currently distributed with the freely available FSL package.

11.
Deep convolutional neural networks (CNNs) have been widely used for medical image segmentation. In most studies, only the output layer is used to compute the final segmentation results, and the hidden representations of the learned deep features are not well understood. In this paper, we propose a prototype segmentation (ProtoSeg) method to compute a binary segmentation map from deep features. We measure the segmentation ability of the features by computing the Dice coefficient between the feature segmentation map and the ground truth, termed the segmentation ability (SA) score. The SA score quantifies the segmentation ability of deep features in different layers and units, helping us understand deep neural networks for segmentation. In addition, our method provides a mean SA score that estimates output performance on test images without ground truth. Finally, we use the proposed ProtoSeg method to compute the segmentation map directly on input images to further understand the segmentation ability of each input image. Results are presented on segmenting tumors in brain MRI, lesions in skin images, COVID-related abnormality in CT images, the prostate in abdominal MRI, and pancreatic masses in CT images. Our method can provide new insights for interpretable and explainable AI systems for medical image segmentation. Our code is available at: https://github.com/shengfly/ProtoSeg.
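The SA score is straightforward to sketch: binarize each feature map and compute its Dice against the ground truth. The mean-value threshold below is a simple stand-in for ProtoSeg's prototype-based rule:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary maps."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

def sa_scores(feature_maps, ground_truth):
    """One segmentation-ability score per feature map."""
    return [dice(fm > fm.mean(), ground_truth) for fm in feature_maps]

# mean_sa = np.mean(sa_scores(layer_features, gt))   # per-layer summary
```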

12.
We describe an automated 3-D segmentation system for in vivo brain magnetic resonance images (MRI). Our segmentation method combines a variety of filtering, segmentation, and registration techniques and makes maximum use of the available a priori biomedical expertise, in both implicit and explicit form. We approach boundary finding as a process of fitting a group of deformable templates (simplex mesh surfaces) to the contours of the target structures. These templates evolve in parallel, supervised by a series of rules derived from analyzing the templates' dynamics and from medical experience. The templates are also constrained by knowledge of the expected textural and shape properties of the target structures. We apply our system to segment four brain structures (corpus callosum, ventricles, hippocampus, and caudate nuclei) and discuss its robustness to imaging characteristics and acquisition noise.

13.
Atlas-based segmentation is a powerful generic technique for automatic delineation of structures in volumetric images. Several studies have shown that multi-atlas segmentation methods outperform schemes that use only a single atlas, but running multiple registrations on volumetric data is time-consuming. Moreover, for many scans or regions within scans, a large number of atlases may not be required to achieve good segmentation performance and may even deteriorate the results. It would therefore be worthwhile to include the decision of which and how many atlases to use for a particular target scan in the segmentation process. To this end, we propose two generally applicable multi-atlas segmentation methods: adaptive multi-atlas segmentation (AMAS) and adaptive local multi-atlas segmentation (ALMAS). AMAS automatically selects the most appropriate atlases for a target image and automatically stops registering atlases when no further improvement is expected. ALMAS takes this concept one step further by locally deciding how many and which atlases are needed to segment a target image. The methods employ a computationally cheap atlas selection strategy, an automatic stopping criterion, and a technique to locally inspect registration results and determine how much improvement can be expected from further registrations.

AMAS and ALMAS were applied to segmentation of the heart in computed tomography scans of the chest and compared to a conventional multi-atlas method (MAS). The results show that ALMAS achieves the same performance as MAS at a much lower computational cost. When the available segmentation time is fixed, both AMAS and ALMAS perform significantly better than MAS. In addition, AMAS was applied to an online segmentation challenge for delineation of the caudate nucleus in brain MRI scans, where it achieved the best score of all results submitted to date.
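The AMAS loop can be sketched as ranked, incremental fusion with an early stop. The `register` and `similarity` callables, the atlas dictionaries, and the change-based stopping rule below are stand-ins for the paper's registration pipeline and expected-improvement criterion:

```python
import numpy as np

def adaptive_multi_atlas(target, atlases, register, similarity, tol=1e-3):
    """AMAS-style loop (sketch): register atlases in order of a cheap
    similarity to the target, fuse incrementally by majority vote, and
    stop once the fused segmentation no longer changes appreciably.

    atlases    : list of dicts with an 'image' entry (plus labels)
    register   : register(atlas, target) -> warped binary labelmap
    similarity : similarity(image_a, image_b) -> float
    """
    order = sorted(atlases, key=lambda a: -similarity(a['image'], target))
    votes, fused = None, None
    for k, atlas in enumerate(order, start=1):
        warped = register(atlas, target)
        votes = warped.astype(float) if votes is None else votes + warped
        new_fused = votes / k > 0.5
        if fused is not None and np.mean(new_fused != fused) < tol:
            break                    # no further improvement expected
        fused = new_fused
    return fused
```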

14.
To correctly diagnose lung cancer metastasis, this paper applies deep learning to segment lesion regions in ultrasound images of cervical lymph nodes from lung cancer patients. A cascaded attention UNet for ultrasound image segmentation is proposed: a two-stage network combining attention UNet with EfficientNet, in which the first stage performs coarse segmentation and the second stage performs fine segmentation. The encoder uses EfficientNet-B5 as its backbone and takes multi-scale image inputs. A new loss function suited to small-target, small-sample scenarios is also proposed. Experimental results show that the proposed cascaded network performs well on cervical lymph node ultrasound image segmentation, reaching a Dice coefficient of 0.95 and outperforming other UNet-based methods.

15.
Background: Fully automatic medical image segmentation has been a long-standing pursuit in radiotherapy (RT). Recent deep learning developments show promising results, yielding consistent and time-efficient contours. To train and validate these systems, geometric metrics such as the Dice Similarity Coefficient (DSC), Hausdorff distance, and related measures are currently the standard in automated medical image segmentation challenges. However, the relevance of these metrics in RT is questionable: the quality of automated segmentation results needs to reflect clinically relevant treatment outcomes, such as dosimetry and the related tumor control and toxicity. In this study, we present results investigating the correlation between popular geometric segmentation metrics and dose parameters for organs-at-risk (OARs) in brain tumor patients, and we investigate properties that might be predictive of dose changes in brain radiotherapy.

Methods: A retrospective database of glioblastoma multiforme patients was stratified by planning difficulty, from which 12 cases were selected, and reference sets of OARs and radiation targets were defined. To assess the relation between segmentation quality (as measured by standard segmentation assessment metrics) and the quality of RT plans, clinically realistic yet alternative contours for each OAR of the selected cases were obtained through three methods: (i) manual contours by two additional human raters; (ii) realistic manual manipulations of the reference contours; and (iii) deep-learning-based segmentation results. On the reference structure set, a reference plan was generated and re-optimized for each corresponding alternative contour set. The correlation between segmentation metrics and dosimetric changes was obtained and analyzed for each OAR, by means of the mean dose and the maximum dose to 1% of the volume (Dmax 1%). Furthermore, we conducted specific experiments to investigate the dosimetric effect of alternative OAR contours with respect to proximity to the target, size, particular shape, and relative location to the target.

Results: We found a low correlation between the DSC of the alternative OAR contours and dosimetric changes. The Pearson correlation coefficient between the mean OAR dose effect and the Dice was -0.11; for Dmax 1%, it was -0.13. Similarly low correlations were found for 22 other segmentation metrics. The organ-based analysis showed a better correlation for the larger OARs (brainstem and eyes) than for the smaller ones (optic nerves and chiasm). Furthermore, we found that proximity to the target does not make contour variations more susceptible to the dose effect; however, the direction of the contour variation with respect to the relative location of the target appears strongly correlated with the dose effect.

Conclusions: This study shows a low correlation between segmentation metrics and dosimetric changes for OARs in brain tumor patients. The results suggest that the current metrics for image segmentation in RT, as well as deep learning systems employing such metrics, need to be revisited in favor of clinically oriented metrics that better reflect how segmentation quality affects dose distribution and the related tumor control and toxicity.
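The central analysis is a plain correlation between per-contour Dice values and per-contour dose changes; a sketch with placeholder measurements (the study's actual data are not reproduced here):

```python
import numpy as np
from scipy.stats import pearsonr

# dsc[i]   : Dice of alternative contour i against the reference OAR contour
# d_mean[i]: change in mean OAR dose after re-optimizing on contour i
# (both arrays below are placeholders, not the study's measurements)
dsc = np.array([0.95, 0.91, 0.88, 0.93, 0.85])
d_mean = np.array([0.2, -0.4, 1.1, 0.1, 0.9])

r, p = pearsonr(dsc, d_mean)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")  # the study reports r of -0.11
```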

16.
Khan AR, Wang L, Beg MF. NeuroImage, 2008, 41(3): 735–746
Fully automated brain segmentation methods have not been widely adopted for clinical use because of issues related to reliability, accuracy, and limitations of the delineation protocol. By combining the probabilistic FreeSurfer (FS) method with the Large Deformation Diffeomorphic Metric Mapping (LDDMM)-based label-propagation method, we are able to increase reliability and accuracy while allowing flexibility in template choice. Our method uses the automated FreeSurfer subcortical labeling to provide a coarse-to-fine introduction of information into the LDDMM template-based segmentation, resulting in a fully automated subcortical brain segmentation method (FS+LDDMM). One major advantage of the FS+LDDMM approach is that the automatically generated segmentations are inherently smooth, so subsequent shape analysis can proceed directly without manual post-processing or loss of detail. We evaluated the new FS+LDDMM method on several databases containing a total of 50 subjects with different pathologies, scan sequences, and manual delineation protocols for labeling the basal ganglia, thalamus, and hippocampus. In healthy controls we report Dice overlap measures of 0.81, 0.83, 0.74, 0.86 and 0.75 for the right caudate nucleus, putamen, pallidum, thalamus, and hippocampus respectively. We also find statistically significant improvements in accuracy of FS+LDDMM over FreeSurfer for the caudate nucleus and putamen of Huntington's disease and Tourette's syndrome subjects, and for the right hippocampus of schizophrenia subjects.

17.
Brain extraction is an important step in the analysis of brain images. The variability in brain morphology and the differences in intensity characteristics across imaging sequences make the development of a general-purpose brain extraction algorithm challenging. To address this issue, we propose a new robust method (BEaST) designed to produce consistent and accurate brain extraction. The method is based on nonlocal segmentation embedded in a multi-resolution framework. A library of 80 priors is semi-automatically constructed from the NIH-sponsored MRI study of normal brain development, the International Consortium for Brain Mapping, and the Alzheimer's Disease Neuroimaging Initiative databases. In testing, a mean Dice similarity coefficient of 0.9834±0.0053 was obtained in leave-one-out cross-validation with only 20 priors selected from the library. Validation using the online Segmentation Validation Engine resulted in a top-ranking position with a mean Dice coefficient of 0.9781±0.0047. The robustness of BEaST is demonstrated on all baseline ADNI data, with a very low failure rate. The segmentation accuracy of the method is better than that of two widely used publicly available methods and of recent state-of-the-art hybrid approaches. BEaST provides results comparable to a recent label fusion approach while being 40 times faster and requiring a much smaller library of priors.
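The nonlocal patch-based core can be sketched for a single voxel: weight every candidate patch from the prior library by its intensity similarity and average the centre labels. The bandwidth `h` and the interface are illustrative, not BEaST's actual implementation:

```python
import numpy as np

def nonlocal_patch_label(target_patch, library_patches, library_labels, h=0.1):
    """Nonlocal label fusion for one voxel.

    target_patch    : (P,) flattened intensity patch around the voxel
    library_patches : (K, P) candidate patches from the prior library
    library_labels  : (K,) centre-voxel labels (1 = brain, 0 = background)
    h               : bandwidth controlling how selective the weights are
    """
    ssd = ((library_patches - target_patch) ** 2).mean(axis=1)
    w = np.exp(-ssd / h**2)                     # similarity weights
    score = (w * library_labels).sum() / (w.sum() + 1e-12)
    return score > 0.5
```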

18.
Automated segmentation of pancreatic cancer is vital for clinical diagnosis and treatment. However, the small size and inconspicuous boundaries of pancreatic tumors limit segmentation performance, a problem exacerbated for deep learning techniques by the scarcity of training samples, itself a consequence of the high cost of image acquisition and annotation. To alleviate the limitations of this small-scale data regime, we collect idle multi-parametric MRIs of pancreatic cancer from different studies to construct a relatively large dataset for enhancing CT pancreatic cancer segmentation. We propose a deep learning segmentation model with a dual meta-learning framework for pancreatic cancer. It integrates common tumor knowledge obtained from the idle MRIs with salient knowledge from the CT images, making high-level features more discriminative. Specifically, random intermediate modalities between MRI and CT are first generated to smoothly bridge the gap in visual appearance and to provide rich intermediate representations for the ensuing meta-learning scheme. Subsequently, we employ intermediate-modality-based model-agnostic meta-learning to capture and transfer commonalities. Finally, a meta-optimizer is used to adaptively learn the salient features within the CT data, alleviating interference due to internal differences. Comprehensive experimental results demonstrate that our method achieves promising segmentation performance, with a best Dice score of 64.94% on our private dataset, and outperforms state-of-the-art methods on a public pancreatic cancer CT dataset. The proposed method is an effective pancreatic cancer segmentation framework that can easily be integrated into other segmentation networks, and thus promises to be a paradigm for alleviating data scarcity using idle data.
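The paper's dual meta-learning pipeline is elaborate; as a generic stand-in, a first-order (Reptile-style) meta-update over tasks such as the intermediate MRI-to-CT modalities looks like this (not the authors' implementation):

```python
import copy
import torch

def first_order_meta_step(model, task_batches, loss_fn,
                          inner_lr=1e-3, meta_lr=0.1):
    """Generic first-order meta-update: adapt a clone of the model on one
    task (e.g., one intermediate modality), then move the meta-weights
    toward the adapted weights."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for x, y in task_batches:            # inner loop on one task
        opt.zero_grad()
        loss_fn(adapted(x), y).backward()
        opt.step()
    with torch.no_grad():                # outer (meta) update
        for p, q in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (q - p))
```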

19.
Over the last decade, convolutional neural networks have emerged and advanced the state of the art in various image analysis and computer vision applications. The performance of 2D image classification networks, trained on databases of millions of natural images, is constantly improving. In medical image analysis, progress is also remarkable but is held back by the relative lack of annotated data and by the inherent constraints of the acquisition process. These limitations are even more pronounced given the volumetric nature of medical imaging data. In this paper, we introduce an efficient way to transfer the strength of a 2D classification network trained on natural images to 2D and 3D uni- and multi-modal medical image segmentation applications. To this end, we designed novel architectures based on two key principles: weight transfer, which embeds a 2D pre-trained encoder into a higher-dimensional U-Net; and dimensional transfer, which expands a 2D segmentation network into a higher-dimensional one. The proposed networks were tested on benchmarks comprising different modalities: MR, CT, and ultrasound images. Our 2D network ranked first on the CAMUS challenge dedicated to echocardiographic data segmentation, surpassing the state of the art. On 2D/3D MR and CT abdominal images from the CHAOS challenge, our approach largely outperformed the other 2D-based methods described in the challenge paper on the Dice, RAVD, ASSD, and MSSD scores and ranked third on the online evaluation platform. Our 3D network applied to the BraTS 2022 competition also achieved promising results, reaching an average Dice score of 91.69% (91.22%) for the whole tumor, 83.23% (84.77%) for the tumor core, and 81.75% (83.88%) for the enhancing tumor using the approach based on weight (dimensional) transfer. Experimental and qualitative results illustrate the effectiveness of our methods for multi-dimensional medical image segmentation.
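One common realisation of weight transfer is I3D-style kernel inflation: repeat a pretrained 2D kernel along the new depth axis and rescale, so a 3D pass over a stack of identical slices reproduces the 2D response. This is a plausible sketch, not necessarily the authors' exact scheme (integer paddings assumed):

```python
import torch
import torch.nn as nn

def inflate_conv2d(conv2d, depth=3):
    """Embed a pretrained 2D convolution into a 3D one by repeating its
    kernel `depth` times along the new axis and dividing by `depth`."""
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(depth, *conv2d.kernel_size),
                       stride=(1, *conv2d.stride),
                       padding=(depth // 2, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        conv3d.weight.copy_(
            conv2d.weight.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d
```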

20.
Conditional Random Fields (CRFs) are often used to improve the output of an initial segmentation model, such as a convolutional neural network (CNN). Conventional CRF approaches in medical imaging use manually defined features, such as intensity to enforce appearance similarity or location to enforce spatial coherence. These features work well for some tasks but can fail for others; for example, in medical image segmentation applications where different anatomical structures have similar intensity values, an intensity-based CRF may produce incorrect results. As an alternative, we propose Posterior-CRF, an end-to-end segmentation method that uses CNN-learned features in a CRF and optimizes the CRF and CNN parameters concurrently. We validate our method on three medical image segmentation tasks: aorta and pulmonary artery segmentation in non-contrast CT, white matter hyperintensities segmentation in multi-modal MRI, and ischemic stroke lesion segmentation in multi-modal MRI, comparing against state-of-the-art CNN-CRF methods. In all applications, our proposed method outperforms the existing methods in terms of Dice coefficient, average volume difference, and lesion-wise F1 score.
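The core Posterior-CRF idea, pairwise potentials computed on CNN features rather than raw intensity, can be sketched as dense mean-field inference for a small image (the actual method trains the CRF and CNN jointly; this sketch covers inference only, with illustrative parameters):

```python
import torch
import torch.nn.functional as F

def meanfield_on_features(unary, feats, iters=5, theta=1.0, compat=1.0):
    """Dense mean-field CRF pass whose pairwise affinity is computed on
    learned per-pixel feature vectors instead of intensity.

    unary : (N, C) per-pixel class logits from the CNN
    feats : (N, D) learned per-pixel feature vectors
    """
    # Gaussian affinity in feature space, with zero self-connections.
    d2 = torch.cdist(feats, feats).pow(2)
    W = torch.exp(-d2 / (2 * theta**2))
    W.fill_diagonal_(0)
    q = F.softmax(unary, dim=1)
    for _ in range(iters):
        msg = W @ q                                # message passing
        q = F.softmax(unary - compat * msg, dim=1)
    return q
```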
