Similar Articles
20 similar articles found (search time: 46 ms)
1.
Annotating multiple organs in medical images is both costly and time-consuming; as a result, existing labeled multi-organ datasets are often small in sample size and mostly partially labeled, that is, a dataset has a few organs labeled but not all. In this paper, we investigate how to learn a single multi-organ segmentation network from a union of such datasets. To this end, we propose two types of novel loss function, particularly designed for this scenario: (i) marginal loss and (ii) exclusion loss. Because the background label for a partially labeled image is, in fact, a ‘merged’ label of all unlabeled organs and ‘true’ background (in the sense of full labels), the probability of this ‘merged’ background label is a marginal probability, summing the relevant probabilities before merging. This marginal probability can be plugged into any existing loss function (such as cross-entropy loss, Dice loss, etc.) to form a marginal loss. Leveraging the fact that the organs are non-overlapping, we propose the exclusion loss to gauge the dissimilarity between labeled organs and the estimated segmentation of unlabeled organs. Experiments on a union of five benchmark datasets for multi-organ segmentation of the liver, spleen, left and right kidneys, and pancreas demonstrate that our newly proposed loss functions bring a conspicuous performance improvement to state-of-the-art methods without introducing any extra computation.
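A minimal sketch of the marginal-loss idea described above: the merged background probability is the sum of the relevant class probabilities before merging, plugged into a standard cross entropy. All names, shapes, and the class-indexing convention are illustrative, not from the paper.

```python
import numpy as np

def marginal_cross_entropy(probs, target, labeled_classes, eps=1e-8):
    """Marginal cross-entropy for partially labeled pixels (illustrative).

    probs: (N, C) softmax outputs over all C classes; class 0 is the true background.
    target: (N,) labels under the partial-label scheme, where label 0 is the
            'merged' background covering class 0 plus every organ NOT in
            labeled_classes.
    labeled_classes: set of organ indices annotated in this dataset, e.g. {1, 3}.
    """
    C = probs.shape[1]
    unlabeled = [c for c in range(1, C) if c not in labeled_classes]
    # Marginal probability of the merged background: sum the relevant
    # class probabilities BEFORE merging, as the abstract describes.
    merged_bg = probs[:, 0] + probs[:, unlabeled].sum(axis=1)
    picked = probs[np.arange(len(target)), target]
    losses = np.where(target == 0, -np.log(merged_bg + eps), -np.log(picked + eps))
    return losses.mean()
```

The same marginal probability could be substituted into a Dice loss instead of cross entropy, per the abstract.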

2.
Supervised deep learning-based methods yield accurate results for medical image segmentation, but they require large labeled datasets, and obtaining them is a laborious task that requires clinical expertise. Semi-/self-supervised learning-based approaches address this limitation by exploiting unlabeled data along with limited annotated data. Recent self-supervised learning methods use contrastive loss to learn good global-level representations from unlabeled images and achieve high performance in classification tasks on popular natural image datasets like ImageNet. In pixel-level prediction tasks such as segmentation, it is crucial to also learn good local-level representations along with global representations to achieve better accuracy. However, the impact of existing local contrastive loss-based methods remains limited for learning good local representations, because similar and dissimilar local regions are defined based on random augmentations and spatial proximity, not on the semantic labels of local regions, owing to the lack of large-scale expert annotations in the semi-/self-supervised setting. In this paper, we propose a local contrastive loss to learn good pixel-level features useful for segmentation by exploiting semantic label information obtained from pseudo-labels of unlabeled images alongside limited annotated images with ground-truth (GT) labels. In particular, we define the proposed contrastive loss to encourage similar representations for pixels that have the same pseudo-label/GT label while being dissimilar to the representations of pixels with a different pseudo-label/GT label. We perform pseudo-label-based self-training and train the network by jointly optimizing the proposed contrastive loss on both the labeled and unlabeled sets and the segmentation loss on only the limited labeled set.
We evaluated the proposed approach on three public medical datasets of cardiac and prostate anatomies and obtained high segmentation performance with a limited labeled set of one or two 3D volumes. Extensive comparisons with state-of-the-art semi-supervised and data-augmentation methods and concurrent contrastive learning methods demonstrate the substantial improvement achieved by the proposed method. The code is publicly available at https://github.com/krishnabits001/pseudo_label_contrastive_training.
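The label-driven pixel contrastive idea above can be sketched as a supervised contrastive loss over sampled pixel embeddings, where pseudo-labels (or GT labels) define positives and negatives. This is a generic sketch, not the paper's exact formulation; the temperature value and sampling are assumptions.

```python
import numpy as np

def pixel_contrastive_loss(feats, labels, temperature=0.1):
    """Supervised pixel-level contrastive loss (sketch).

    feats: (N, D) L2-normalized pixel embeddings sampled from feature maps.
    labels: (N,) pseudo-labels (for unlabeled images) or GT labels.
    Pixels sharing a label are positives; all other pixels are negatives.
    """
    sim = feats @ feats.T / temperature              # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                   # exclude self-pairs
    row_max = sim.max(axis=1, keepdims=True)
    log_prob = sim - row_max - np.log(np.exp(sim - row_max).sum(axis=1, keepdims=True))
    pos_mask = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                           # anchors with >= 1 positive
    # Average negative log-probability over each anchor's positive pairs.
    loss = -np.where(pos_mask, log_prob, 0.0).sum(axis=1)[valid] / pos_counts[valid]
    return loss.mean()
```

In training this would be combined with a segmentation loss on the labeled set, as the abstract describes.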

3.
Training deep learning models that segment an image in one step typically requires a large collection of manually annotated images that captures the anatomical variability in a cohort. This poses challenges when anatomical variability is extreme but training data is limited, as when segmenting cardiac structures in patients with congenital heart disease (CHD). In this paper, we propose an iterative segmentation model and show that it can be accurately learned from a small dataset. Implemented as a recurrent neural network, the model evolves a segmentation over multiple steps, from a single user click until reaching an automatically determined stopping point. We develop a novel loss function that evaluates the entire sequence of output segmentations, and use it to learn model parameters. Segmentations evolve predictably according to growth dynamics encapsulated by training data, which consists of images, partially completed segmentations, and the recommended next step. The user can easily refine the final segmentation by examining those that are earlier or later in the output sequence. Using a dataset of 3D cardiac MR scans from patients with a wide range of CHD types, we show that our iterative model offers better generalization to patients with the most severe heart malformations.

4.
5.
Sixty sets of real data for 15 different pesticides, from both sexes of Balb/C mice in two different experimental designs, were generated at NCTR. The quantal responses for the dose groups in these data ranged from 1% to 90%. It was shown that the data could be represented equally well by a probit or logit transformation. It was further shown that an investment of seven times as many animals would greatly increase the confidence in estimating the parameters of the model and in predicting the dose at the low end of the dose response. Most important, it was shown that the estimation of a safe dose for a specified risk was greatly influenced by the choice of experimental design and method of extrapolation. Investing in better experimental design might be worthwhile to both the consumer and the chemical industry if higher safe doses could thereby be established, allowing a chemical to better accomplish its purpose while improving the assurance of consumer safety.
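As a small worked illustration of the logit transformation and low-dose extrapolation mentioned above: under a logit model the response probability is a logistic function of log-dose, which can be inverted to extrapolate the dose corresponding to a specified risk. The parameter names and the use of log10 dose are assumptions for illustration.

```python
import numpy as np

def logit_response(dose, a, b):
    """Quantal response under a logit model: P = 1 / (1 + exp(-(a + b*log10(dose))))."""
    return 1.0 / (1.0 + np.exp(-(a + b * np.log10(dose))))

def dose_for_risk(risk, a, b):
    """Invert the logit model to extrapolate the dose giving a specified risk."""
    return 10 ** ((np.log(risk / (1.0 - risk)) - a) / b)
```

The probit version would replace the logistic function with the normal CDF; as the abstract notes, both transformations fit such data comparably well, while extrapolated safe doses remain sensitive to the chosen model.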

6.
Our work expands the use of capsule networks to the task of object segmentation for the first time in the literature. This is made possible via the introduction of locally-constrained routing and transformation matrix sharing, which reduces the parameter/memory burden and allows for the segmentation of objects at large resolutions. To compensate for the loss of global information in constraining the routing, we propose the concept of “deconvolutional” capsules to create a deep encoder-decoder style network, called SegCaps. We extend the masked reconstruction regularization to the task of segmentation and perform thorough ablation experiments on each component of our method. The proposed convolutional-deconvolutional capsule network, SegCaps, shows state-of-the-art results while using a fraction of the parameters of popular segmentation networks. To validate our proposed method, we perform experiments segmenting pathological lungs from clinical and pre-clinical thoracic computed tomography (CT) scans and segmenting muscle and adipose (fat) tissue from magnetic resonance imaging (MRI) scans of human subjects’ thighs. Notably, our experiments in lung segmentation represent the largest-scale study in pathological lung segmentation in the literature, covering five extremely challenging datasets, containing both clinical and pre-clinical subjects, and nearly 2000 CT scans. Our newly developed segmentation platform outperforms other methods across all datasets while utilizing less than 5% of the parameters of the popular U-Net for biomedical image segmentation. Further, we demonstrate capsules’ ability to generalize to unseen rotations/reflections on natural images.

7.
In this paper we report and characterize a semi-automatic prostate segmentation method for prostate brachytherapy. Based on anatomical evidence and requirements of the treatment procedure, a warped and tapered ellipsoid was found suitable as the a priori 3D shape of the prostate. By transforming the acquired endorectal transverse images of the prostate into ellipses, the shape-fitting problem was cast as a convex problem that can be solved efficiently. The average whole-gland error between non-overlapping volumes created from manual and semi-automatic contours from 21 patients was 6.63 ± 0.9%. For use in brachytherapy treatment planning, the resulting contours were modified, if deemed necessary, by radiation oncologists prior to treatment. The average whole-gland volume error between the volumes computed from semi-automatic contours and those computed from modified contours, from 40 patients, was 5.82 ± 4.15%. The amount of bias in the physicians' delineations when given an initial semi-automatic contour was measured by comparing the volume error between 10 prostate volumes computed from manual contours and those of modified contours. This error was found to be 7.25 ± 0.39% for the whole gland. Automatic contouring reduced subjectivity, as evidenced by a decrease in segmentation inter- and intra-observer variability from 4.65% and 5.95% for manual segmentation to 3.04% and 3.48% for semi-automatic segmentation, respectively. We characterized the performance of the method relative to the reference obtained from manual segmentation using a novel approach that divides the prostate region into nine sectors. We analyzed each sector independently, as the requirements for segmentation accuracy depend on which region of the prostate is considered. The measured segmentation time is 14 ± 1 s, with an additional 32 ± 14 s for initialization.
Assuming 1-3 min for modification of the contours, if necessary, a total segmentation time of less than 4 min is required, with no additional time needed prior to treatment planning. This compares favorably to the 5-15 min manual segmentation time required for experienced individuals. The method is currently used at the British Columbia Cancer Agency (BCCA) Vancouver Cancer Centre as part of the standard treatment routine in low-dose-rate prostate brachytherapy and has been found to be a fast, consistent and accurate tool for the delineation of the prostate gland in ultrasound images.

8.
9.
Segmentation of lung pathology in computed tomography (CT) images is of great importance for lung disease screening. However, the presence of different types of lung pathologies with a wide range of heterogeneity in size, shape, location, and texture on the one hand, and their visual similarity to surrounding tissues on the other, makes it challenging to perform reliable automatic lesion segmentation. To improve segmentation performance, we propose a deep learning framework comprising a Normal Appearance Autoencoder (NAA) model that learns the distribution of healthy lung regions and reconstructs pathology-free images from the corresponding pathological inputs by replacing the pathological regions with the characteristics of healthy tissues. Detected regions, which represent prior information regarding the shape and location of pathologies, are then integrated into a segmentation network to guide the attention of the model toward more meaningful delineations. The proposed pipeline was tested on three types of lung pathologies, namely pulmonary nodules, Non-Small Cell Lung Cancer (NSCLC), and COVID-19 lesions, across five comprehensive datasets. The results show the superiority of the proposed prior model, which outperformed the baseline segmentation models in all cases with significant margins. On average, adding the prior model improved the Dice coefficient for the segmentation of lung nodules by 0.038, NSCLC by 0.101, and COVID-19 lesions by 0.041. We conclude that the proposed NAA model produces reliable prior knowledge regarding lung pathologies, and that integrating such knowledge into a segmentation network leads to more accurate delineations.
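The prior-extraction step above can be sketched in one line: once a normal-appearance model can reconstruct a pathology-free version of the input, large residuals flag candidate lesion regions. The `reconstruct` callable and the threshold are purely illustrative stand-ins for the trained NAA; the paper's actual detection step is not specified here.

```python
import numpy as np

def pathology_prior(image, reconstruct, thresh=0.5):
    """Candidate-lesion prior from a normal-appearance model (sketch).

    `reconstruct` stands in for the trained NAA: it maps a pathological
    image to its pathology-free reconstruction; large residuals mark
    regions that deviate from healthy appearance.
    """
    residual = np.abs(image - reconstruct(image))
    return residual > thresh
```

The resulting binary map would then be fed to the segmentation network as a shape/location prior.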

10.
Defining myocardial contours is often the most time-consuming portion of dynamic cardiac MRI image analysis. Displacement encoding with stimulated echoes (DENSE) is a quantitative MRI technique that encodes tissue displacement into the phase of the complex MRI images. Cine DENSE provides a time series of these images, thus facilitating the non-invasive study of myocardial kinematics. Epicardial and endocardial contours need to be defined at each frame on cine DENSE images for the quantification of regional displacement and strain as a function of time. This work presents a reliable and effective two-dimensional semi-automated segmentation technique that uses the encoded motion to project a manually-defined region of interest through time. Contours can then easily be extracted for each cardiac phase. This method boasts several advantages, including: (1) parameters are based on practical physiological limits, (2) contours are calculated for the first few cardiac phases, where it is difficult to visually distinguish blood from myocardium, and (3) the method is independent of the shape of the tissue delineated and can be applied to short- or long-axis views and to arbitrary regions of interest. Motion-guided contours were compared to manual contours for six conventional and six slice-followed mid-ventricular short-axis cine DENSE datasets. Using an area measure of segmentation error, the accuracy of the segmentation algorithm was shown to be similar to inter-observer variability. In addition, a radial segmentation error metric was introduced for short-axis data. The average radial epicardial segmentation error was 0.36 ± 0.08 and 0.40 ± 0.10 pixels for slice-followed and conventional cine DENSE, respectively, and the average radial endocardial segmentation error was 0.46 ± 0.12 and 0.46 ± 0.16 pixels, respectively.
Motion-guided segmentation employs the displacement-encoded phase shifts intrinsic to DENSE MRI to accurately propagate a single set of pre-defined contours throughout the remaining cardiac phases.
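The propagation step described above reduces to moving each contour point by the displacement the DENSE phase encodes at its location. A minimal sketch, assuming the displacement field has already been decoded from phase and using nearest-pixel sampling (a real implementation would interpolate):

```python
import numpy as np

def propagate_contour(points, disp):
    """Project contour points through time using a displacement field.

    points: (P, 2) contour coordinates (row, col) at the current frame.
    disp:   (H, W, 2) per-pixel displacement, in pixels, decoded from the
            phase of the complex DENSE images.
    """
    idx = np.round(points).astype(int)        # nearest-pixel sampling
    return points + disp[idx[:, 0], idx[:, 1]]
```

Applying this frame by frame carries a single manually-defined contour through the cardiac cycle.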

11.
Optimum template selection for atlas-based segmentation (cited 1 time: 0 self-citations, 1 other)
Atlas-based segmentation of MR brain images typically uses a single atlas (e.g., MNI Colin27) for region identification. Normal individual variation in human brain structures presents a significant challenge for atlas selection. Previous research mainly focused on how to create a specific template for different requirements (e.g., for a certain population). We address atlas selection with a different approach: instead of choosing a fixed brain atlas, we use a family of brain templates for atlas-based segmentation. For each subject and each region, the template selection method automatically chooses the 'best' template, the one with the highest local registration accuracy, based on normalized mutual information. The region classification performance of the template selection method and the single-template method was quantified by the overlap ratios (ORs) and intraclass correlation coefficients (ICCs) between the manual tracings and the respective automatically labeled results. Two groups of brain images and multiple regions of interest (ROIs), including the right anterior cingulate cortex (ACC) and several subcortical structures, were tested with both methods. We found that the template selection method produced significantly higher ORs than the single-template method across all 13 analyzed ROIs (two-tailed paired t-test; right ACC, t(8)=4.353, p=0.0024; right amygdala, matched paired t-test, t(8)>3.175, p<0.013; remaining ROIs, t(8)=4.36, p<0.002). The template selection method also provided more reliable volume estimates than the single-template method, with increased ICCs. Moreover, the improved accuracy of atlas-based segmentation using optimum templates approaches the accuracy of manual tracing, and thus is valid for automated brain imaging analyses.
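The per-region selection criterion above can be sketched directly: compute normalized mutual information between the subject's region and each registered template, and keep the highest-scoring template. Function names and the NMI normalization (H(A)+H(B))/H(A,B) are illustrative; registration itself is assumed done.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI between two registered image regions: (H(A) + H(B)) / H(A, B)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    return (entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))) / entropy(pxy.ravel())

def select_template(subject_roi, template_rois):
    """Pick, per region, the template with the highest local NMI after registration."""
    scores = [normalized_mutual_information(subject_roi, t) for t in template_rois]
    return int(np.argmax(scores))
```

Running this independently for each ROI yields the per-subject, per-region 'best' template the abstract describes.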

12.
13.

Purpose

In this paper, we investigate a framework for interactive brain tumor segmentation which, at its core, treats the problem as a machine learning one.

Methods

This method has an advantage over typical machine learning methods for this task, which generalize across brains: such methods must deal with intensity bias correction and other MRI-specific noise. In this paper, we avoid these issues by approaching the problem as one of within-brain generalization. Specifically, we propose a semi-automatic method that segments a brain tumor by training and generalizing within that brain only, based on minimal user interaction.

Results

We investigate how adding spatial coordinate features (i.e., i, j, k) to the intensity features can significantly improve the performance of different classification methods such as SVM, kNN, and random forests; this is only possible within an interactive framework. We also investigate the use of a more appropriate kernel and the adaptation of hyper-parameters specifically for each brain.

Conclusion

As a result of these experiments, we obtain an interactive method whose results on the MICCAI-BRATS 2013 dataset are the second most accurate among published methods, while using significantly less memory and processing power than most state-of-the-art methods.
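The feature construction at the heart of the within-brain approach, intensity plus (i, j, k) coordinates, can be sketched as follows. The coordinate weighting and the tiny 1-NN classifier standing in for the kNN/SVM/random-forest options are illustrative assumptions.

```python
import numpy as np

def voxel_features(volume, coord_weight=1.0):
    """Stack each voxel's intensity with its (i, j, k) spatial coordinates.

    Within-brain generalization: a classifier trained on user-labeled voxels
    of the SAME volume can then exploit spatial position directly.
    """
    coords = np.indices(volume.shape).reshape(3, -1).T.astype(float)  # (N, 3)
    return np.column_stack([volume.ravel(), coord_weight * coords])   # (N, 4)

def one_nn_predict(train_X, train_y, test_X):
    """Minimal 1-nearest-neighbour stand-in for the kNN classifier."""
    d = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(axis=-1)
    return train_y[d.argmin(axis=1)]
```

With uniform intensities, only the coordinate features can separate two regions, which is why this feature set is useful only inside an interactive, per-brain framework.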

14.
Domain adaptation (DA) has drawn high interest for its capacity to adapt a model trained on labeled source data to perform well on unlabeled or weakly labeled target data from a different domain. Most common DA techniques require concurrent access to the input images of both the source and target domains. However, in practice, privacy concerns often impede the availability of source images in the adaptation phase. This is a very frequent DA scenario in medical imaging, where, for instance, the source and target images could come from different clinical sites. We introduce a source-free domain adaptation method for image segmentation. Our formulation is based on minimizing a label-free entropy loss defined over target-domain data, which we further guide with weak labels of the target samples and a domain-invariant prior on the segmentation regions. Many such priors can be derived from anatomical information. Here, a class-ratio prior is estimated from anatomical knowledge and integrated in the form of a Kullback–Leibler (KL) divergence in our overall loss function. Furthermore, we motivate our overall loss with a link to maximizing the mutual information between the target images and their label predictions. We show the effectiveness of our prior-aware entropy minimization in a variety of domain-adaptation scenarios, with different modalities and applications, including spine, prostate, and cardiac segmentation. Our method yields comparable results to several state-of-the-art adaptation techniques, despite having access to much less information, as the source images are entirely absent in our adaptation phase. Our straightforward adaptation strategy uses only one network, contrary to popular adversarial techniques, which are not applicable to a source-free DA setting. Our framework can readily be used in a breadth of segmentation problems, and our code is publicly available: https://github.com/mathilde-b/SFDA.
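A minimal sketch of the prior-aware entropy objective described above: a pixel-wise entropy term on target predictions plus a KL divergence between an anatomically derived class-ratio prior and the predicted class proportions. Names, the weighting `lam`, and the exact KL direction are illustrative assumptions.

```python
import numpy as np

def sfda_loss(probs, prior_ratio, lam=1.0, eps=1e-8):
    """Source-free adaptation loss: entropy + KL(prior || predicted class ratio).

    probs: (N, C) softmax predictions over target-domain pixels.
    prior_ratio: (C,) anatomically derived class proportions.
    lam: weight on the class-ratio prior term (hypothetical knob).
    """
    entropy = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    pred_ratio = probs.mean(axis=0)  # network's predicted class proportions
    kl = (prior_ratio * np.log((prior_ratio + eps) / (pred_ratio + eps))).sum()
    return entropy + lam * kl
```

Minimizing this drives confident per-pixel predictions whose overall class proportions match the anatomical prior, with no source images involved.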

15.
During the last few decades, the treatment of HIV-infected patients by highly active antiretroviral therapy, including protease inhibitors (PIs), has become standard. Here, we present results of analysis of a patient-derived, multiresistant HIV-1 CRF02_AG recombinant strain with a highly mutated protease (PR) coding sequence, where up to 19 coding mutations have accumulated in the PR. The results of biochemical analysis in vitro showed that the patient-derived PR is highly resistant to most of the currently used PIs and that it also exhibits very poor catalytic activity. Determination of the crystal structure revealed prominent changes in the flap elbow region and S1/S1' active site subsites. While viral loads in the patient were found to be high, the insertion of the patient-derived PR into a HIV-1 subtype B backbone resulted in reduction of infectivity by 3 orders of magnitude. Fitness compensation was not achieved by elevated polymerase (Pol) expression, but the introduction of patient-derived gag and pol sequences in a CRF02_AG backbone rescued viral infectivity to near wild-type (wt) levels. The mutations that accumulated in the vicinity of the processing sites spanning the p2/NC, NC/p1, and p6pol/PR proteins lead to much more efficient hydrolysis of corresponding peptides by patient-derived PR in comparison to the wt enzyme. This indicates a very efficient coevolution of enzyme and substrate maintaining high viral loads in vivo under constant drug pressure.

16.
The incorporation of intensity, spatial, and topological information into large-scale multi-region segmentation has been a topic of ongoing research in medical image analysis. Multi-region segmentation problems, such as segmentation of brain structures, pose unique challenges in image segmentation, in which regions may not have a defined intensity, spatial, or topological distinction but rely on a combination of the three. We propose a novel framework within the Advanced Segmentation Tools (ASETS), which combines large-scale Gaussian mixture models trained via Kohonen self-organizing maps with deformable registration and a convex max-flow optimization algorithm incorporating region topology as a hierarchy or tree. Our framework is validated on two publicly available neuroimaging datasets, the OASIS and MRBrainS13 databases, against the more conventional Potts model, achieving more accurate segmentations. Each component is accelerated using general-purpose programming on graphics processing units (GPUs) to ensure computational feasibility.

17.
18.
Automatic medical image segmentation plays a crucial role in many medical image analysis applications, such as disease diagnosis and prognosis. Despite the extensive progress of existing deep learning-based models for medical image segmentation, they focus on extracting accurate features by designing novel network structures and rely solely on a fully connected (FC) layer for pixel-level classification. Considering the insufficient capability of the FC layer to encode the extracted diverse feature representations, we propose a Hierarchical Segmentation (HieraSeg) Network for medical image segmentation and devise a Hierarchical Fully Connected (HFC) layer. Specifically, it consists of three classifiers and decouples each category into several subcategories by introducing multiple weight vectors to denote the diverse characteristics within each category. Subcategory-level and category-level learning schemes are then designed to explicitly enforce the discrepant subcategories and automatically capture the most representative characteristics. Hence, the HFC layer can fit the variant characteristics so as to derive an accurate decision boundary. To enhance the robustness of the HieraSeg Network to the variability of lesions, we further propose a Dynamic-Weighting HieraSeg (DW-HieraSeg) Network, which introduces an Image-level Weight Net (IWN) and a Pixel-level Weight Net (PWN) to learn a data-driven curriculum. By progressively incorporating informative images and pixels in an easy-to-hard manner, the DW-HieraSeg Network is able to escape local optima and accelerate the training process. Additionally, a class-balanced loss is proposed to constrain the PWN and prevent overfitting in minority categories.
Comprehensive experiments on three benchmark datasets, EndoScene, ISIC, and Decathlon, show that our newly proposed HieraSeg and DW-HieraSeg Networks achieve state-of-the-art performance, which clearly demonstrates the effectiveness of the proposed approaches for medical image segmentation.
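One plausible reading of the subcategory decoupling above: each category owns several subcategory weight vectors, and the category score is aggregated over subcategory scores. The max-pooling aggregation and all shapes here are assumptions for illustration, not the paper's exact HFC design.

```python
import numpy as np

def hierarchical_fc(features, weights):
    """Hierarchical FC scoring (sketch): each category owns several
    subcategory weight vectors; a category's score is taken as the max
    over its subcategory scores.

    features: (N, D) pixel features.
    weights:  (C, S, D) for C categories with S subcategories each.
    """
    sub_scores = np.einsum('nd,csd->ncs', features, weights)  # (N, C, S)
    return sub_scores.max(axis=2)                             # (N, C)
```

Multiple weight vectors per category let the classifier carve a decision boundary that follows several distinct appearance modes within one class.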

19.
Automatic and accurate segmentation of anatomical structures in medical images is crucial for detecting various potential diseases. However, the segmentation performance of established deep neural networks may degrade on different modalities or devices owing to significant differences across domains, a problem known as domain shift. In this work, we propose an uncertainty-aware domain alignment framework to address the domain shift problem in the cross-domain Unsupervised Domain Adaptation (UDA) task. Specifically, we design an Uncertainty Estimation and Segmentation Module (UESM) to obtain the uncertainty map estimate. A novel Uncertainty-aware Cross Entropy (UCE) loss is then proposed to leverage the uncertainty information and boost segmentation performance in highly uncertain regions. To further improve performance in the UDA task, an Uncertainty-aware Self-Training (UST) strategy is developed to choose the optimal target samples by uncertainty guidance. In addition, an Uncertainty Feature Recalibration Module (UFRM) is applied to push the framework to minimize the cross-domain discrepancy. The proposed framework is evaluated on a private cross-device Optical Coherence Tomography (OCT) dataset and a public cross-modality cardiac dataset released by MMWHS 2017. Extensive experiments indicate that the proposed UESM is both efficient and effective for uncertainty estimation in the UDA task, achieving state-of-the-art performance on both cross-modality and cross-device datasets.
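One plausible form of the uncertainty-aware cross entropy mentioned above is a per-pixel cross entropy re-weighted by the estimated uncertainty, so highly uncertain regions contribute more to the loss. The specific weighting `1 + uncertainty` is an assumption, not the paper's exact UCE formula.

```python
import numpy as np

def uncertainty_weighted_ce(probs, target, uncertainty, eps=1e-8):
    """One plausible sketch of an uncertainty-aware cross entropy.

    probs: (N, C) softmax outputs; target: (N,) labels;
    uncertainty: (N,) per-pixel uncertainty in [0, 1], e.g. normalized
    predictive entropy from the estimation module.
    """
    ce = -np.log(probs[np.arange(len(target)), target] + eps)
    return ((1.0 + uncertainty) * ce).mean()  # up-weight uncertain pixels
```

With zero uncertainty everywhere, this reduces exactly to the standard mean cross entropy.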

20.
In this paper we present a new algorithm for 3D medical image segmentation. The algorithm is versatile, fast, relatively simple to implement, and semi-automatic. It is based on minimizing a global energy defined from a learned non-parametric estimation of the statistics of the region to be segmented. Implementation details are discussed and source code is freely available as part of the 3D Slicer project. In addition, a new unified set of validation metrics is proposed. Results on artificial and real MRI images show that the algorithm performs well on large brain structures both in terms of accuracy and robustness to noise.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号