Similar Articles
A total of 20 similar articles were found (search time: 15 ms).
1.
Multi-atlas segmentation provides a general purpose, fully-automated approach for transferring spatial information from an existing dataset (“atlases”) to a previously unseen context (“target”) through image registration. The method to resolve voxelwise label conflicts between the registered atlases (“label fusion”) has a substantial impact on segmentation quality. Ideally, statistical fusion algorithms (e.g., STAPLE) would result in accurate segmentations as they provide a framework to elegantly integrate models of rater performance. The accuracy of statistical fusion hinges upon accurately modeling the underlying process of how raters err. Despite success on human raters, current approaches inaccurately model multi-atlas behavior as they fail to seamlessly incorporate exogenous intensity information into the estimation process. As a result, locally weighted voting algorithms represent the de facto standard fusion approach in clinical applications. Moreover, regardless of the approach, fusion algorithms are generally dependent upon large atlas sets and highly accurate registration as they implicitly assume that the registered atlases form a collectively unbiased representation of the target. Herein, we propose a novel statistical fusion algorithm, Non-Local STAPLE (NLS). NLS reformulates the STAPLE framework from a non-local means perspective in order to learn what label an atlas would have observed, given perfect correspondence. Through this reformulation, NLS (1) seamlessly integrates intensity into the estimation process, (2) provides a theoretically consistent model of multi-atlas observation error, and (3) largely diminishes the need for large atlas sets and very high-quality registrations. We assess the sensitivity and optimality of the approach and demonstrate significant improvement in two empirical multi-atlas experiments.
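The non-local intuition above (for every target voxel, ask which nearby atlas patch it truly corresponds to, and weight that patch's label accordingly) can be sketched in a few lines. The following is a toy 1-D weighted-voting illustration with invented signals and parameters; the actual NLS model is a statistical rater-performance formulation, not plain weighted voting:

```python
import math

def non_local_fusion(target, atlases, labels, i, half_patch=1, half_search=2, sigma=0.5):
    """Fuse labels at target index i by searching each atlas for
    well-corresponding patches (non-local means style weighting).
    `target` and each atlas are 1-D intensity lists; `labels` holds the
    per-voxel labels of each atlas.  Toy 1-D sketch, not the full NLS model."""
    def patch(sig, c):
        # Extract a patch around c with clamped (replicated) borders.
        return [sig[max(0, min(len(sig) - 1, c + o))] for o in range(-half_patch, half_patch + 1)]
    votes = {}
    tp = patch(target, i)
    for a_int, a_lab in zip(atlases, labels):
        for j in range(max(0, i - half_search), min(len(a_int), i + half_search + 1)):
            d = sum((x - y) ** 2 for x, y in zip(tp, patch(a_int, j)))
            w = math.exp(-d / (2 * sigma ** 2))  # Gaussian of patch SSD
            votes[a_lab[j]] = votes.get(a_lab[j], 0.0) + w
    return max(votes, key=votes.get)
```

Even when the atlases are misregistered by a voxel in opposite directions, the search window lets each atlas contribute the label of its best-matching patch rather than the label at the (wrong) aligned location.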

2.
Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In the past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the estimated (patch) representation coefficients from the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to existing labeling methods to improve their label fusion performance, i.e., by extending their single-layer static dictionary to a multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results on the ADNI dataset compared to counterpart methods using only a single-layer static dictionary.
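The two-domain pipeline described above (estimate representation coefficients in the image domain, then apply them to atlas labels in the label domain) can be illustrated with a deliberately simplified single-layer sketch. Here the coefficients come from a softmax over patch distances, a hypothetical stand-in for the paper's dictionary/sparse-coding step:

```python
import math

def fuse_with_coefficients(target_patch, atlas_patches, atlas_labels, beta=1.0):
    """Estimate representation coefficients in the image domain (softmax of
    negative patch distances, a stand-in for sparse coding) and apply them
    to the atlas labels in the label domain.  A simplified sketch of the
    dictionary-based fusion the abstract describes, not the multi-layer method."""
    dists = [sum((t - a) ** 2 for t, a in zip(target_patch, p)) for p in atlas_patches]
    weights = [math.exp(-beta * d) for d in dists]
    z = sum(weights)
    coeffs = [w / z for w in weights]          # image-domain representation
    prob = sum(c, ) if False else sum(c * l for c, l in zip(coeffs, atlas_labels))
    # `prob` is the label-domain prediction P(label = 1) for a binary task.
    return (1 if prob >= 0.5 else 0), coeffs
```

The paper's point is precisely that coefficients fitted only to image appearance (as here) need not be optimal in the label domain, motivating the layer-by-layer transition.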

3.
《Medical image analysis》2014,18(6):881-890
Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability to prevent ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on simple patch similarity, thus not necessarily providing an optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, with the goal of labeling each point in the target image using the most representative atlas patches that also show the greatest unanimity in labeling the underlying point correctly. Specifically, a sparsity constraint is imposed upon the label fusion weights in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risk of including misleading atlas patches. Labeling unanimity among atlas patches is achieved by exploring their dependencies, which we model as the joint probability of each pair of atlas patches correctly predicting the labels, based on the correlation of their morphological error patterns and the labeling consensus among atlases. The patch dependencies are further recursively updated based on the latest labeling results to correct possible labeling errors, which naturally fits the Expectation-Maximization (EM) framework. To demonstrate the labeling performance, we comprehensively evaluated our patch-based labeling method on whole-brain parcellation and hippocampus segmentation. Promising labeling results were achieved in comparison with the conventional patch-based labeling method, indicating the potential of the proposed method for future clinical studies.
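A minimal sketch of the sparsity idea above: restrict the vote to the few atlas patches that best represent the target patch, so ambiguous or misleading patches cannot contribute. The top-k selection below is an invented simplification of the paper's sparsity constraint (the dependency modeling and EM updates are omitted):

```python
def sparse_patch_vote(target_patch, atlas_patches, atlas_labels, k=3):
    """Keep only the k atlas patches that best represent the target patch
    (a crude stand-in for the sparsity constraint) and fuse their labels by
    majority vote, so poorly matching patches are excluded entirely."""
    selected = sorted(
        range(len(atlas_patches)),
        key=lambda i: sum((t - a) ** 2 for t, a in zip(target_patch, atlas_patches[i])),
    )[:k]
    votes = [atlas_labels[i] for i in selected]
    return max(set(votes), key=votes.count)
```

In the usage below, a plain majority over all five patches would return label 0; restricting the vote to the three best-matching patches returns label 1, illustrating why excluding dissimilar patches changes the outcome.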

4.
Deep convolutional neural networks (DCNN) achieve very high accuracy in segmenting various anatomical structures in medical images but often suffer from relatively poor generalizability. Multi-atlas segmentation (MAS), while less accurate than DCNN in many applications, tends to generalize well to unseen datasets with characteristics different from the training dataset. Several groups have attempted to combine the power of DCNN to learn complex data representations with the robustness of MAS to changes in image characteristics. However, these studies primarily focused on replacing individual components of MAS with DCNN models and reported marginal improvements in accuracy. In this study, we describe and evaluate a 3D end-to-end hybrid MAS and DCNN segmentation pipeline, called Deep Label Fusion (DLF). The DLF pipeline consists of two main components with learnable weights: a weighted voting subnet that mimics the MAS algorithm, and a fine-tuning subnet that corrects residual segmentation errors to improve final segmentation accuracy. We evaluate DLF on five datasets that represent a diversity of anatomical structures (medial temporal lobe subregions and lumbar vertebrae) and imaging modalities (multi-modality, multi-field-strength MRI and Computed Tomography). These experiments show that DLF achieves segmentation accuracy comparable to nnU-Net (Isensee et al., 2020), the state-of-the-art DCNN pipeline, when evaluated on a dataset with characteristics similar to the training datasets, while outperforming nnU-Net on tasks that involve generalization to datasets with different characteristics (different MRI field strength or different patient population). DLF is also shown to consistently improve upon conventional MAS methods. In addition, a modality augmentation strategy tailored for multimodal imaging is proposed and demonstrated to be beneficial in improving the segmentation accuracy of learning-based methods, including DLF and DCNN, in missing-data scenarios at test time, as well as in increasing the interpretability of each individual modality's contribution.
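The modality augmentation strategy can be sketched as channel-level dropout: randomly zeroing whole modality channels during training so the model does not over-rely on any one modality and stays usable when a modality is missing at test time. This is an assumed simplification, not necessarily the authors' exact scheme:

```python
import random

def modality_dropout(image_channels, p_drop=0.5, rng=None):
    """Randomly zero out whole modality channels during training; at least
    one channel is always kept so the sample stays informative.  A sketch of
    the kind of modality augmentation the abstract proposes."""
    rng = rng or random.Random()
    keep = [rng.random() >= p_drop for _ in image_channels]
    if not any(keep):
        keep[rng.randrange(len(keep))] = True  # never drop everything
    return [ch if k else [0.0] * len(ch) for ch, k in zip(image_channels, keep)]
```

At test time, a missing modality is then represented the same way (an all-zero channel), which the network has already seen during training.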

5.
《Medical image analysis》2015,19(8):1262-1273
We propose a method for multi-atlas label propagation (MALP) based on encoding the individual atlases by randomized classification forests. Most current approaches perform a non-linear registration between all atlases and the target image, followed by a sophisticated fusion scheme. While these approaches can achieve high accuracy, in general they do so at high computational cost. This might negatively affect the scalability to large databases and experimentation. To tackle this issue, we propose to use a small and deep classification forest to encode each atlas individually in reference to an aligned probabilistic atlas, resulting in an Atlas Forest (AF). Our classifier-based encoding differs from current MALP approaches, which represent each point in the atlas either directly as a single image/label value pair, or by a set of corresponding patches. At test time, each AF produces one probabilistic label estimate, and their fusion is done by averaging. Our scheme performs only one registration per target image, achieves good results with a simple fusion scheme, and allows for efficient experimentation. In contrast to standard forest schemes, in which each tree would be trained on all atlases, our approach retains the advantages of the standard MALP framework. The target-specific selection of atlases remains possible, and incorporation of new scans is straightforward without retraining. The evaluation on four different databases shows accuracy within the range of the state of the art at a significantly lower running time.
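The Atlas Forest recipe (one small classifier per atlas, probabilistic outputs fused by averaging) can be sketched with a stand-in classifier. A nearest-centroid model replaces the classification forest here purely to keep the example self-contained; the fusion-by-averaging and the one-model-per-atlas structure are the points being illustrated:

```python
class CentroidAtlasClassifier:
    """One tiny classifier trained on a single atlas (nearest-centroid here,
    standing in for the per-atlas classification forest).  Each classifier
    emits a probabilistic label estimate over scalar features."""
    def fit(self, features, labels):
        groups = {}
        for f, l in zip(features, labels):
            groups.setdefault(l, []).append(f)
        self.centroids = {l: sum(v) / len(v) for l, v in groups.items()}
        return self

    def predict_proba(self, x):
        # Closer centroid -> higher probability (inverse-distance weighting).
        inv = {l: 1.0 / (1e-9 + abs(x - c)) for l, c in self.centroids.items()}
        z = sum(inv.values())
        return {l: v / z for l, v in inv.items()}

def atlas_forest_fuse(classifiers, x):
    """Fusion is a plain average of the per-atlas probability estimates."""
    probs = {}
    for clf in classifiers:
        for l, p in clf.predict_proba(x).items():
            probs[l] = probs.get(l, 0.0) + p / len(classifiers)
    return max(probs, key=probs.get)
```

Because each atlas owns its own model, target-specific atlas selection is just choosing which classifiers to pass to `atlas_forest_fuse`, and adding a new atlas never requires retraining the others.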

6.

7.
8.
《Medical image analysis》2014,18(1):118-129
Comprehensive visual and quantitative analysis of in vivo human mitral valve morphology is central to the diagnosis and surgical treatment of mitral valve disease. Real-time 3D transesophageal echocardiography (3D TEE) is a practical, highly informative imaging modality for examining the mitral valve in a clinical setting. To facilitate visual and quantitative 3D TEE image analysis, we describe a fully automated method for segmenting the mitral leaflets in 3D TEE image data. The algorithm integrates complementary probabilistic segmentation and shape modeling techniques (multi-atlas joint label fusion and deformable modeling with continuous medial representation) to automatically generate 3D geometric models of the mitral leaflets from 3D TEE image data. These models are unique in that they establish a shape-based coordinate system on the valves of different subjects and represent the leaflets volumetrically, as structures with locally varying thickness. In this work, expert image analysis is the gold standard for evaluating automatic segmentation. Without any user interaction, we demonstrate that the automatic segmentation method accurately captures patient-specific leaflet geometry at both systole and diastole in 3D TEE data acquired from a mixed population of subjects with normal valve morphology and mitral valve disease.

9.
We developed a novel method for spatially-local selection of atlas-weights in multi-atlas segmentation that combines supervised learning on a training set and dynamic information in the form of local registration accuracy estimates (SuperDyn). Supervised learning was applied using a jackknife learning approach and the methods were evaluated using leave-N-out cross-validation. We applied our segmentation method to hippocampal segmentation in 1.5T and 3T MRI from two datasets: 69 healthy middle-aged subjects (aged 44-49) and 37 healthy and cognitively-impaired elderly subjects (aged 72-84). Mean Dice overlap scores (left hippocampus, right hippocampus) of (83.3, 83.2) and (85.1, 85.3) from the respective datasets were found to be significantly higher than those obtained via equally-weighted fusion, STAPLE, and dynamic fusion. In addition to global surface distance and volume metrics, we also investigated accuracy at a spatially-local scale using a surface-based segmentation performance assessment method (SurfSPA), which generates cohort-specific maps of segmentation accuracy quantified by inward or outward displacement relative to the manual segmentations. These measurements indicated greater agreement with manual segmentation and lower variability for the proposed segmentation method, as compared to equally-weighted fusion.
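The Dice overlap score reported above (as a percentage) is twice the intersection size over the sum of the two segmentation sizes; a minimal implementation:

```python
def dice(seg_a, seg_b):
    """Dice overlap between two binary segmentations, given as collections
    of voxel indices: 2|A ∩ B| / (|A| + |B|)."""
    a, b = set(seg_a), set(seg_b)
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))
```

For example, two four-voxel segmentations sharing two voxels score 0.5 (i.e. 50 on the percentage scale used in the abstract).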

10.
The recent increased interest in information fusion methods for solving complex problems, such as those in image analysis, is motivated by the wish to better exploit the multitude of information available from different sources to enhance decision-making. In this paper, we propose a novel method, based on a special class of probabilistic graphical models called causal independence models, that advances the state of the art in fusing image information from different views. The strength of this method is its ability to systematically and naturally capture uncertain domain knowledge while performing information fusion in a computationally efficient way. We examine the value of the method for mammographic analysis and demonstrate its advantages in terms of explicit knowledge representation and accuracy (increases of at least 6.3% and 5.2% in true-positive detection rates at 5% and 10% false-positive rates) in comparison with previous single-view and multi-view systems, and with benchmark fusion methods such as naïve Bayes and logistic regression.
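The canonical causal independence model is the noisy-OR gate, in which each source (here, a per-view detection probability) independently suffices to produce the effect. The paper's models are more general than this, but noisy-OR conveys the mechanics of causal-independence fusion:

```python
def noisy_or(leak, probs):
    """Noisy-OR, the classic causal-independence model: each view/feature i
    independently 'causes' a positive finding with probability probs[i], and
    `leak` is the probability the finding occurs with no observed cause.
    P(positive) = 1 - (1 - leak) * prod(1 - p_i)."""
    acc = 1.0 - leak
    for p in probs:
        acc *= 1.0 - p
    return 1.0 - acc
```

With no leak, two views each giving probability 0.5 fuse to 0.75: either view alone is enough to raise suspicion, which is exactly the behavior wanted when combining mammographic views.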

11.
Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and target scan, which is often problematic in medical imaging – in particular, when the atlases and target images are obtained via different sensor types or imaging protocols. In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registrations and label fusion steps simultaneously. The proposed model does not directly rely on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion; hence the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing interdependence between the registrations. We use variational expectation maximization and the Demons registration framework in order to efficiently identify the most probable segmentation and registrations. We use two sets of experiments to illustrate the approach, where proton density (PD) MRI atlases are used to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gain due to exploiting within-target intensity consistency and integrating registration into label fusion.

12.
Automatic tracking of viral structures displayed as small spots in fluorescence microscopy images is an important task for determining quantitative information about cellular processes. We introduce a novel probabilistic approach for tracking multiple particles based on multi-sensor data fusion and Bayesian smoothing methods. The approach exploits multiple measurements as in a particle filter: both detection-based measurements and prediction-based measurements from a Kalman filter using probabilistic data association with elliptical sampling. Compared to previous probabilistic tracking methods, our approach exploits separate uncertainties for the detection-based and prediction-based measurements, and integrates them by a sequential multi-sensor data fusion method. In addition, information from both past and future time points is taken into account by a Bayesian smoothing method in conjunction with the covariance intersection algorithm for data fusion. Also, motion information based on displacements is used to improve correspondence finding. Our approach has been evaluated on data from the Particle Tracking Challenge and yielded state-of-the-art results or outperformed previous approaches. We also applied it to challenging time-lapse fluorescence microscopy data of human immunodeficiency virus type 1 and hepatitis C virus proteins acquired with different types of microscopes and spatio-temporal resolutions, where it outperformed existing methods.
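The covariance intersection step used for fusing estimates with unknown cross-correlation can be sketched in the scalar case (the paper operates on full state vectors and covariance matrices). Note that scalar covariance intersection degenerates to selecting the lower-variance estimate, which the grid search below makes visible:

```python
def covariance_intersection(x1, p1, x2, p2, steps=101):
    """Scalar covariance intersection: fuse two estimates with variances
    p1, p2 when their cross-correlation is unknown.
        1/P = w/p1 + (1-w)/p2,   x = P * (w*x1/p1 + (1-w)*x2/p2),
    with w in [0, 1] chosen (by grid search here) to minimise the fused
    variance P.  In the scalar case the optimum sits at w = 0 or w = 1,
    i.e. CI falls back to the more certain of the two estimates."""
    best = None
    for i in range(steps):
        w = i / (steps - 1)
        inv = w / p1 + (1.0 - w) / p2
        if inv <= 0:
            continue
        P = 1.0 / inv
        x = P * (w * x1 / p1 + (1.0 - w) * x2 / p2)
        if best is None or P < best[1]:
            best = (x, P)
    return best  # (fused estimate, fused variance)
```

The appeal of CI over a naive Kalman-style combination is that the fused variance is guaranteed consistent even when the two inputs (e.g. forward- and backward-smoothed states) share unknown correlated error.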

13.
Regions in three-dimensional magnetic resonance (MR) brain images can be classified using protocols for manually segmenting and labeling structures. For large cohorts, time and expertise requirements make this approach impractical. To achieve automation, an individual segmentation can be propagated to another individual using an anatomical correspondence estimate relating the atlas image to the target image. The accuracy of the resulting target labeling has been limited but can potentially be improved by combining multiple segmentations using decision fusion. We studied segmentation propagation and decision fusion on 30 normal brain MR images, which had been manually segmented into 67 structures. Correspondence estimates were established by nonrigid registration using free-form deformations. Both direct label propagation and an indirect approach were tested. Individual propagations showed an average similarity index (SI) of 0.754 ± 0.016 against manual segmentations. Decision fusion using 29 input segmentations increased SI to 0.836 ± 0.009. For indirect propagation of a single source via 27 intermediate images, SI was 0.779 ± 0.013. We also studied the effect of the decision fusion procedure using a numerical simulation with synthetic input data. The results helped to formulate a model that predicts the quality improvement of fused brain segmentations based on the number of individual propagated segmentations combined. We demonstrate a practicable procedure that exceeds the accuracy of previous automatic methods and can compete with manual delineations.
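Decision fusion over propagated segmentations reduces, in its simplest form, to a per-voxel majority vote across the input labelings:

```python
def majority_vote(segmentations):
    """Decision fusion of propagated segmentations by per-voxel majority
    vote.  Each segmentation is a flat list of labels over the same voxel
    grid; ties resolve to whichever tied label is seen first."""
    fused = []
    for voxel_labels in zip(*segmentations):
        counts = {}
        for l in voxel_labels:
            counts[l] = counts.get(l, 0) + 1
        fused.append(max(counts, key=counts.get))
    return fused
```

Independent registration errors tend to disagree voxel by voxel, so the vote suppresses them, which is the mechanism behind the SI improvement from 0.754 to 0.836 reported above.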

14.
15.

Purpose

Automated segmentation is required for radiotherapy treatment planning, and multi-atlas methods are frequently used for this purpose. The combination of multiple intermediate results from multi-atlas segmentation into a single segmentation map can be achieved by label fusion. A method that includes expert knowledge in the label fusion phase of multi-atlas-based segmentation was developed. The method was tested by application to prostate segmentation, and the accuracy was compared to standard techniques.

Methods

The selective and iterative method for performance level estimation (SIMPLE) algorithm for label fusion was modified with an expert-specified voxel-based weight map that indicates the importance of each region when evaluating segmentation results. These weights incorporate expert knowledge of accuracy requirements in different regions of a segmentation. Using this knowledge, segmentation accuracy in regions known to be important can be improved by sacrificing segmentation accuracy in less important regions. Contextual information, such as the presence of vulnerable tissue, is thereby used in the segmentation process. This use of weight maps to fine-tune the result of multi-atlas-based segmentation was tested on a set of 146 atlas images, each consisting of an MR image of the lower abdomen and a prostate segmentation. Each image served as the target in a set of leave-one-out experiments, which were repeated with a weight map derived from clinical practice in our hospital.

Results

The segmentation accuracy increased by 6% in regions bordering vulnerable tissue when using expert-specified voxel-based weight maps. This was achieved at the cost of a 4% decrease in accuracy in less clinically relevant regions.

Conclusion

The inclusion of expert knowledge in a multi-atlas-based segmentation procedure was shown to be feasible for prostate segmentation. This method allows an expert to ensure that automatic segmentation is most accurate in critical regions. This improved local accuracy can increase the practical value of automatic segmentation.
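The core of the modification can be sketched as a voxel-weighted overlap score: when candidate segmentations are ranked during fusion, errors in highly weighted (clinically critical) regions cost more. This weighted Dice is an assumed illustration of the weighting idea, not the exact modified-SIMPLE criterion:

```python
def weighted_dice(auto_seg, ref_seg, weight_map):
    """Voxel-weighted Dice for scoring candidate segmentations during
    fusion.  `auto_seg` and `ref_seg` are binary label lists over the same
    grid; `weight_map` gives each voxel's clinical importance, so agreement
    or error near vulnerable tissue dominates the score."""
    inter = sum(w for a, r, w in zip(auto_seg, ref_seg, weight_map) if a == r == 1)
    size = sum(w * a for a, w in zip(auto_seg, weight_map)) + \
           sum(w * r for r, w in zip(ref_seg, weight_map))
    return 2.0 * inter / size if size else 1.0
```

Upweighting a voxel where the candidate and reference agree raises the score relative to the unweighted case, which is exactly how an expert weight map steers the fusion toward accuracy in critical regions.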

16.
We propose an endoscopic image mosaicking algorithm that is robust to changes in lighting conditions, specular reflections, and feature-less scenes. These conditions are especially common in minimally invasive surgery, where the light source moves with the camera to dynamically illuminate close-range scenes. This makes it difficult for a single image registration method to robustly track camera motion and then generate consistent mosaics of the expanded surgical scene across different and heterogeneous environments. Instead of relying on one specialised feature extractor or image registration method, we propose to fuse different image registration algorithms according to their uncertainties, formulating the problem as affine pose graph optimisation. This allows us to combine landmark-based, dense intensity-based, and learning-based approaches in a single framework. To demonstrate our application we consider deep learning-based optical flow, hand-crafted features, and intensity-based registration; however, the framework is general and could take as input other sources of motion estimation, including other sensor modalities. We validate the performance of our approach on three datasets with very different characteristics to highlight its generalisability, demonstrating the advantages of our proposed fusion framework. While each individual registration algorithm eventually fails drastically on certain surgical scenes, the fusion approach flexibly determines which algorithms to use and in which proportion, to more robustly obtain consistent mosaics.

17.
In this paper, we propose and validate a deep learning framework that incorporates both multi-atlas registration and level-set methods for segmenting the pancreas from CT volume images. The proposed segmentation pipeline consists of three stages, namely coarse, fine, and refine stages. Firstly, a coarse segmentation is obtained through multi-atlas based 3D diffeomorphic registration and fusion. After that, to learn connecting features, a 3D patch-based convolutional neural network (CNN) and three 2D slice-based CNNs are jointly used to predict a fine segmentation based on a bounding box determined from the coarse segmentation. Finally, a 3D level-set method is used, with the fine segmentation being one of its constraints, to integrate information from the original image and the CNN-derived probability map to achieve a refined segmentation. In other words, we jointly utilize global 3D location information (registration), contextual information (patch-based 3D CNN), shape information (slice-based 2.5D CNN), and edge information (3D level-set) in the proposed framework. These components form our cascaded coarse-fine-refine segmentation framework. We test the proposed framework on three different datasets with varying intensity ranges obtained from different sources, respectively containing 36, 82, and 281 CT volume images. On each dataset, we achieve an average Dice score above 82%, superior or comparable to other existing state-of-the-art pancreas segmentation algorithms.

18.
The incorporation of intensity, spatial, and topological information into large-scale multi-region segmentation has been a topic of ongoing research in medical image analysis. Multi-region segmentation problems, such as segmentation of brain structures, pose unique challenges in image segmentation, in which regions may not have a defined intensity, spatial, or topological distinction alone but rely on a combination of the three. We propose a novel framework within the Advanced Segmentation Tools (ASETS), which combines large-scale Gaussian mixture models trained via Kohonen self-organizing maps with deformable registration and a convex max-flow optimization algorithm incorporating region topology as a hierarchy or tree. Our framework is validated on two publicly available neuroimaging datasets, the OASIS and MRBrainS13 databases, against the more conventional Potts model, achieving more accurate segmentations. Each component is accelerated using general-purpose programming on graphics processing units (GPUs) to ensure computational feasibility.

19.

Background

Circulating biomarkers can facilitate sepsis diagnosis, enabling early management and improved outcomes. Procalcitonin (PCT) has been suggested to have superior diagnostic utility compared to other biomarkers.

Study Objectives

To define the discriminative value of PCT, interleukin-6 (IL-6), and C-reactive protein (CRP) for suspected sepsis.

Methods

PCT, CRP, and IL-6 were correlated with infection likelihood, sepsis severity, and septicemia. Multivariable models were constructed for length-of-stay and discharge to a higher level of care.

Results

Of 336 enrolled subjects, 60% had definite infection, 13% possible infection, and 27% no infection. Of those with infection, 202 presented with sepsis, 28 with severe sepsis, and 17 with septic shock. Overall, 21% of subjects were septicemic. PCT, IL-6, and CRP levels were higher in septicemia (median PCT 2.3 vs. 0.2 ng/mL; IL-6 178 vs. 72 pg/mL; CRP 106 vs. 62 mg/dL; p < 0.001). Biomarker concentrations increased with likelihood of infection and sepsis severity. In receiver operating characteristic analysis, PCT best predicted septicemia (area under the curve 0.78 vs. 0.70 for IL-6 and 0.67 for CRP), but CRP better identified clinical infection (0.75 vs. 0.71 for PCT and 0.69 for IL-6). A PCT cutoff of 0.5 ng/mL had 72.6% sensitivity and 69.5% specificity for bacteremia, as well as 40.7% sensitivity and 87.2% specificity for diagnosing infection. A combined clinical-biomarker model revealed that CRP was marginally associated with length of stay (p = 0.015), but no biomarker independently predicted discharge to a higher level of care.

Conclusions

In adult emergency department patients with suspected sepsis, PCT, IL-6, and CRP correlate strongly with several infection parameters but are insufficiently discriminating to be used independently as diagnostic tools.
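The reported cutoff statistics (e.g. 72.6% sensitivity and 69.5% specificity for a PCT cutoff of 0.5 ng/mL) come from the standard sensitivity/specificity computation at a threshold; a sketch on invented toy data:

```python
def sens_spec(values, truth, cutoff):
    """Sensitivity and specificity of a biomarker at a given cutoff.
    `values` are biomarker measurements, `truth` are disease labels, and a
    value >= cutoff counts as test-positive."""
    tp = sum(1 for v, t in zip(values, truth) if v >= cutoff and t)
    fn = sum(1 for v, t in zip(values, truth) if v < cutoff and t)
    tn = sum(1 for v, t in zip(values, truth) if v < cutoff and not t)
    fp = sum(1 for v, t in zip(values, truth) if v >= cutoff and not t)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec
```

Sweeping the cutoff and plotting sensitivity against (1 - specificity) yields the receiver operating characteristic curve whose areas are compared in the abstract.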

20.
In hyperspectral image (HSI) classification, it is important to combine multiple features of a given pixel in both the spatial and spectral domains to improve classification accuracy. To achieve this goal, this article proposes a novel spatial-spectral feature dimensionality reduction algorithm based on manifold learning. For each feature, a graph Laplacian matrix is constructed based on discriminative information from training samples, and the graph Laplacian matrices of the various features are then linearly combined using a set of empirically defined weights. Finally, the feature mapping is obtained by solving an eigen-decomposition problem. Classification results on the public Indiana Airborne Visible Infrared Imaging Spectrometer dataset and the Texas Hyperspectral Digital Imagery Collection Experiment dataset show that our method achieves superior accuracy compared to several representative HSI feature extraction and dimensionality reduction algorithms.
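The fusion step can be sketched directly: build one graph Laplacian per feature (L = D - W) and combine them linearly with the chosen weights. The final eigen-decomposition that yields the mapping is omitted here to keep the sketch dependency-free:

```python
def laplacian(adjacency):
    """Unnormalised graph Laplacian L = D - W for a symmetric adjacency
    matrix (given as a list of rows, with zero diagonal)."""
    n = len(adjacency)
    return [[(sum(adjacency[i]) if i == j else 0.0) - adjacency[i][j]
             for j in range(n)] for i in range(n)]

def combine_laplacians(laplacians, weights):
    """Linear combination L = sum_f w_f * L_f across per-feature graphs,
    as in the spatial-spectral fusion step of the abstract."""
    n = len(laplacians[0])
    return [[sum(w * L[i][j] for w, L in zip(weights, laplacians))
             for j in range(n)] for i in range(n)]
```

A useful sanity check is that every Laplacian row sums to zero, and this property is preserved by any linear combination, so the combined matrix is still a valid Laplacian for the eigen-decomposition stage.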
