Similar Documents
20 similar documents were retrieved (search time: 11 ms).
1.
An early vision-based snake model for ultrasound image segmentation   (cited 10 times: 0 self-citations, 10 citations by others)
Due to speckle and the ill-defined edges of the object of interest, classic image-segmentation techniques are usually ineffective in segmenting ultrasound (US) images. In this paper, we present a new algorithm for segmenting general US images that is composed of two major techniques: the early-vision model and the discrete-snake model. By simulating human early vision, the early-vision model can capture both grey-scale and textural edges while suppressing speckle noise. By performing deformation only on the peaks of the distance map, the discrete-snake model promises better noise immunity and more accurate convergence. Moreover, the constraint of most conventional snake models that the initial contour must be located very close to the actual boundary is relaxed substantially. The performance of the proposed snake model has been shown to be comparable to manual delineation and superior to that of the gradient vector flow (GVF) snake model.
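
A minimal sketch (not the authors' implementation) of one ingredient mentioned above: computing a distance map from an edge map and locating its peaks, over which a discrete snake could then deform. The edge map, neighborhood size, and toy boundary are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def distance_map_peaks(edge_map, peak_neighborhood=5):
    """Distance transform away from detected edges, plus its local maxima.

    edge_map: 2D boolean array, True where an edge was detected.
    Returns (distance_map, peak_mask).
    """
    # Distance from every pixel to the nearest edge pixel.
    dist = ndimage.distance_transform_edt(~edge_map)
    # A pixel is a peak if it equals the maximum of its local neighborhood.
    local_max = ndimage.maximum_filter(dist, size=peak_neighborhood)
    peaks = (dist == local_max) & (dist > 0)
    return dist, peaks

if __name__ == "__main__":
    # Toy example: a single circular "boundary" as the edge map.
    yy, xx = np.mgrid[0:128, 0:128]
    edges = np.abs(np.hypot(yy - 64, xx - 64) - 40) < 1.0
    dist, peaks = distance_map_peaks(edges)
    print("number of peak pixels:", int(peaks.sum()))
```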

2.
Many estimation tasks require Bayesian classifiers capable of adjusting their performance (e.g. sensitivity/specificity). In situations where the optimal classification decision can be identified by an exhaustive search over all possible classes, means for adjusting classifier performance, such as probability thresholding or weighting the a posteriori probabilities, are well established. Unfortunately, analogous methods compatible with Markov random fields (i.e. large collections of dependent random variables) are noticeably absent from the literature. Consequently, most Markov random field (MRF) based classification systems typically restrict their performance to a single, static operating point (i.e. a paired sensitivity/specificity). To address this deficiency, we previously introduced an extension of maximum posterior marginals (MPM) estimation that allows certain classes to be weighted more heavily than others, thus providing a means for varying classifier performance. However, this extension is not appropriate for the more popular maximum a posteriori (MAP) estimation. Thus, a strategy for varying the performance of MAP estimators is still needed. Such a strategy is essential for several reasons: (1) the MAP cost function may be more appropriate in certain classification tasks than the MPM cost function, (2) the literature provides a surfeit of MAP estimation implementations, several of which are considerably faster than the typical Markov Chain Monte Carlo methods used for MPM, and (3) MAP estimation is used far more often than MPM. Consequently, in this paper we introduce multiplicative weighted MAP (MWMAP) estimation, achieved via the incorporation of multiplicative weights into the MAP cost function, which allows certain classes to be preferred over others. This creates a natural bias for specific classes, and consequently a means for adjusting classifier performance. Similarly, we show how this multiplicative weighting strategy can be applied to the MPM cost function (in place of the strategy we presented previously), yielding multiplicative weighted MPM (MWMPM) estimation. Furthermore, we describe how MWMAP and MWMPM can be implemented using adaptations of current estimation strategies such as iterated conditional modes and MPM Monte Carlo. To illustrate these implementations, we first integrate them into two separate MRF-based classification systems for detecting carcinoma of the prostate (CaP) on (1) digitized histological sections from radical prostatectomies and (2) T2-weighted 4 Tesla ex vivo prostate MRI. To highlight the extensibility of MWMAP and MWMPM to estimation tasks involving more than two classes, we also incorporate these estimation criteria into a MRF-based classifier used to segment synthetic brain MR images. In the context of these tasks, we show how our novel estimation criteria can be used to arbitrarily adjust the sensitivities of these systems, yielding receiver operating characteristic curves (and surfaces).
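
As a rough illustration of the general idea of biasing a MAP estimator toward certain classes, the sketch below runs iterated conditional modes (ICM) on a two-class MRF with a Potts smoothness prior and a per-class multiplicative weight on the likelihood term. The Gaussian likelihood, Potts prior, and exact weighting scheme are assumptions of this example, not the MWMAP formulation of the paper.

```python
import numpy as np

def weighted_icm(image, means, sigmas, class_weights, beta=1.0, n_iter=5):
    """ICM for a two-class MRF with a multiplicative per-class weight on the likelihood.

    image: 2D float array; means, sigmas, class_weights: length-2 sequences.
    Increasing class_weights[k] biases the estimate toward class k.
    """
    # Initialise with the (unweighted) maximum-likelihood label.
    labels = np.argmin(
        [((image - m) / s) ** 2 for m, s in zip(means, sigmas)], axis=0)
    for _ in range(n_iter):
        padded = np.pad(labels, 1, mode="edge")
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                neigh = [padded[i, j + 1], padded[i + 2, j + 1],
                         padded[i + 1, j], padded[i + 1, j + 2]]
                costs = []
                for k in range(2):
                    nll = 0.5 * ((image[i, j] - means[k]) / sigmas[k]) ** 2
                    nll -= np.log(class_weights[k])                 # multiplicative class bias
                    prior = beta * sum(1 for n in neigh if n != k)  # Potts disagreement penalty
                    costs.append(nll + prior)
                labels[i, j] = int(np.argmin(costs))
                padded[i + 1, j + 1] = labels[i, j]
    return labels
```

Sweeping class_weights (e.g. from (1, 0.5) to (1, 2)) traces out different operating points of the classifier, which is the behaviour the paper exploits to build ROC curves.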

3.
Due to serious speckle noise in synthetic aperture radar (SAR) images, segmentation of SAR images remains a challenging problem. In this paper, a novel region merging method based on perceptual hashing is proposed for SAR image segmentation. In the proposed method, a perceptual hash algorithm (PHA) is used to calculate the degree of similarity between different regions during region merging. After reducing the speckle noise with a Lee filter, which maintains the sharpness of the SAR image, a set of homogeneous regions is constructed by multi-thresholding and treated as the input to region merging. The new contribution of this paper is the combination of multi-thresholding for initial segmentation with a perceptual hash method for the adaptive region-merging process, which preserves the texture features of the input images and reduces the time complexity of the method. Experimental results on synthetic and real SAR images show that the proposed algorithm is faster and attains higher-quality segmentation results than three recent state-of-the-art image segmentation methods.
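
For illustration only, the following sketch computes a DCT-based perceptual hash for two image regions and compares them by bit agreement; this is the generic pHash recipe, not necessarily the PHA variant or similarity measure used in the paper, and the 32/8 grid sizes are conventional assumptions.

```python
import numpy as np
from scipy.fft import dctn

def perceptual_hash(region, hash_size=8, resize_to=32):
    """64-bit DCT perceptual hash of a 2D grayscale region."""
    # Crude nearest-neighbour resampling to a fixed grid (no external dependencies).
    rows = np.linspace(0, region.shape[0] - 1, resize_to).astype(int)
    cols = np.linspace(0, region.shape[1] - 1, resize_to).astype(int)
    small = region[np.ix_(rows, cols)].astype(float)
    coeffs = dctn(small, norm="ortho")[:hash_size, :hash_size]  # keep low frequencies
    return (coeffs > np.median(coeffs)).flatten()

def hash_similarity(region_a, region_b):
    """Fraction of matching hash bits; 1.0 means perceptually very similar."""
    ha, hb = perceptual_hash(region_a), perceptual_hash(region_b)
    return float(np.mean(ha == hb))
```

In a region-merging loop, two adjacent regions whose bit agreement exceeds some threshold (e.g. 0.9) would be candidates for merging.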

4.
Xue Z, Shen D, Davatzikos C. NeuroImage, 2006, 30(2): 388-399.
This paper proposes a temporally consistent and spatially adaptive longitudinal MR brain image segmentation algorithm, referred to as CLASSIC, which aims at obtaining accurate measurements of rates of change of regional and global brain volumes from serial MR images. The algorithm incorporates image-adaptive clustering, spatiotemporal smoothness constraints, and image warping to jointly segment a series of 3-D MR brain images of the same subject that might be undergoing changes due to development, aging, or disease. Morphological changes, such as growth or atrophy, are also estimated as part of the algorithm. Experimental results on simulated and real longitudinal MR brain images show both segmentation accuracy and longitudinal consistency.

5.
Strong prior models are a prerequisite for reliable spatio-temporal cardiac image analysis. While several cardiac models have been presented in the past, many of them are either too complex for their parameters to be estimated solely from MR images, or overly simplified. In this paper, we present a novel dynamic model based on the equation of dynamics for elastic materials and on Fourier filtering. The explicit use of dynamics allows us to enforce periodicity and temporal smoothness constraints. We propose an algorithm to solve the continuous dynamical problem associated with numerically adapting the model to the image sequence. Using a simple 1D example, we show how temporal filtering can help remove noise while ensuring the periodicity and smoothness of solutions. The proposed dynamic model is quantitatively evaluated on a database of 15 patients, which shows its performance and limitations. The ability of the model to capture cardiac motion is also demonstrated on synthetic cardiac sequences. Moreover, the existence and uniqueness of the solution and the numerical convergence of the algorithm can be demonstrated.
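
The 1D temporal-filtering idea can be illustrated in a few lines of NumPy: keeping only the first few harmonics of a periodic signal removes noise while enforcing periodicity and smoothness by construction. The number of harmonics retained here is an arbitrary choice for the example, not a value from the paper.

```python
import numpy as np

def harmonic_filter(signal, n_harmonics=3):
    """Keep only the DC term and the first n_harmonics of a periodic 1D signal."""
    spectrum = np.fft.rfft(signal)
    filtered = np.zeros_like(spectrum)
    filtered[:n_harmonics + 1] = spectrum[:n_harmonics + 1]
    return np.fft.irfft(filtered, n=len(signal))

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
noisy = np.sin(t) + 0.3 * np.sin(2 * t) + 0.2 * np.random.randn(t.size)
smooth = harmonic_filter(noisy)   # periodic and smooth by construction
```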

6.
The quality of MRI time-series data, which allows the study of dynamic processes, is often affected by confounding sources of signal fluctuation, including the cardiac and respiratory cycles. An adaptive filter is described that reduces these signal fluctuations as long as they are repetitive and their timing is known. The filter, applied in the image domain, does not require temporal oversampling of the artifact-related fluctuations. Performance is demonstrated for suppression of cardiac and respiratory artifacts in 10-minute brain scans on 6 normal volunteers. Experimental parameters resemble a typical fMRI experiment (17 slices; 1700 ms TR). A second dataset was acquired at a rate well above the Nyquist frequency for both the cardiac and respiratory cycles (single slice; 100 ms TR), allowing identification of artifacts specific to the cardiac and respiratory cycles and aiding assessment of filtering performance. Results show a significant reduction in temporal standard deviation (SD(t)) in all subjects. For all 6 datasets with 1700 ms TR combined, the filtering method resulted in an average reduction in SD(t) of 9.2% in the 2046 voxels substantially affected by respiratory artifacts, and 12.5% in the 864 voxels containing substantial cardiac artifacts. The maximal SD(t) reduction achieved was 52.7% for respiratory and 55.3% for cardiac filtering. Performance was found to be at least equivalent to the previously published RETROICOR method. Furthermore, the interaction between the filter and fMRI activity detection was investigated using Monte Carlo simulations, demonstrating that filtering algorithms introduce a systematic error in the detected BOLD-related signal change if applied sequentially. It is demonstrated that this can be overcome by performing physiological artifact filtering and detection of BOLD-related signal changes simultaneously. Visual fMRI data from 6 volunteers were analyzed with and without the proposed filter. Inclusion of the cardio-respiratory regressors in the design matrix yielded a 4.6% increase in t-scores and a 4.0% increase in the number of significantly activated voxels.
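
A minimal sketch of the "regressors in the design matrix" idea mentioned at the end: RETROICOR-style Fourier regressors built from cardiac and respiratory phases are appended to a task design matrix and fitted jointly by least squares. How the phases are estimated and the choice of two harmonics are assumptions of this example, not the paper's adaptive filter.

```python
import numpy as np

def physio_regressors(cardiac_phase, resp_phase, n_harmonics=2):
    """Sine/cosine regressors of the cardiac and respiratory phase at each TR."""
    cols = []
    for phase in (cardiac_phase, resp_phase):
        for h in range(1, n_harmonics + 1):
            cols += [np.sin(h * phase), np.cos(h * phase)]
    return np.column_stack(cols)

def fit_glm(voxel_timeseries, task_regressor, cardiac_phase, resp_phase):
    """Fit the task and physiological regressors simultaneously; return the task beta."""
    X = np.column_stack([
        np.ones_like(task_regressor),                   # intercept
        task_regressor,                                 # modelled BOLD response
        physio_regressors(cardiac_phase, resp_phase),   # physiological confounds
    ])
    betas, *_ = np.linalg.lstsq(X, voxel_timeseries, rcond=None)
    return betas[1]
```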

7.
In recent years, deep learning technology has shown superior performance in different fields of medical image analysis. Several deep learning architectures have been proposed and used for computational pathology classification, segmentation, and detection tasks. Due to their simple, modular structure, most downstream applications still use ResNet and its variants as the backbone network. This paper proposes a modular group attention block that can capture feature dependencies in medical images along two independent dimensions: channel and space. By stacking these group attention blocks in ResNet style, we obtain a new ResNet variant called ResGANet. The stacked ResGANet architecture has 1.51-3.47 times fewer parameters than the original ResNet and can be directly used for downstream medical image segmentation tasks. Extensive experiments show that the proposed ResGANet is superior to state-of-the-art backbone models in medical image classification tasks. Applying it to different segmentation networks improves the baseline model in medical image segmentation tasks without changing the network architecture. We hope that this work provides a promising method for enhancing the feature representation of convolutional neural networks (CNNs) in the future.
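
The abstract does not spell out the block design, but the general pattern of an attention module acting along the channel and spatial dimensions can be sketched as below (a generic CBAM-style block in PyTorch, offered only to illustrate the concept; the grouping scheme and exact structure of ResGANet are not reproduced here).

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic channel + spatial attention, applied sequentially to a feature map."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        # Channel attention: squeeze spatially, excite per channel.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: one 7x7 conv over channel-pooled maps.
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        x = x * self.channel_mlp(x)                      # reweight channels
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial_conv(pooled))  # reweight locations
```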

8.
The recent surge in performance for image analysis of digitised pathology slides can largely be attributed to advances in deep learning. Deep models can be used to initially localise various structures in the tissue and hence facilitate the extraction of interpretable features for biomarker discovery. However, these models are typically trained for a single task and therefore scale poorly as we wish to adapt the model to an increasing number of different tasks. Also, supervised deep learning models are very data-hungry and therefore rely on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for segmentation and classification of nuclei, glands, lumina and different tissue regions that leverages data from multiple independent data sources. While ensuring that our tasks are aligned by the same tissue type and resolution, we enable meaningful simultaneous prediction with a single network. As a result of feature sharing, we also show that the learned representation can be used to improve the performance of additional tasks via transfer learning, including nuclear classification and signet ring cell detection. As part of this work, we train our Cerberus model on a large amount of data, consisting of over 600 thousand objects for segmentation and 440 thousand patches for classification. We use our approach to process 599 colorectal whole-slide images from TCGA, where we localise 377 million, 900 thousand and 2.1 million nuclei, glands and lumina, respectively. We make this resource available to remove a major barrier in the development of explainable models for computational pathology.

9.
Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula relating reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of fewer than 4 subjects and errors in the detectable accuracy difference of less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study.
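
The derived formulae themselves are not reproduced here, but the kind of Monte Carlo check they describe can be sketched generically: simulate paired per-subject accuracies for two algorithms and count how often a paired test reaches significance. The accuracy distributions, correlation, and use of a paired t-test are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy import stats

def simulated_power(n_subjects, acc_a=0.90, acc_b=0.92, sd=0.03,
                    rho=0.5, alpha=0.05, n_sims=5000, seed=0):
    """Estimate power to detect a mean accuracy difference between two algorithms."""
    rng = np.random.default_rng(seed)
    cov = sd ** 2 * np.array([[1.0, rho], [rho, 1.0]])   # correlated per-subject accuracies
    hits = 0
    for _ in range(n_sims):
        sample = rng.multivariate_normal([acc_a, acc_b], cov, size=n_subjects)
        _, p = stats.ttest_rel(sample[:, 0], sample[:, 1])
        hits += p < alpha
    return hits / n_sims

print(simulated_power(30))  # power for 30 subjects under the assumed effect size
```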

10.
Automatic and accurate segmentation of dental models is a fundamental task in computer-aided dentistry. Previous methods can achieve satisfactory segmentation results on normal dental models; however, they fail to robustly handle challenging clinical cases such as dental models with missing, crowding, or misaligned teeth before orthodontic treatment. In this paper, we propose a novel end-to-end learning-based method, called TSegNet, for robust and efficient tooth segmentation on 3D scanned point cloud data of dental models. Our algorithm detects all the teeth using a distance-aware tooth centroid voting scheme in the first stage, which ensures accurate localization of tooth objects even with irregular positions on abnormal dental models. A confidence-aware cascade segmentation module in the second stage is then designed to segment each individual tooth and to resolve ambiguities caused by the aforementioned challenging cases. We evaluated our method on a large-scale real-world dataset consisting of dental models scanned before or after orthodontic treatment. Extensive evaluations, ablation studies and comparisons demonstrate that our method generates accurate tooth labels robustly in various challenging cases and significantly outperforms state-of-the-art approaches, by 6.5% in Dice coefficient and 3.0% in F1 score in terms of accuracy, while achieving a 20-fold speedup in computation time.
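
For reference, the two accuracy measures quoted above can be computed from per-point label arrays as in the sketch below (standard definitions, not code from the paper).

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two boolean masks (e.g. points assigned to one tooth)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def f1_score(pred, truth):
    """F1 of a binary prediction: harmonic mean of precision and recall."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.logical_and(pred, truth).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / truth.sum() if truth.sum() else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```

Note that for a single binary mask Dice and F1 coincide; in tooth segmentation the F1 score is often reported at the level of detected tooth instances rather than individual points.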

11.
A common framework is necessary for the transparent articulation of the benefits and risks of a therapeutic product across disparate stakeholders. The assignment of value and weighting to each component parameter presents challenges deriving from different stakeholder objectives, methods, and perspectives. Building on prior experiences with a validated framework approach, this forum focused on identifying challenges and approaches to the assignment of values and weightings using a case study applied to a hypothetical medicinal product.

12.
Whole abdominal organ segmentation is important in diagnosing abdominal lesions, in radiotherapy, and in follow-up. However, manually delineating all abdominal organs from 3D volumes is time-consuming and very expensive for oncologists. Deep learning-based medical image segmentation has shown the potential to reduce manual delineation efforts, but it still requires a large-scale, finely annotated dataset for training, and there is a lack of large-scale datasets covering the whole abdominal region with accurate and detailed annotations for whole abdominal organ segmentation. In this work, we establish a new large-scale Whole abdominal ORgan Dataset (WORD) for algorithm research and clinical application development. This dataset contains 150 abdominal CT volumes (30,495 slices). Each volume has 16 organs with fine pixel-level annotations and scribble-based sparse annotations, which may make it the largest dataset with whole abdominal organ annotation. Several state-of-the-art segmentation methods are evaluated on this dataset. We also invited three experienced oncologists to revise the model predictions to measure the gap between the deep learning methods and oncologists. Afterwards, we investigate inference-efficient learning on WORD, as high-resolution images require large GPU memory and long inference times at test time. We further evaluate scribble-based, annotation-efficient learning on this dataset, as pixel-wise manual annotation is time-consuming and expensive. This work provides a new benchmark for the abdominal multi-organ segmentation task, and these experiments can serve as baselines for future research and clinical application development.

13.
Deep learning techniques for 3D brain vessel image segmentation have not been as successful as in the segmentation of other organs and tissues. This can be explained by two factors. First, deep learning techniques tend to show poor performance in the segmentation of objects that are small relative to the size of the full image. Second, due to the complexity of vascular trees and the small size of vessels, it is challenging to obtain the amount of annotated training data typically needed by deep learning methods. To address these problems, we propose a novel annotation-efficient deep learning vessel segmentation framework. The framework avoids pixel-wise annotations, requiring only weak patch-level labels that discriminate between vessel and non-vessel 2D patches in the training set, in a setup similar to the CAPTCHAs used to differentiate humans from bots in web applications. The user-provided weak annotations are used for two tasks: (1) to synthesize pixel-wise pseudo-labels for vessels and background in each patch, which are used to train a segmentation network, and (2) to train a classifier network. The classifier network allows additional weak patch labels to be generated, further reducing the annotation burden, and it acts as a second opinion for poor-quality images. We use this framework for the segmentation of the cerebrovascular tree in Time-of-Flight angiography (TOF) and Susceptibility-Weighted Images (SWI). The results show that the framework achieves state-of-the-art accuracy while reducing the annotation time by 77% with respect to learning-based segmentation methods that use pixel-wise labels for training.
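
A highly simplified sketch of the first use of the weak labels: synthesizing pixel-wise pseudo-labels inside patches marked as containing vessels. Here a plain Otsu intensity threshold stands in for the synthesis step, which is an assumption of this example rather than the method used in the paper.

```python
import numpy as np
from skimage.filters import threshold_otsu

def pseudo_labels_from_patch(patch, patch_has_vessel):
    """Pixel-wise pseudo-label for one 2D patch given only its weak patch label.

    patch: 2D float array (e.g. a TOF angiography patch); patch_has_vessel: bool.
    Returns a binary mask usable as a training target for a segmentation network.
    """
    if not patch_has_vessel:
        return np.zeros(patch.shape, dtype=np.uint8)   # pure background patch
    thresh = threshold_otsu(patch)                      # bright structures stand in for vessels
    return (patch > thresh).astype(np.uint8)
```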

14.
Acquisition of high-quality manual annotations is vital for the development of segmentation algorithms. However, creating them requires a substantial amount of expert time and knowledge. Large numbers of labels are required to train convolutional neural networks because of the vast number of parameters that must be learned in the optimisation process. Here, we develop the STAMP algorithm to allow the simultaneous training and pruning of a UNet architecture for medical image segmentation, with targeted channel-wise dropout to make the network robust to the pruning. We demonstrate the technique across segmentation tasks and imaging modalities. It is then shown that, through online pruning, we are able to train networks with much higher performance than the equivalent standard UNet models while reducing their size by more than 85% in terms of parameters. This has the potential to allow networks to be trained directly on datasets where very few labels are available.
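
The STAMP criterion itself is not reproduced here, but the basic mechanics of channel pruning can be sketched: score each output channel of a convolution layer (here by the L1 norm of its kernels, an assumption of the example) and keep only the highest-scoring fraction.

```python
import numpy as np

def channel_keep_mask(conv_weights, keep_fraction=0.5):
    """Boolean mask over output channels of a conv layer, keeping the largest-magnitude ones.

    conv_weights: array of shape (out_channels, in_channels, k, k).
    """
    scores = np.abs(conv_weights).sum(axis=(1, 2, 3))   # L1 norm per output channel
    n_keep = max(1, int(round(keep_fraction * scores.size)))
    keep = np.zeros(scores.size, dtype=bool)
    keep[np.argsort(scores)[-n_keep:]] = True           # keep the top-scoring channels
    return keep

# Pruning then amounts to slicing this layer's weights (and the next layer's
# input channels) with the mask, e.g. conv_weights[keep_mask].
```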

15.
We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively.
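
As a loose illustration of the wrapper idea (learn the host method's systematic errors from training cases with manual segmentations, then correct new host outputs), the sketch below trains a voxel-wise classifier on simple intensity and host-label features against the manual reference and applies it to new segmentations. The feature set and the random-forest learner are assumptions of this example, not the published open-source implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def voxel_features(image, host_seg):
    """Per-voxel features: intensity, host label, and one shifted copy as crude context."""
    return np.column_stack([
        image.ravel(),
        host_seg.ravel().astype(float),
        np.roll(host_seg, 1, axis=0).ravel().astype(float),
    ])

def train_wrapper(train_images, train_host_segs, train_manual_segs):
    X = np.vstack([voxel_features(im, hs)
                   for im, hs in zip(train_images, train_host_segs)])
    y = np.concatenate([ms.ravel() for ms in train_manual_segs])
    return RandomForestClassifier(n_estimators=50, n_jobs=-1).fit(X, y)

def corrected_segmentation(model, image, host_seg):
    pred = model.predict(voxel_features(image, host_seg))
    return pred.reshape(host_seg.shape)
```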

16.
Li Ping, Deng Wanling, Xue Huadan, Xu Kai, Zhu Liang, Li Juan, Sun Zhaoyong, Jin Zhengyu. Abdominal Imaging, 2019, 44(6): 2196-2204.
Abdominal Radiology - We evaluate the reliability and feasibility of weight-adapted ultra-low-dose pancreatic perfusion CT. A total of 100 patients (47 men, 53 women) were enrolled prospectively...

17.
This paper reports the development of an improved three-dimensional computer simulation model for the evaluation of ultrasonic imaging systems. This model was used to successfully evaluate a signal processing method for improving lesion detection in ultrasound imaging. Linear processing of the RF signal amplitudes from a limited region of tissue was compared with the logarithmic compression employed by most commercial scanners. Two lesions were simulated by spherical distributions of scatterers having backscatter coefficients greater than those of the scatterers in the surrounding medium. Linear processing improved the differential contrast by a factor of about two. The simulation is based on the three-dimensional distribution of acoustic frequency spectra in a transducer beam and integration of scattered pulses from a corresponding three-dimensional array of scatterers. The simulation reported in previous papers depended upon physical measurement of the impulse response of a transducer. An original contribution described briefly herein, and in more detail in a companion article, is the addition of a model of the transducer's pulse waveform generation. Another new addition is the definition of a specific lesion-detection task for objective assessment of a change in image quality following perturbation of some system parameter.
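
The contrast comparison can be made concrete with a short numerical sketch: differential contrast of a lesion versus background computed on linearly processed envelope amplitudes and on log-compressed amplitudes. The contrast definition (difference over background mean), the Rayleigh speckle model, and the echogenicity ratio are assumptions for illustration, not values from the paper.

```python
import numpy as np

def differential_contrast(lesion, background):
    """(mean_lesion - mean_background) / mean_background."""
    return (lesion.mean() - background.mean()) / background.mean()

rng = np.random.default_rng(0)
background = rng.rayleigh(scale=100.0, size=10000)   # speckle-like envelope amplitudes
lesion = rng.rayleigh(scale=130.0, size=10000)       # somewhat more echogenic lesion

linear_c = differential_contrast(lesion, background)
log_c = differential_contrast(20 * np.log10(lesion), 20 * np.log10(background))
print(f"linear: {linear_c:.3f}   log-compressed: {log_c:.3f}")
```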

18.
An epidemiologic model is developed to describe the incidence, prevalence, and mortality of diabetes. Available data are reviewed, analyzed, and applied to the model. The model provides a framework for understanding diabetes on a population basis, and is useful in identifying needs and facilitating health care planning.

19.
Purpose: We present a new algorithm for nearly automatic liver segmentation and volume estimation from abdominal Computed Tomography Angiography (CTA) images, and its validation. Materials and methods: Our hybrid algorithm uses a multiresolution iterative scheme. It starts from a single user-defined pixel seed inside the liver, and repeatedly applies smoothed Bayesian classification to identify the liver and other organs, followed by adaptive morphological operations and active contour refinement. We evaluate the algorithm in two retrospective studies on 56 validated CTA images. The first study compares it to ground-truth manual segmentation and to semi-automatic and automatic commercial methods. The second study uses the public SLIVER07 dataset and its comparison methodology. Results: For the two studies, we achieved correlations of 0.98 and 0.99 for liver volume estimation, with mean volume differences of 5.36% and 2.68% with respect to manual ground-truth estimation, and mean volume variability for different initial seeds of 0.54% and 0.004%, respectively. For the second study, our algorithm scored 71.8 and 67.87 on the training and test datasets, which compares very favorably with other semi-automatic methods. Conclusions: Our algorithm requires minimal interaction by a non-expert user, is accurate, efficient, and robust to initial seed selection. It can be effective for hepatic volume estimation and liver modeling in a clinical setup.
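
The volume estimates and percentage differences reported above follow from straightforward bookkeeping over the segmentation mask and voxel spacing, sketched below for reference (standard computation, not the paper's code).

```python
import numpy as np

def liver_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask in millilitres.

    mask: 3D boolean array; spacing_mm: (dx, dy, dz) voxel spacing in mm.
    """
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0   # 1 mL = 1000 mm^3

def percent_volume_difference(auto_mask, manual_mask, spacing_mm):
    """Absolute volume difference of the automatic mask relative to the manual one, in %."""
    v_auto = liver_volume_ml(auto_mask, spacing_mm)
    v_manual = liver_volume_ml(manual_mask, spacing_mm)
    return 100.0 * abs(v_auto - v_manual) / v_manual
```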

20.
Considerable research over the last ten years has suggested that individuals who are less actualized or who see themselves as more discrepant with their ideals are prone to experience higher levels of death fear and anxiety than those who are relatively more actualized or have lower self-ideal discrepancy. Similarly, a large body of research employing the Threat Index has indicated that persons with more integrated conceptions of self and death show lower levels of death concern. Recently, it has been proposed that these two factors, actualization and integration, may have an additive effect on fear of death, such that highly actualized, highly integrated individuals will experience less death fear than other groups. The present study found no support for this additive model using a heterogeneous sample of adult respondents, although it did replicate earlier work demonstrating a modest linear relationship between both actualization and integration and various aspects of death orientation. Potential explanations for this finding are discussed, centering on the complementary relations that obtain between different aspects of death concern and the two predictor variables.
