Similar Articles
20 similar articles found.
1.
Renal compartment segmentation from Dynamic Contrast-Enhanced MRI (DCE-MRI) images is an important task for functional kidney evaluation. Despite advances in segmentation methods, most focus on segmenting the entire kidney in CT images; effective, automatic solutions for accurately segmenting the internal renal structures (cortex, medulla and renal pelvis) from DCE-MRI images are still lacking. In this paper, we introduce a renal compartment segmentation method that robustly achieves high segmentation accuracy on a wide range of DCE-MRI data while requiring few manual operations and parameter settings. The proposed method consists of five main steps. First, we pre-process the image time series to reduce motion artifacts caused by patient movement during the scans and to enhance the kidney regions. Second, the kidney is segmented as a whole based on the concept of the Maximally Stable Temporal Volume (MSTV). The proposed MSTV detects anatomical structures that are homogeneous in the spatial domain and stable in their temporal dynamics. MSTV-based kidney segmentation is robust to noise, requires no training phase, and adapts well to kidney shape variations caused by renal dysfunction. Third, voxels in the segmented kidney are described by principal components (PCs) to remove temporal redundancy and noise, and k-means clustering of the PCs separates the voxels into multiple clusters. Fourth, the clusters are automatically labeled as cortex, medulla and pelvis based on the voxels' geometric locations and intensity distributions. Finally, an iterative refinement method further removes noise from each segmented compartment. Experiments on 14 real clinical kidney datasets and 12 synthetic datasets demonstrate that the results produced by our method match manual segmentations very well and outperform five existing methods.
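The temporal dimensionality-reduction and clustering steps described above can be sketched in a few lines. This is an illustrative toy, not the authors' pipeline: the synthetic time courses, component count and cluster count are assumptions, using scikit-learn's `PCA` and `KMeans`:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic DCE time courses: 300 voxels x 40 time points, three
# compartment-like enhancement patterns plus noise (illustrative data).
t = np.linspace(0, 1, 40)
curves = np.stack([np.exp(-3 * t), t * np.exp(-2 * t), np.full_like(t, 0.2)])
labels_true = rng.integers(0, 3, size=300)
X = curves[labels_true] + 0.05 * rng.standard_normal((300, 40))

# Step 1: project each voxel's time course onto a few principal
# components to suppress temporal redundancy and noise.
pcs = PCA(n_components=3).fit_transform(X)

# Step 2: k-means on the PC scores separates voxels into clusters
# that a later step would label as cortex, medulla and pelvis.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)
print(len(np.unique(clusters)))  # expect 3 non-empty clusters
```

In the actual method the cluster count and the assignment of clusters to compartments come from anatomy and intensity distributions; here they are fixed for brevity.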

2.
This work demonstrates encouraging results for increasing the automation of a practical and precise magnetic resonance brain image segmentation method. The intensity threshold for segmenting the brain exterior is determined automatically by locating the choroid plexus, which is done by finding peaks in a series of histograms taken over regions specified using anatomical knowledge. Intensity inhomogeneities are accounted for by adjusting the global intensity to match the white-matter peak intensity in local regions. Automated results are incorporated into the established manually guided segmentation method by providing a trained expert with the automated threshold. Results from 20 different brain scans (over 1000 images) obtained under different conditions validate the method, which determined the appropriate threshold in approximately 80% of the data.
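The histogram-peak idea behind the automatic threshold can be illustrated on synthetic data. The two-mode sample, bin count and smoothing kernel below are assumptions for the sketch, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic intensity sample: a dark background mode and a brighter
# tissue mode, standing in for a regional MR histogram (illustrative).
intensities = np.concatenate([rng.normal(40, 8, 5000),
                              rng.normal(120, 12, 5000)])
hist, edges = np.histogram(intensities, bins=64, range=(0, 200))

# Smooth the histogram, then take interior local maxima as peaks.
kernel = np.ones(5) / 5.0
smooth = np.convolve(hist, kernel, mode="same")
interior = (smooth[1:-1] > smooth[:-2]) & (smooth[1:-1] >= smooth[2:])
peaks = np.flatnonzero(interior) + 1

# A threshold between the two strongest peaks separates the modes.
top2 = np.sort(peaks[np.argsort(smooth[peaks])[-2:]])
centers = (edges[:-1] + edges[1:]) / 2
threshold = centers[top2].mean()
print(round(threshold, 1))
```

The paper locates peaks in anatomically specified regions rather than over a global histogram; the midpoint rule here is only one simple way to place a threshold between two modes.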

3.
《Medical image analysis》2014,18(7):1247-1259
PET imaging with the fluorodeoxyglucose (FDG) tracer is used clinically to define Biological Target Volumes (BTVs) for radiotherapy. Recently, new tracers, such as fluorothymidine (FLT) and fluoromisonidazole (FMiso), have been proposed; they provide complementary information for defining BTVs. Our goal is to fuse multi-tracer PET images to obtain a good BTV definition and to help the radiation oncologist with dose painting. Noise and the partial volume effect introduce, respectively, uncertainty and imprecision into PET images, making their segmentation and fusion difficult. In this paper, a framework based on Belief Function Theory (BFT) is proposed for segmenting BTVs from multi-tracer PET images. The first step is based on an extension of the Evidential C-Means (ECM) algorithm that exploits neighboring voxels to deal with the uncertainty and imprecision in each mono-tracer PET image. Imprecision and uncertainty are then reduced, respectively, using prior knowledge of defects in the acquisition system and neighborhood information. Finally, the multi-tracer PET images are fused. The results are represented as a set of parametric maps that provide important information for dose painting. Performance is evaluated on PET phantoms and on data from lung cancer patients. Quantitative results show that our method performs well compared with other methods.

4.
《Medical image analysis》2014,18(2):330-342
In this paper, we address the retrieval of multi-modality medical volumes consisting of two different imaging modalities acquired sequentially from the same scanner. One such example, positron emission tomography and computed tomography (PET-CT), provides physicians with complementary functional and anatomical features as well as spatial relationships, and has led to improved cancer diagnosis, localisation, and staging. The challenge of multi-modality volume retrieval for cancer patients lies in representing the complementary geometric and topological attributes between tumours and organs. These attributes and relationships, which are used for tumour staging and classification, can be formulated as a graph. Graph-based methods have been demonstrated to achieve high accuracy for retrieval by spatial similarity. However, naïvely representing all relationships on a complete graph obscures the structure of the tumour-anatomy relationships. We propose a new graph structure, derived from complete graphs, that constrains the edges connected to tumour vertices based upon the spatial proximity of tumours and organs. This enables retrieval on the basis of tumour localisation. We also present a similarity matching algorithm that accounts for the different feature sets of graph elements from different imaging modalities. Our method emphasises the relationships between a tumour and related organs while still modelling patient-specific anatomical variations. Constraining tumours to related anatomical structures improves the discriminative potential of graphs, making it easier to retrieve similar images based on tumour location. We evaluated our retrieval methodology on a dataset of clinical PET-CT volumes. The results show that our method enables the retrieval of multi-modality images using spatial features, and that our graph-based retrieval algorithm achieves higher precision than several other retrieval techniques: gray-level histograms as well as state-of-the-art methods such as visual words using the scale-invariant feature transform (SIFT) and relational matrices representing the spatial arrangements of objects.
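A minimal sketch of the proximity-constrained graph idea: instead of a complete graph, only tumour-organ edges within a distance threshold are kept. All names, coordinates and the 30 mm threshold are hypothetical:

```python
import math

# Hypothetical centroids (in mm) for one PET-CT study: one tumour and
# a few organs; names and coordinates are illustrative only.
tumour = {"name": "tumour_1", "pos": (10.0, 5.0, 0.0)}
organs = [
    {"name": "right_lung", "pos": (12.0, 8.0, 2.0)},
    {"name": "liver", "pos": (40.0, -20.0, -5.0)},
    {"name": "left_lung", "pos": (-15.0, 8.0, 2.0)},
]

# Complete-graph edges would connect the tumour to every organ; the
# constrained graph keeps only edges within a proximity threshold, so
# retrieval is driven by where the tumour actually sits.
THRESHOLD_MM = 30.0
edges = [(tumour["name"], o["name"])
         for o in organs
         if math.dist(tumour["pos"], o["pos"]) <= THRESHOLD_MM]
print(edges)  # only nearby organs remain connected
```

The paper derives its constraint from the segmented anatomy rather than a fixed radius; the threshold here simply makes the edge-pruning behaviour concrete.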

5.
《Medical image analysis》2015,20(1):98-109
Multi-atlas segmentation infers the target image segmentation by combining prior anatomical knowledge encoded in multiple atlases. It has been applied quite successfully to medical image segmentation in recent years, yielding highly accurate and robust segmentations for many anatomical structures. However, to guide the label fusion process, most existing multi-atlas segmentation methods utilise only the intensity information within a small patch and may neglect other useful information such as gradient and contextual information (the appearance of surrounding regions). This paper proposes combining intensity, gradient and contextual information into an augmented feature vector and incorporating it into multi-atlas segmentation. It also explores an alternative to the K-nearest-neighbour (KNN) classifier for multi-atlas label fusion, using a support vector machine (SVM) instead. Experimental results on a short-axis cardiac MR dataset of 83 subjects demonstrate that the accuracy of multi-atlas segmentation can be significantly improved by using the augmented feature vector. The mean Dice metric of the proposed segmentation framework is 0.81 for the left ventricular myocardium on this dataset, compared with 0.79 for conventional multi-atlas patch-based segmentation (Coupé et al., 2011; Rousseau et al., 2011). A major contribution of this paper is its demonstration that the performance of non-local patch-based segmentation can be improved by using augmented features.
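The augmented-feature SVM fusion can be sketched with scikit-learn on toy data. The patch sizes, feature construction and train/test split below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy atlas voxels: for each voxel we build an augmented feature
# vector = [intensity patch, gradient patch, context (mean of a
# surrounding region)]; the data and sizes are illustrative.
n = 400
intensity = rng.normal(0, 1, (n, 9))           # 3x3 intensity patch
labels = (intensity.mean(axis=1) > 0).astype(int)
gradient = np.gradient(intensity, axis=1)      # simple gradient proxy
context = intensity.mean(axis=1, keepdims=True) + 0.1 * rng.normal(0, 1, (n, 1))
X = np.hstack([intensity, gradient, context])  # augmented feature vector

# SVM-based label fusion: train on atlas voxels, predict target voxels.
clf = SVC(kernel="rbf").fit(X[:300], labels[:300])
acc = clf.score(X[300:], labels[300:])
print(round(acc, 2))
```

In a real pipeline the training voxels come from registered atlases and the features from the actual image; the point of the sketch is only the augmented-vector-plus-SVM structure replacing KNN voting.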

6.
Myocardial pathology segmentation (MyoPS) can be a prerequisite for the accurate diagnosis and treatment planning of myocardial infarction. However, achieving this segmentation is challenging, mainly due to the inadequate and indistinct information available from a single image. In this work, we develop an end-to-end deep neural network, referred to as MyoPS-Net, to flexibly combine five-sequence cardiac magnetic resonance (CMR) images for MyoPS. To extract precise and adequate information, we design an effective yet flexible architecture to extract and fuse cross-modal features. This architecture can handle different numbers of CMR images and complex combinations of modalities, with output branches targeting specific pathologies. To impose anatomical knowledge on the segmentation results, we first propose a module that regularizes myocardium consistency and localizes the pathologies, and then introduce an inclusiveness loss that exploits the relations between myocardial scars and edema. We evaluated the proposed MyoPS-Net on two datasets: a private one consisting of 50 paired multi-sequence CMR images and a public one from the MICCAI 2020 MyoPS Challenge. Experimental results showed that MyoPS-Net achieves state-of-the-art performance in various scenarios. Note that in clinical practice subjects may not have full sequences, for instance missing LGE CMR or mapping CMR scans. We therefore conducted extensive experiments to investigate the performance of the proposed method on such complex combinations of different CMR sequences. The results proved the superiority and generalizability of MyoPS-Net and, more importantly, indicated a practical clinical application. The code has been released via https://github.com/QJYBall/MyoPS-Net.

7.
《Medical image analysis》2014,18(7):1233-1246
Osteoarthritis (OA) is the most common form of joint disease and is often characterized by cartilage changes. Accurate quantitative methods are needed to rapidly screen large image databases and assess changes in cartilage morphology. We therefore propose a new automatic atlas-based cartilage segmentation method for future automatic OA studies. Atlas-based segmentation methods have been demonstrated to be robust and accurate in brain imaging and therefore hold high promise for reliable, high-quality segmentations of cartilage. Nevertheless, atlas-based methods have not been well explored for cartilage segmentation. Particular challenges are the thinness of cartilage, its relatively small volume in comparison to surrounding tissue, and the difficulty of locating cartilage interfaces, for example the interface between femoral and tibial cartilage. This paper focuses on the segmentation of femoral and tibial cartilage, proposing a multi-atlas segmentation strategy with non-local patch-based label fusion that can robustly identify candidate regions of cartilage. This method is combined with a novel three-label segmentation method which guarantees the spatial separation of femoral and tibial cartilage, and ensures spatial regularity while preserving the thin cartilage shape through anisotropic regularization. Our segmentation energy is convex and therefore guarantees globally optimal solutions. We perform an extensive validation of the proposed method on 706 images from the Pfizer Longitudinal Study, including comparisons of different atlas segmentation strategies, different local classifiers, and different types of regularizers. To compare with other cartilage segmentation approaches, we also validate on the 50 images of the SKI10 dataset.

8.
Purpose: Positron Emission Tomography (PET) has the unique capability of measuring brain function, but its clinical potential is limited by low resolution and a lack of morphological detail. Here we propose and evaluate a wavelet synergistic approach that combines functional and structural information from a number of sources (CT, MRI and anatomical probabilistic atlases) for the accurate quantitative recovery of radioactivity concentration in PET images. When the method is combined with anatomical probabilistic atlases, the outcome is a functional volume corrected for partial volume effects.

Methods: The proposed method is based on the multiresolution property of the wavelet transform. First, the target PET image and the corresponding anatomical image (CT, MRI, or atlas-based segmented MRI) are decomposed into several resolution elements. Second, the high-resolution components of the PET image are partly replaced with those of the anatomical image after appropriate scaling, with the amount of structural input weighted by the relative high-frequency signal content of the two modalities. The method was validated on a digital Zubal phantom and on clinical data to evaluate its quantitative potential.

Results: Simulation studies showed the expected relationship between functional recovery and the amount of correct structural detail provided, with perfect recovery achieved when the true images were used as the anatomical reference. Using T1-MRI images brought significant improvements in PET image resolution; however, improvements were greatest when atlas-based segmented images were used as anatomical references. These results were replicated in the clinical data sets.

Conclusion: The synergistic use of functional and structural data, and the incorporation of anatomical probabilistic information in particular, generates morphologically corrected PET images of exquisite quality.
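The core wavelet step, keeping the functional approximation band while importing scaled anatomical detail, can be sketched with a hand-rolled single-level Haar transform. The paper's wavelet choice and frequency-content weighting are more elaborate; `alpha` below is a simple fixed stand-in for that weighting, and the images are random toys:

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar transform: approximation + 3 detail bands."""
    a = (img[0::2] + img[1::2]) / 2          # rows: average
    d = (img[0::2] - img[1::2]) / 2          # rows: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2       # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2       # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2       # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2       # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

rng = np.random.default_rng(0)
pet = rng.normal(0, 1, (8, 8))    # blurry functional image (toy)
mri = rng.normal(0, 1, (8, 8))    # sharp anatomical image (toy)

# Keep the PET approximation band; blend the detail bands with
# anatomical detail, alpha weighting the structural input.
ll_p, lh_p, hl_p, hh_p = haar2d(pet)
_, lh_m, hl_m, hh_m = haar2d(mri)
alpha = 0.5
fused = ihaar2d(ll_p,
                (1 - alpha) * lh_p + alpha * lh_m,
                (1 - alpha) * hl_p + alpha * hl_m,
                (1 - alpha) * hh_p + alpha * hh_m)
print(fused.shape)
```

A production implementation would use a multi-level transform from a wavelet library and derive the blending weight locally from the two images' high-frequency content.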

9.
Prostate segmentation aids prostate volume estimation, multi-modal image registration, and the creation of patient-specific anatomical models for surgical planning and image-guided biopsies. However, manual segmentation is time consuming and suffers from inter- and intra-observer variability. Low-contrast transrectal ultrasound images and imaging artifacts such as speckle, micro-calcifications, and shadow regions hinder computer-aided automatic or semi-automatic prostate segmentation. In this paper, we propose a prostate segmentation approach based on building multiple mean parametric models derived from principal component analysis of shape and posterior probabilities in a multi-resolution framework. The model parameters are then modified with prior knowledge of the optimization space to achieve optimal prostate segmentation. In contrast to traditional statistical models with shape and intensity priors, we use posterior probabilities of the prostate region, determined by random forest classification, to build, initialize and propagate our appearance model. Furthermore, multiple mean models derived from spectral clustering of combined shape and appearance parameters are applied in parallel to improve segmentation accuracy. The proposed method achieves a mean Dice similarity coefficient of 0.91 ± 0.09 on 126 images (40 from the apex, 40 from the base and 46 from central regions) in a leave-one-patient-out validation framework. The mean segmentation time is 0.67 ± 0.02 s.

10.
We present an automatic medical image registration method that combines image segmentation with mutual information. The images are first pre-processed with thresholding and mathematical morphology, then segmented using the k-means method, and finally registered with a Powell optimization of mutual information. Applied to the registration of clinical magnetic resonance (MRI) and positron emission tomography (PET) images, the method yields satisfactory results.
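The mutual-information objective that a Powell search would maximise can be computed from a joint intensity histogram. The sketch below uses synthetic images and a fixed bin count as assumptions:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
mri = rng.normal(0, 1, (64, 64))
pet_aligned = mri + 0.1 * rng.normal(0, 1, (64, 64))  # aligned pair
pet_shifted = np.roll(pet_aligned, 7, axis=1)         # misaligned copy

# An MI-based registration (e.g. a Powell search over transform
# parameters) moves the floating image to maximise this score.
print(mutual_information(mri, pet_aligned) > mutual_information(mri, pet_shifted))
```

Misalignment scrambles the intensity correspondence, so the joint histogram spreads out and MI drops; the optimizer exploits exactly this effect.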

11.
We propose a framework for the robust and fully automatic segmentation of magnetic resonance (MR) brain images called "Multi-Atlas Label Propagation with Expectation-Maximisation based refinement" (MALP-EM). The presented approach is based on a robust registration approach (MAPER), high-performing label fusion (joint label fusion) and intensity-based label refinement using EM. We further adapt this framework to the segmentation of brain images with gross changes in anatomy: we propose to account for consistent registration errors by relaxing the anatomical priors obtained by multi-atlas propagation, and use a weighting scheme to locally combine anatomical atlas priors with intensity-refined posterior probabilities. The method is evaluated on a benchmark dataset used in a recent MICCAI segmentation challenge. In this context we show that MALP-EM is competitive with state-of-the-art automatic labelling techniques for the segmentation of MR brain scans of healthy adults. To demonstrate the versatility of the proposed approach, we employed MALP-EM to segment 125 MR brain images into 134 regions from subjects who had sustained traumatic brain injury (TBI). We employ a protocol to assess segmentation quality when no manual reference labels are available. Based on this protocol, three independent, blinded raters confirmed on 13 MR brain scans with pathology that MALP-EM is superior to established label fusion techniques. We visually confirm the robustness of our segmentation approach on the full cohort and investigate the potential of derived symmetry-based imaging biomarkers that correlate with and predict clinically relevant variables in TBI, such as the Marshall Classification (MC) or Glasgow Outcome Score (GOS). Specifically, we show that we are able to separate TBI patients with favourable outcomes from those with non-favourable outcomes with 64.7% accuracy using acute-phase MR images and 66.8% accuracy using follow-up MR images. Furthermore, we are able to differentiate subjects with a mass lesion or midline shift from those with diffuse brain injury with 76.0% accuracy. The thalamus, putamen, pallidum and hippocampus are particularly affected; their involvement predicts TBI disease progression.

12.
Purpose: A system for luminal contour segmentation in intravascular ultrasound images is proposed.

Methods: Moment-based texture features are used to cluster the pixels of the input image. After clustering, morphological smoothing and a boundary detection process are applied to obtain the final image.

Results: The proposed method was applied to 15 images from different patients, and a correlation coefficient of 0.86 was obtained between the automatically and manually defined lumen areas.

Conclusion: Moment-based texture features, together with the radial feature, are powerful tools for identifying the lumen region in intravascular ultrasound images. Morphological filtering was useful for improving the segmentation results.

13.
This study aimed to segment the heart muscle in pediatric echocardiographic images as a preprocessing step for tissue analysis. Transthoracic image sequences (2-D and 3-D volume data, both derived in radiofrequency format directly after beam forming) were registered in real time from four healthy children over three heart cycles. Three preprocessing methods based on adaptive filtering were used to reduce speckle noise, optimizing the distinction between blood and myocardium while preserving the sharpness of edges between anatomical structures. The filtering kernel size was linked to the local speckle size, and in one of the methods the speckle noise characteristics were used to define the optimal filter. The filtered 2-D images were thresholded automatically as a first step in segmenting the endocardial wall; the final segmentation step applied a deformable contour algorithm. The segmentation of each 2-D image of the 3-D+time (i.e., 4-D) datasets was related to that of the neighboring images in both time and space. By thus incorporating the spatial and temporal information of 3-D ultrasound image sequences, an automated method using image statistics was developed to perform 3-D segmentation of the heart muscle.

14.
Automatic segmentation of liver CT images based on the Mean Shift method
Objective: To develop an automatic segmentation algorithm for liver CT images based on the Mean Shift method. Methods: A single Mean Shift smoothing pass is first applied to the original image to suppress noise and improve the robustness of the algorithm; initial seed points are then selected automatically through Mean Shift iterations; finally, a region-growing step performs the automatic segmentation of the liver. Results: Experiments show that the method segments the liver automatically, accurately, quickly and effectively. Conclusion: The proposed method achieves effective automatic segmentation of the liver.
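The mode-seeking behaviour of Mean Shift, used above to pick seed points automatically, can be illustrated in one dimension on synthetic intensities. The flat kernel, bandwidth and intensity values are assumptions for the sketch:

```python
import numpy as np

def mean_shift_mode(samples, start, bandwidth=10.0, iters=50):
    """Iterate the mean-shift update to find a density mode (1-D)."""
    x = float(start)
    for _ in range(iters):
        w = np.abs(samples - x) < bandwidth   # flat-kernel window
        if not w.any():
            break
        x_new = samples[w].mean()             # shift to local mean
        if abs(x_new - x) < 1e-3:
            break                             # converged on a mode
        x = x_new
    return x

rng = np.random.default_rng(0)
# Toy CT intensity samples: a liver-like mode near 60 HU plus a
# darker background mode (values are illustrative).
samples = np.concatenate([rng.normal(60, 5, 2000),
                          rng.normal(-100, 20, 1000)])

# Starting anywhere near the liver range, mean shift converges to the
# mode, which can serve as an automatically chosen seed intensity.
seed = mean_shift_mode(samples, start=75.0)
print(round(seed))
```

In the 2-D image setting the same update runs jointly over spatial and intensity coordinates, which is what produces the edge-preserving smoothing and the seed points used by the region-growing step.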

15.
White matter hyperintensities (WMHs) have been associated with various cerebrovascular and neurodegenerative diseases. Reliable quantification of WMHs is essential for understanding their clinical impact in normal and pathological populations. Automated segmentation of WMHs is highly challenging due to the heterogeneity of WMH characteristics between deep and periventricular white matter, the presence of artefacts, and differences in pathology and demographics across populations. In this work, we propose an ensemble triplanar network that combines predictions from three different planes of brain MR images to provide an accurate WMH segmentation. The network uses anatomical information regarding the spatial distribution of WMHs in its loss functions to improve segmentation efficiency and to overcome the contrast variations between deep and periventricular WMHs. We evaluated our method on 5 datasets, 3 of which are part of a publicly available dataset (the training data for the MICCAI WMH Segmentation Challenge 2017, MWSC 2017) consisting of subjects from three different cohorts; we also submitted our method to MWSC 2017 for evaluation on the unseen test datasets. Evaluating our method separately in deep and periventricular regions, we observed robust and comparable performance in both. Our method performed better than most existing methods, including FSL BIANCA, and on par with the top-ranking deep learning methods of MWSC 2017.

16.

Purpose

Patient-specific models of anatomical structures and pathologies generated from volumetric medical images play an increasingly central role in many aspects of patient care. A key task in generating these models is the segmentation of anatomical structures and pathologies of interest. Although numerous segmentation methods are available, they often produce erroneous delineations that require time-consuming modifications.

Methods

We present a new geometry-based algorithm for the reliable detection and correction of segmentation errors in volumetric medical images. The method is applicable to anatomical structures consisting of a few 3D star-shaped components. First, it detects segmentation errors by casting rays from the interior of the initial segmentation to its outer surface. It then classifies the segmentation surface into correct and erroneous regions by minimizing an energy functional that incorporates first- and second-order properties of the ray lengths. Finally, it corrects the segmentation errors by computing new locations for the erroneous surface points via a Laplace deformation, so that the new surface has maximum smoothness with respect to the ray-length gradient magnitude.
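The ray-casting idea can be sketched in 2-D on a toy star-shaped mask. The step size, angle count and the simple median-based screening below are illustrative simplifications of the energy-based classification described above:

```python
import numpy as np

# Toy star-shaped segmentation: a disc with a spurious bump, standing
# in for an initial segmentation with a local error (illustrative).
h = w = 101
yy, xx = np.mgrid[:h, :w]
r = np.hypot(yy - 50, xx - 50)
theta = np.arctan2(yy - 50, xx - 50)
mask = (r <= 30) | ((r <= 42) & (np.abs(theta) < 0.25))  # disc + bump

# Cast rays from the interior (centroid) outwards and record the
# distance at which each ray leaves the segmentation.
angles = np.linspace(-np.pi, np.pi, 180, endpoint=False)
lengths = []
for ang in angles:
    d = 0.0
    while True:
        y = int(round(50 + d * np.sin(ang)))
        x = int(round(50 + d * np.cos(ang)))
        if not (0 <= y < h and 0 <= x < w) or not mask[y, x]:
            break
        d += 0.5
    lengths.append(d)
lengths = np.array(lengths)

# First-order screening: rays much longer than the median betray a
# local surface error; the full method also uses second-order terms
# and an energy minimization rather than a hard threshold.
suspect = np.flatnonzero(lengths > np.median(lengths) + 5)
print(len(suspect) > 0)
```

The flagged rays cluster around the bump, which is exactly the region a correction step would then deform back towards a smooth surface.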

Results

Our evaluation on initial segmentations of 16 abdominal aortic aneurysms and 12 lung tumors in CT scans, obtained with both adaptive region-growing and active-contours level-set segmentation, improved the volumetric overlap error with respect to the ground truth by 66% and 70.5%, respectively.

Conclusions

The advantages of our method are that it is independent of the initial segmentation algorithm, that it covers a variety of anatomical structures and pathologies, that it does not require a shape prior, and that it requires minimal user interaction.

17.
Purpose: Content-based image retrieval (CBIR) in medicine has been demonstrated to improve evidence-based diagnosis, education, and teaching. However, clinical adoption of CBIR remains low, partly because most studies have focused on developing feature extraction and similarity measurement algorithms, with limited work on helping users understand the similarity between complex volumetric and multi-modality medical images. In this paper, we present a method for defining user interfaces (UIs) that enable effective human interpretation of retrieved images.

Methods: We derived a set of visualisation and interaction requirements based on the characteristics of modern volumetric medical images. We implemented a UI that visualised multiple views of a single image, displayed abstractions of the image data, and provided access to supplementary non-image data. We also defined interactions for refining the search and for visually indicating the similarities between images. We applied the UI to the retrieval of multi-modality positron emission tomography and computed tomography (PET-CT) images, and conducted a user survey to evaluate its capabilities.

Results: Our proposed method obtained a high rating (≥4 out of 5) on the majority of survey questions. In particular, the survey responses indicated that the UI presented all the information necessary to understand the retrieved images, and did so in an intuitive manner.

Conclusion: Our proposed UI design improved the ability of users to interpret and understand the similarity between retrieved PET-CT images. CBIR UIs designed to assist human interpretation could facilitate wider adoption of medical CBIR systems.

18.
Tumor response to treatment varies among patients. Patient-specific prediction of tumor evolution from medical images acquired during treatment can help to build and adapt a patient's treatment plan non-invasively. Personalized tumor growth modeling allows such prediction by estimating model parameters from the individual's images; the parameters are often estimated by optimizing a cost function constructed from the tumor delineations. In this paper, we propose a joint framework for tumor growth prediction and tumor segmentation in the context of a patient's therapeutic follow-up. Throughout treatment, a series of sequential positron emission tomography (PET) images is acquired to monitor tumor response. We propose to use the predicted information, in combination with the random walks (RW) algorithm, to develop an automatic tumor segmentation method for PET images, and we introduce an iterative RW scheme that further improves segmentation performance. The obtained segmentation is in turn fed into the model parameter estimation to produce the model-based prediction of tumor evolution. We evaluate our methods on 7 lung tumor patients under radiotherapy, totaling 29 PET exams, by comparing the obtained tumor predictions and segmentations with manual delineations by an expert. Our system produces promising results compared with state-of-the-art methods.

19.
《Medical image analysis》2014,18(3):591-604
Labeling a histopathology image as containing cancerous regions or not is a critical task in cancer diagnosis; it is also clinically important to segment the cancerous tissue and cluster it into various classes. Existing supervised approaches to image classification and segmentation require detailed manual annotations of the cancer pixels, which are time-consuming to obtain. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL), in the vein of weakly supervised learning, for histopathology image segmentation. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), medical image segmentation (cancer vs. non-cancer tissue), and patch-level clustering (different classes). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution that performs these three tasks in an integrated framework. In addition, we introduce contextual constraints as a prior for MCIL, which further reduces the ambiguity in MIL. Experimental results on histopathology colon cancer images and cytology images demonstrate the clear advantage of MCIL over competing methods.

20.
《Medical image analysis》2014,18(3):487-499
In this paper, we propose a new method for fully automatic landmark detection and shape segmentation in X-ray images. To detect landmarks, we estimate the displacements from randomly sampled image patches to the (unknown) landmark positions and then integrate these predictions via a voting scheme. Our key contribution is a new algorithm for estimating these displacements. Unlike methods in which each image patch independently predicts its displacement, we jointly estimate the displacements from all patches in a data-driven way, considering not only the training data but also geometric constraints on the test image. The displacement estimation is formulated as a convex optimization problem that can be solved efficiently. Finally, we use the sparse shape composition model as a priori information to regularize the landmark positions and thus generate the segmented shape contour. We validate our method on X-ray image datasets of three different anatomical structures: the complete femur, the proximal femur and the pelvis. Experiments show that our method is accurate and robust in landmark detection and, combined with the shape model, gives better or comparable performance in shape segmentation compared with state-of-the-art methods. Finally, a preliminary study using CT data shows the extensibility of our method to 3D data.
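The voting scheme that integrates per-patch displacement predictions can be sketched as an accumulator array. The grid size, noise model and simulated regressor output below are assumptions, standing in for the paper's jointly estimated displacements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy landmark voting on a 50x50 image: each sampled patch predicts a
# displacement to the (unknown) landmark; votes are accumulated and
# the strongest bin is taken as the detection.
H, W = 50, 50
true_landmark = np.array([30, 20])
patch_centers = rng.integers(5, 45, size=(100, 2))

# Simulated regressor output: the true displacement plus rounded
# Gaussian prediction noise (a stand-in for a learned predictor).
noise = np.round(rng.normal(0, 1.0, size=(100, 2))).astype(int)
predicted = patch_centers + (true_landmark - patch_centers) + noise

votes = np.zeros((H, W))
for y, x in predicted:
    if 0 <= y < H and 0 <= x < W:
        votes[y, x] += 1
estimate = np.unravel_index(votes.argmax(), votes.shape)
print(estimate)
```

Independent noisy votes concentrate around the true position, so the accumulator maximum is robust to individual patch errors; the paper sharpens this further by estimating all displacements jointly under geometric constraints.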
