Similar Documents (20 results)
1.
This paper describes a framework for automatic brain tumor segmentation from MR images. The detection of edema is done simultaneously with tumor segmentation, as the knowledge of the extent of edema is important for diagnosis, planning, and treatment. Whereas many other tumor segmentation methods rely on the intensity enhancement produced by the gadolinium contrast agent in the T1-weighted image, the method proposed here does not require contrast enhanced image channels. The only required input for the segmentation procedure is the T2 MR image channel, but it can make use of any additional non-enhanced image channels for improved tissue segmentation. The segmentation framework is composed of three stages. First, we detect abnormal regions using a registered brain atlas as a model for healthy brains. We then make use of the robust estimates of the location and dispersion of the normal brain tissue intensity clusters to determine the intensity properties of the different tissue types. In the second stage, we determine from the T2 image intensities whether edema appears together with tumor in the abnormal regions. Finally, we apply geometric and spatial constraints to the detected tumor and edema regions. The segmentation procedure has been applied to three real datasets, representing different tumor shapes, locations, sizes, image intensities, and enhancement.
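The first stage above flags abnormal regions using robust estimates of the location and dispersion of healthy-tissue intensity clusters. A minimal sketch of that idea, using the median and the median absolute deviation (MAD) as the robust estimates; the function names, values, and the cutoff of 3 robust standard deviations are illustrative assumptions, not the authors' implementation:

```python
def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def robust_outliers(intensities, k=3.0):
    """Return indices of intensities lying more than k robust
    standard deviations from the cluster center (median)."""
    med = median(intensities)
    mad = median([abs(x - med) for x in intensities])
    scale = 1.4826 * mad  # MAD-to-sigma factor for Gaussian data
    return [i for i, x in enumerate(intensities)
            if scale > 0 and abs(x - med) > k * scale]

healthy = [100, 102, 98, 101, 99, 103, 97, 100]
with_tumor = healthy + [160, 158]      # abnormally bright voxels
print(robust_outliers(with_tumor))     # -> [8, 9]
```

Because the median and MAD are barely affected by a small fraction of abnormal voxels, the healthy cluster's statistics stay reliable even when tumor is present, which is the point of using robust estimates here.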

2.
Jue Wu, Albert C.S. Chung. NeuroImage, 2009, 46(4): 1027-1036
The aim of this work is to develop a new framework for multi-object segmentation of deep brain structures (caudate nucleus, putamen and thalamus) in medical brain images. Deep brain segmentation is difficult and challenging because the structures of interest are of relatively small size and have significant shape variations. The structure boundaries may be blurry or even missing, and the surrounding background is full of irrelevant edges. To tackle these problems, we propose a template-based framework to fuse the information of edge features, region statistics and inter-structure constraints for detecting and locating all target brain structures such that initialization by hand is unnecessary. The multi-object template is organized in the form of a hierarchical Markov dependence tree (MDT), and multiple objects are efficiently matched to a target image by a top-down optimization strategy. The final segmentation is obtained through refinement by a B-spline based non-rigid registration between the exemplar image and the target image. Our approach needs only one example as training data. We have validated the proposed method on a publicly available T1-weighted magnetic resonance image database with expert-segmented brain structures. In the experiments, the proposed approach has obtained encouraging results with 0.80 Dice score for the caudate nuclei, 0.81 Dice score for the putamina and 0.84 Dice score for the thalami on average.
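The Dice scores quoted above measure overlap between a predicted mask and an expert segmentation. A minimal sketch of the metric; the masks are flat 0/1 lists invented for the example, not taken from the paper's data:

```python
def dice(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2.0 * inter / size if size else 1.0  # empty masks match trivially

pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 1, 0]
print(dice(pred, truth))  # -> 0.666...: 2 overlapping voxels out of 3 + 3
```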

3.
Liu T, Li H, Wong K, Tarokh A, Guo L, Wong ST. NeuroImage, 2007, 38(1): 114-123
We present a method for automated brain tissue segmentation based on the multi-channel fusion of diffusion tensor imaging (DTI) data. The method is motivated by the evidence that independent tissue segmentation based on DTI parametric images provides complementary information of tissue contrast to the tissue segmentation based on structural MRI data. This has important applications in defining accurate tissue maps when fusing structural data with diffusion data. In the absence of structural data, tissue segmentation based on DTI data provides an alternative means to obtain brain tissue segmentation. Our approach to the tissue segmentation based on DTI data is to classify the brain into two compartments by utilizing the tissue contrast existing in a single channel. Specifically, because the apparent diffusion coefficient (ADC) values in the cerebrospinal fluid (CSF) are more than twice those of gray matter (GM) and white matter (WM), we use ADC images to distinguish CSF and non-CSF tissues. Additionally, fractional anisotropy (FA) images are used to separate WM from non-WM tissues, as highly directional white matter structures have much larger fractional anisotropy values. Moreover, other channels to separate tissue are explored, such as eigenvalues of the tensor, relative anisotropy (RA), and volume ratio (VR). We developed an approach based on the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm that combines these two-class maps to obtain a complete tissue segmentation map of CSF, GM, and WM. Evaluations are provided to demonstrate the performance of our approach. Experimental results of applying this approach to brain tissue segmentation and deformable registration of DTI data and spoiled gradient-echo (SPGR) data are also provided.
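The per-channel two-class rules described above (high ADC separates CSF from non-CSF; high FA separates WM from non-WM) can be sketched as follows. The thresholds and values are invented for illustration, and the maps are merged with a simple priority rule rather than the STAPLE fusion the paper actually uses:

```python
def segment(adc, fa, adc_thr=2.0, fa_thr=0.25):
    """Rule-based CSF/GM/WM labeling from per-voxel ADC and FA values."""
    labels = []
    for a, f in zip(adc, fa):
        if a > adc_thr:       # CSF diffuses freely -> high ADC
            labels.append("CSF")
        elif f > fa_thr:      # directional WM fibers -> high anisotropy
            labels.append("WM")
        else:
            labels.append("GM")
    return labels

adc = [3.1, 0.8, 0.9, 2.8]   # illustrative values, units 1e-3 mm^2/s
fa  = [0.05, 0.45, 0.15, 0.10]
print(segment(adc, fa))       # -> ['CSF', 'WM', 'GM', 'CSF']
```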

4.
In the context of 3D medical imaging, accurate segmentation of multimodal brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artifacts, intrinsic tissue variation and partial volume effects, tissue classification remains a challenging task. In this paper, we present a unifying framework for unsupervised segmentation of multimodal brain MR images including partial volume effect, bias field correction, and information given by a probabilistic atlas. The proposed method takes neighborhood information into account using a Hidden Markov Chain (HMC) model. Due to the limited resolution of imaging devices, voxels may be composed of a mixture of different tissue types; this partial volume effect is modeled to achieve an accurate segmentation of brain tissues. Instead of assigning each voxel to a single tissue class (i.e., hard classification), we compute the relative amount of each pure tissue class in each voxel (mixture estimation). Further, a bias field estimation step is added to the proposed algorithm to correct intensity inhomogeneities. Furthermore, atlas priors are incorporated using a probabilistic brain atlas containing prior expectations about the spatial localization of different tissue classes. This atlas is considered as a complementary sensor, and the proposed method is extended to multimodal brain MRI without any user-tunable parameter (unsupervised algorithm). To validate this new unifying framework, we present experimental results on both synthetic and real brain images, for which the ground truth is available. Comparison with other frequently used techniques demonstrates the accuracy and the robustness of this new Markovian segmentation scheme.
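The mixture-estimation idea above can be made concrete for the simplest case: a voxel on the boundary of two pure tissues with mean intensities mu1 and mu2 is modeled as I = a*mu1 + (1 - a)*mu2, and the fraction a is recovered per voxel. The means and intensities below are invented; the paper estimates these quantities inside its HMC framework rather than voxel by voxel:

```python
def mixture_fraction(intensity, mu1, mu2):
    """Relative amount of tissue 1 in a two-tissue voxel, clipped to [0, 1]."""
    a = (intensity - mu2) / (mu1 - mu2)
    return max(0.0, min(1.0, a))

mu_gm, mu_wm = 60.0, 100.0                    # illustrative pure-tissue means
print(mixture_fraction(80.0, mu_wm, mu_gm))   # -> 0.5 (half WM, half GM)
print(mixture_fraction(100.0, mu_wm, mu_gm))  # -> 1.0 (pure WM)
```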

5.
The incorporation of intensity, spatial, and topological information into large-scale multi-region segmentation has been a topic of ongoing research in medical image analysis. Multi-region segmentation problems, such as segmentation of brain structures, pose unique challenges in image segmentation in which regions may not have a defined intensity, spatial, or topological distinction, but rely on a combination of the three. We propose a novel framework within the Advanced Segmentation Tools (ASETS), which combines large-scale Gaussian mixture models trained via Kohonen self-organizing maps with deformable registration and a convex max-flow optimization algorithm incorporating region topology as a hierarchy or tree. Our framework is validated on two publicly available neuroimaging datasets, the OASIS and MRBrainS13 databases, against the more conventional Potts model, achieving more accurate segmentations. Each component is accelerated using general-purpose programming on graphics processing units (GPUs) to ensure computational feasibility.

6.
A hybrid framework for 3D medical image segmentation
In this paper, we propose a novel hybrid 3D segmentation framework which combines Gibbs models, marching cubes and deformable models. In the framework, we first construct a new Gibbs model whose energy function is defined on a high-order clique system. The new model includes both region and boundary information during segmentation. Next, we improve the original marching cubes method to construct 3D meshes from the Gibbs model's output. The 3D mesh serves as the initial geometry of the deformable model. Then we deform the deformable model using external image forces so that the model converges to the object surface. We run the Gibbs model and the deformable model recursively by updating the Gibbs model's parameters using the region and boundary information in the deformable model segmentation result. In our approach, the hybrid combination of region-based methods and boundary-based methods results in improved segmentations of complex structures. The benefit of the methodology is that it produces high quality segmentations of 3D structures using little prior information and minimal user intervention. The modules in this segmentation methodology are developed within the context of the Insight ToolKit (ITK). We present experimental segmentation results of brain tumors and evaluate our method by comparing experimental results with expert manual segmentations. The evaluation results show that the methodology achieves high quality segmentation results with computational efficiency. We also present segmentation results of other clinical objects to illustrate the strength of the methodology as a generic segmentation framework.
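A Gibbs energy that combines region and boundary information, as described above, can be sketched in miniature: a region term measures how well each pixel fits its label's mean, and a boundary term penalizes label changes between neighbors. The 1D neighborhood, the squared-error region term, and the weight beta are illustrative simplifications; the paper defines its energy on a high-order 3D clique system:

```python
def gibbs_energy(pixels, labels, means, beta=10.0):
    """Toy region + boundary energy on a 1D pixel row."""
    region = sum((p - means[l]) ** 2 for p, l in zip(pixels, labels))
    boundary = sum(beta for a, b in zip(labels, labels[1:]) if a != b)
    return region + boundary

pixels = [10, 11, 50, 52]
means = {0: 10.0, 1: 50.0}
good = [0, 0, 1, 1]                            # labels matching the data
bad  = [0, 1, 1, 1]                            # pixel 1 mislabeled
print(gibbs_energy(pixels, good, means))       # -> 15.0
print(gibbs_energy(pixels, bad, means))        # -> 1535.0, heavily penalized
```

Minimizing such an energy trades data fit against boundary smoothness, which is why the framework can refine the Gibbs model's parameters from the deformable model's region and boundary output.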

8.
Auroral oval segmentation using dual level set based on local information
The extraction of auroral ovals from images acquired by the Ultraviolet Imager is notoriously difficult due to low contrast, inhomogeneity and the presence of dayglow. In this paper, to address these issues, we propose an improved level set segmentation algorithm by incorporating the shape feature and intensity distribution of the auroral oval into the variational framework. The shape term is designed to preserve the annular appearance of the auroral oval and avoid boundary leaks by imposing a distance constraint on the inner and outer boundaries. The local information term tackles the difficulty of intensity inhomogeneity by utilizing the statistical distribution in the local window. Experimental results demonstrate that the proposed method obtains more accurate inner and outer boundaries of the auroral oval compared to existing pixel-based and level-set-based methods.

9.
Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and the target scan, which is often problematic in medical imaging – in particular, when the atlases and target images are obtained via different sensor types or imaging protocols. In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registrations and label fusion steps simultaneously. The proposed model does not directly rely on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion, hence the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing interdependence between the registrations. We use variational expectation maximization and the Demons registration framework in order to efficiently identify the most probable segmentation and registrations. We use two sets of experiments to illustrate the approach, where proton density (PD) MRI atlases are used to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gain due to exploiting within-target intensity consistency and integrating registration into label fusion.
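To make the fusion step concrete, here is the simplest baseline it builds on: majority voting over atlas labels that have already been propagated to target space. Note this is the standard scheme the paper improves upon; its generative model couples registration with fusion and avoids intensity similarity, neither of which is reproduced in this sketch. The label maps below are invented:

```python
from collections import Counter

def majority_vote(propagated):
    """Fuse per-atlas label maps (lists of equal length) voxel by voxel."""
    fused = []
    for votes in zip(*propagated):
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

atlas_labels = [
    [0, 1, 1, 2],   # labels from atlas 1, warped to the target
    [0, 1, 2, 2],   # atlas 2 disagrees at voxel 2
    [0, 1, 1, 0],   # atlas 3 disagrees at voxel 3
]
print(majority_vote(atlas_labels))  # -> [0, 1, 1, 2]
```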

10.
Pancreatic cancer is a malignant tumor, and its high recurrence rate after surgery is related to the lymph node metastasis status. In clinical practice, a preoperative imaging prediction method is necessary for prognosis assessment and treatment decision; however, there are two major challenges: insufficient data and difficulty in discriminative feature extraction. This paper proposes a deep learning model to predict lymph node metastasis in pancreatic cancer using multiphase CT, where a dual-transformation with contrastive learning framework is developed to overcome the challenges in fine-grained prediction with small sample sizes. Specifically, we designed a novel dynamic surface projection method to transform 3D data into 2D images for effectively using the 3D information, preserving the spatial correlation of the original texture information and reducing computational resources. Then, this dynamic surface projection was combined with the spiral transformation to establish a dual-transformation method for enhancing the diversity and complementarity of the dataset. A dual-transformation-based data augmentation method was also developed to produce numerous 2D-transformed images to alleviate the effect of insufficient samples. Finally, the dual-transformation-guided contrastive learning scheme based on intra-space-transformation consistency and inter-class specificity was designed to mine additional supervised information, thereby extracting more discriminative features. Extensive experiments have shown the promising performance of the proposed model for predicting lymph node metastasis in pancreatic cancer. Our dual-transformation with contrastive learning scheme was further confirmed on an external public dataset, representing a potential paradigm for the fine-grained classification of oncological images with small sample sizes. The code will be released at https://github.com/SJTUBME-QianLab/Dual-transformation.

11.
Automatically and accurately annotating tumor in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), which provides a noninvasive in vivo method to evaluate tumor vasculature architectures based on contrast accumulation and washout, is a crucial step in computer-aided breast cancer diagnosis and treatment. However, it remains challenging due to the varying sizes, shapes, appearances and densities of tumors caused by the high heterogeneity of breast cancer, and the high dimensionality and ill-posed artifacts of DCE-MRI. In this paper, we propose a hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme that integrates pharmacokinetics prior and feature refinement to generate sufficiently adequate features in DCE-MRI for breast cancer segmentation. The pharmacokinetics prior expressed by the time intensity curve (TIC) is incorporated into the scheme through an objective function called the dynamic contrast-enhanced prior (DCP) loss. It contains contrast agent kinetic heterogeneity prior knowledge, which is important to optimize our model parameters. Besides, we design a spatial fusion module (SFM) embedded in the scheme to exploit intra-slice spatial structural correlations, and deploy a spatial–kinetic fusion module (SKFM) to effectively leverage the complementary information extracted from spatial–kinetic space. Furthermore, considering that low spatial resolution often leads to poor image quality in DCE-MRI, we integrate a reconstruction autoencoder into the scheme to refine feature maps in an unsupervised manner. We conduct extensive experiments to validate the proposed method and show that our approach can outperform recent state-of-the-art segmentation methods on a breast cancer DCE-MRI dataset. Moreover, to explore the generalization to other segmentation tasks on dynamic imaging, we also extend the proposed method to brain segmentation in DSC-MRI sequences. Our source code will be released at https://github.com/AI-medical-diagnosis-team-of-JNU/DCEDuDoFNet.
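The time intensity curve (TIC) underlying the DCP loss above summarizes per-voxel contrast kinetics. A minimal sketch of two common TIC features, the wash-in slope up to the peak and the wash-out slope after it; the time points, curves, and feature definitions are illustrative assumptions, not the paper's formulation:

```python
def tic_features(curve):
    """Return (wash_in, wash_out) slopes of an intensity time series."""
    peak = max(range(len(curve)), key=lambda i: curve[i])
    wash_in = (curve[peak] - curve[0]) / peak if peak else 0.0
    tail = len(curve) - 1 - peak
    wash_out = (curve[-1] - curve[peak]) / tail if tail else 0.0
    return wash_in, wash_out

tumor  = [10, 60, 90, 80, 70]   # fast uptake, then washout
normal = [10, 15, 20, 25, 30]   # slow, persistent enhancement
print(tic_features(tumor))      # -> (40.0, -10.0)
print(tic_features(normal))     # -> (5.0, 0.0): peak at the end, no washout
```

Malignant tissue typically shows fast wash-in followed by washout, while normal tissue enhances slowly and persistently, which is the kinetic heterogeneity the DCP loss encodes.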

12.
Traditionally, segmentation and registration have been solved as two independent problems, even though it is often the case that the solution to one impacts the solution to the other. In this paper, we introduce a geometric, variational framework that uses active contours to simultaneously segment and register features from multiple images. The key observation is that multiple images may be segmented by evolving a single contour as well as the mappings of that contour into each image.

13.
Abdominal multi-organ segmentation in multi-sequence magnetic resonance images (MRI) is of great significance in many clinical scenarios, e.g., MRI-oriented pre-operative treatment planning. Labeling multiple organs on a single MR sequence is a time-consuming and labor-intensive task, let alone manual labeling on multiple MR sequences. Training a model on one sequence and generalizing it to other domains is one way to reduce the burden of manual annotation, but the existence of a domain gap often leads to poor generalization performance of such methods. Image translation-based unsupervised domain adaptation (UDA) is a common way to address this domain gap issue. However, existing methods focus less on keeping anatomical consistency and are limited to one-to-one domain adaptation, leading to low efficiency for adapting a model to multiple target domains. This work proposes a unified framework called OMUDA for one-to-multiple unsupervised domain-adaptive segmentation, where disentanglement between content and style is used to efficiently translate a source domain image into multiple target domains. Moreover, generator refactoring and style constraint are conducted in OMUDA for better maintaining cross-modality structural consistency and reducing domain aliasing. The average Dice Similarity Coefficients (DSCs) of OMUDA for multiple sequences and organs on the in-house test set, the AMOS22 dataset and the CHAOS dataset are 85.51%, 82.66% and 91.38%, respectively, which are slightly lower than those of CycleGAN (85.66% and 83.40%) on the first two datasets and slightly higher than CycleGAN (91.36%) on the last dataset. However, compared with CycleGAN, OMUDA reduces floating-point computation by about 87 percent in the training phase and about 30 percent in the inference stage. The quantitative results in both segmentation performance and training efficiency demonstrate the usability of OMUDA in some practical scenarios, such as the initial phase of product development.

14.
Objectives: The application of deep learning to medical image segmentation has received considerable attention. Nevertheless, when segmenting thyroid ultrasound images, it is difficult to achieve good segmentation results using deep learning methods because of the large number of nonthyroidal regions and insufficient training data. Methods: In this study, a Super-pixel U-Net, designed by adding a supplementary path to U-Net, was devised to boost the segmentation results of thyroids. The improved network can introduce more information into the network, boosting auxiliary segmentation results. A multi-stage modification is introduced in this method, which includes boundary segmentation, boundary repair, and auxiliary segmentation. To reduce the negative effects of non-thyroid regions in the segmentation, U-Net was utilized to obtain rough boundary outputs. Subsequently, another U-Net is trained to improve and repair the coverage of the boundary outputs. Super-pixel U-Net was applied in the third stage to assist in the segmentation of the thyroid more precisely. Finally, multidimensional indicators were used to compare the segmentation results of the proposed method with those of other comparison experiments. Discussion: The proposed method achieved an F1 Score of 0.9161 and an IoU of 0.9279. Furthermore, the proposed method also exhibits better performance in terms of shape similarity, with an average convexity of 0.9395, an average ratio of 0.9109, an average compactness of 0.8976, an average eccentricity of 0.9448, and an average rectangularity of 0.9289. The average area estimation indicator was 0.8857. Conclusion: The proposed method exhibited superior performance, proving the improvements of the multi-stage modification and Super-pixel U-Net.
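The two overlap metrics quoted above are related: for a single pair of binary masks, F1 (Dice) and IoU satisfy F1 = 2*IoU/(1 + IoU), though averages over many images need not obey the identity exactly. A minimal sketch with invented masks:

```python
def overlap(pred, truth):
    """Return (f1, iou) for a pair of binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    iou = inter / union if union else 1.0
    f1 = 2.0 * iou / (1.0 + iou)   # per-pair identity between the metrics
    return f1, iou

pred  = [1, 1, 1, 0]
truth = [1, 1, 0, 1]
print(overlap(pred, truth))  # inter=2, union=4 -> (0.666..., 0.5)
```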

16.
Object segmentation is an important step in the workflow of computational pathology. Deep learning based models generally require large amounts of labeled data for precise and reliable prediction. However, collecting labeled data is expensive because it often requires expert knowledge, particularly in the medical imaging domain where labels are the result of a time-consuming analysis made by one or more human experts. As nuclei, cells and glands are fundamental objects for downstream analysis in computational pathology/cytology, in this paper we propose NuClick, a CNN-based approach to speed up collecting annotations for these objects requiring minimum interaction from the annotator. We show that for nuclei and cells in histology and cytology images, one click inside each object is enough for NuClick to yield a precise annotation. For multicellular structures such as glands, we propose a novel approach to provide NuClick with a squiggle as a guiding signal, enabling it to segment the glandular boundaries. These supervisory signals are fed to the network as auxiliary inputs along with the RGB channels. With detailed experiments, we show that NuClick is applicable to a wide range of object scales, robust against variations in the user input, adaptable to new domains, and delivers reliable annotations. An instance segmentation model trained on masks generated by NuClick achieved the first rank in the LYON19 challenge. As exemplar outputs of our framework, we are releasing two datasets: 1) a dataset of lymphocyte annotations within IHC images, and 2) a dataset of segmented WBCs in blood smear images.
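Feeding a click as an auxiliary input channel, as described above, can be sketched as follows: a guidance map, zero everywhere except the clicked pixel, is stacked alongside the image channels. The channel-first list layout and the single-pixel encoding are simplifications for illustration (no Gaussian smoothing of the click, no squiggle handling):

```python
def add_click_channel(image_channels, click, shape):
    """Append a one-hot click-guidance map to a list of 2D channels."""
    h, w = shape
    guide = [[0.0] * w for _ in range(h)]
    r, c = click
    guide[r][c] = 1.0
    return image_channels + [guide]

rgb = [[[0.2] * 3 for _ in range(3)] for _ in range(3)]  # 3 channels of 3x3
inputs = add_click_channel(rgb, click=(1, 2), shape=(3, 3))
print(len(inputs))    # -> 4 channels: RGB plus guidance
print(inputs[3][1])   # -> [0.0, 0.0, 1.0]: the clicked row
```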

17.
In this paper we present a new algorithm for 3D medical image segmentation. The algorithm is versatile, fast, relatively simple to implement, and semi-automatic. It is based on minimizing a global energy defined from a learned non-parametric estimation of the statistics of the region to be segmented. Implementation details are discussed and source code is freely available as part of the 3D Slicer project. In addition, a new unified set of validation metrics is proposed. Results on artificial and real MRI images show that the algorithm performs well on large brain structures both in terms of accuracy and robustness to noise.
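A non-parametric estimate of region statistics, as used in the energy above, can be sketched with a plain histogram: intensities sampled from the region define an empirical distribution, and a candidate pixel is scored by the negative log of its empirical probability (low energy means "looks like the region"). The bin count, range, and samples are illustrative assumptions, not the paper's estimator:

```python
import math

def make_energy(samples, n_bins=8, lo=0.0, hi=256.0):
    """Histogram-based density estimate; returns an energy function."""
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for s in samples:
        counts[min(n_bins - 1, int((s - lo) / width))] += 1
    total = len(samples)
    def energy(x):
        p = counts[min(n_bins - 1, int((x - lo) / width))] / total
        return -math.log(p) if p > 0 else float("inf")
    return energy

region = [100, 105, 110, 102, 98, 107]  # intensities sampled in the region
E = make_energy(region)
print(E(103) < E(200))  # -> True: 103 resembles the region, 200 does not
```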

19.
The popularity of fluorescent labelling and mesoscopic optical imaging techniques enables the acquisition of whole mammalian brain vasculature images at capillary resolution. Segmentation of the cerebrovascular network is essential for analyzing the cerebrovascular structure and revealing the pathogenesis of brain diseases. Existing deep learning methods use a single type of annotated labels with the same pixel weight to train the neural network and segment vessels. Due to the variation in the shape, density and brightness of vessels in whole-brain fluorescence images, it is difficult for a neural network trained with a single type of label to segment all vessels accurately. To address this problem, we proposed a deep learning cerebral vasculature segmentation framework based on multi-perspective labels. First, the pixels in the central region of thick vessels and the skeleton region of vessels were extracted separately using morphological operations based on the binary annotated labels to generate two different labels. Then, we designed a three-stage 3D convolutional neural network containing three sub-networks, namely a thick-vessel enhancement network, a vessel skeleton enhancement network and a multi-channel fusion segmentation network. The first two sub-networks were trained by the two labels generated in the previous step, respectively, and pre-segmented the vessels. The third sub-network was responsible for fusing the pre-segmented results to precisely segment the vessels. We validated our method on two mouse cerebral vascular datasets generated by different fluorescence imaging modalities. The results showed that our method outperforms the state-of-the-art methods, and the proposed method can be applied to segment the vasculature on large-scale volumes.
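The central-region label described above can be sketched with the basic morphological operation behind it: eroding the binary vessel mask keeps the cores of thick vessels while thin segments vanish, which is exactly why a separate skeleton label is also generated. A 1D erosion is used here for brevity (a pixel survives only if both neighbors are foreground); the paper operates on 3D masks:

```python
def erode(mask):
    """1D binary erosion with a 3-pixel structuring element."""
    n = len(mask)
    return [1 if mask[i] and (i > 0 and mask[i - 1]) and (i < n - 1 and mask[i + 1])
            else 0
            for i in range(n)]

vessel = [0, 1, 1, 1, 1, 1, 0, 1, 1, 0]  # one thick and one thin segment
core = erode(vessel)
print(core)  # -> [0, 0, 1, 1, 1, 0, 0, 0, 0, 0]: the thin segment drops out
```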


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号