Similar Documents (20 results)
1.
Vessel-like structures in biomedical images, such as those found in cerebrovascular and nervous-system pathologies, are essential biomarkers for understanding disease mechanisms and for diagnosis and treatment. However, existing segmentation methods for vessel-like structures often produce unsatisfactory results because crisp edges are difficult to recover. Edge and non-edge voxels of vessel-like structures in three-dimensional (3D) medical images have a highly imbalanced distribution (most voxels are non-edge), making crisp edges hard to find. In this work, we propose a generic neural network for segmenting vessel-like structures across different 3D medical imaging modalities. The edge-reinforced network (ER-Net) is based on an encoder–decoder architecture. A reverse edge attention module and an edge-reinforced optimization loss are proposed to increase the weight of voxels on the edge of the given 3D volume, so that spatial edge information is discovered and better preserved. A feature selection module is further introduced to adaptively select discriminative features from the encoder and decoder simultaneously, which further emphasizes edge voxels and significantly improves segmentation performance. The proposed method is thoroughly validated on four publicly accessible datasets, and the experimental results demonstrate that it generally outperforms other state-of-the-art algorithms across various metrics.
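
The abstract names a reverse edge attention module but does not spell it out; below is a minimal PyTorch sketch of one plausible reading, in which attention is steered toward the regions a coarse prediction is least confident about (typically edges). The module name, tensor shapes, and the use of `1 - sigmoid(prediction)` as the reverse attention map are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class ReverseEdgeAttention3D(nn.Module):
    """Hypothetical reverse edge attention block (a sketch, not the ER-Net reference code).

    A coarse probability map is inverted so that low-confidence voxels
    (usually near edges) receive higher attention weights, which are then
    applied to the features before refinement.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
        )
        self.predict = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor, coarse_logits: torch.Tensor) -> torch.Tensor:
        # Reverse attention: 1 - p emphasizes uncertain (edge-like) voxels.
        reverse_attn = 1.0 - torch.sigmoid(coarse_logits)
        attended = feats * reverse_attn
        refined = self.refine(attended) + feats   # residual connection
        return self.predict(refined)              # refined, edge-aware logits


# Toy usage on a small volume.
feats = torch.randn(1, 16, 32, 32, 32)
coarse = torch.randn(1, 1, 32, 32, 32)
out = ReverseEdgeAttention3D(16)(feats, coarse)
print(out.shape)  # torch.Size([1, 1, 32, 32, 32])
```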

2.
To fully delineate the target objects of interest in clinical diagnosis, many deep convolutional neural networks (CNNs) use multimodal paired registered images as inputs for segmentation tasks. However, such paired images are difficult to obtain in some cases. Furthermore, CNNs trained on one specific modality may fail on images acquired with different imaging protocols and scanners. Developing a unified model that can segment the target objects from unpaired multiple modalities is therefore valuable for many clinical applications. In this work, we propose a 3D unified generative adversarial network that combines any-to-any modality translation and multimodal segmentation in a single network. Since anatomical structure is preserved during modality translation, the auxiliary translation task is used to extract modality-invariant features and to implicitly generate additional training data. To fully exploit segmentation-related features, we add a cross-task skip connection with feature recalibration from the translation decoder to the segmentation decoder. Experiments on abdominal organ segmentation and brain tumor segmentation show that our method outperforms existing unified methods.
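
The cross-task skip connection with feature recalibration is described only in words; the following PyTorch sketch shows one common way such recalibration is implemented (a squeeze-and-excitation style channel gate applied to translation-decoder features before they are fused into the segmentation decoder). The structure and names here are assumptions for illustration, not the paper's published layer.

```python
import torch
import torch.nn as nn

class CrossTaskRecalibration(nn.Module):
    """Sketch of a cross-task skip connection with channel recalibration.

    Translation-decoder features are gated channel-wise before being fused
    with the segmentation-decoder features at the same resolution.
    """

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),                       # squeeze: global context per channel
            nn.Conv3d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                  # excitation: per-channel weights in (0, 1)
        )

    def forward(self, seg_feats: torch.Tensor, trans_feats: torch.Tensor) -> torch.Tensor:
        recalibrated = trans_feats * self.gate(trans_feats)
        return seg_feats + recalibrated                    # fuse into the segmentation path


seg = torch.randn(2, 32, 16, 16, 16)
trans = torch.randn(2, 32, 16, 16, 16)
fused = CrossTaskRecalibration(32)(seg, trans)
print(fused.shape)  # torch.Size([2, 32, 16, 16, 16])
```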

3.
Extracting anatomically and functionally significant structures is one of the important tasks both for the theoretical study of medical image analysis and for the clinical and practical community. In the past, much work has been dedicated solely to algorithmic development. For clinical end users, however, a well-designed algorithm needs an interactive software implementation before it can be used in daily work. Furthermore, the software should preferably be open source so that it can be used and validated not only by the authors but by the entire community. The contribution of the present work is therefore twofold. First, we propose a new robust-statistics-based conformal metric and a conformal-area-driven multiple active contour framework to simultaneously extract multiple targets from 3D MR and CT medical imagery. Second, an open-source, graphically interactive 3D segmentation tool based on this contour evolution is implemented and made publicly available to end users on multiple platforms. When using this software, segmentation is initiated by user-drawn strokes (seeds) in the target regions of the image. Local robust statistics are then used to describe the object features, and these features are learned adaptively from the seeds under a non-parametric estimation scheme. Several active contours subsequently evolve simultaneously, with their interactions governed by the principle of action and reaction; this not only guarantees mutual exclusiveness among the contours but also removes the assumption, tacit or explicit in many previous works, that the multiple objects fill the entire image domain. In doing so, the contours interact and converge to equilibrium at the desired positions of the multiple objects. Finally, to validate the algorithm and the software and to demonstrate how the tool is used, we provide reproducible experiments that demonstrate the capability of the proposed segmentation tool on several publicly available data sets.

4.
Glaucoma is a leading cause of blindness. Measuring the vertical cup-to-disc ratio, combined with other clinical features, is one of the methods used to screen for glaucoma. In this paper, we propose a deep level set method for segmenting the optic cup (OC) and optic disc (OD). We present a multi-scale convolutional neural network as the prediction network that generates the initial level set contour and the evolution parameters; the initial contour is then refined according to these parameters. The network is integrated with augmented prior knowledge and supervised by an active contour loss, which makes the level set evolution yield more accurate shapes and boundary details. Experimental results on the REFUGE dataset show that the IoU of the OC and OD segmentations are 93.61% and 96.69%, respectively. To evaluate the robustness of the proposed method, we further test the model on the Drishti-GS1 dataset. The segmentation results show that the proposed method outperforms state-of-the-art methods.
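
Since the abstract motivates segmentation by the vertical cup-to-disc ratio but does not show how the ratio follows from the masks, here is a small NumPy sketch of that final step: the vertical extent of each binary mask is measured and the ratio taken. The function and variable names are illustrative only.

```python
import numpy as np

def vertical_extent(mask: np.ndarray) -> int:
    """Vertical diameter (in pixels) of a binary mask: span of rows containing foreground."""
    rows = np.where(mask.any(axis=1))[0]
    if rows.size == 0:
        return 0
    return int(rows[-1] - rows[0] + 1)

def vertical_cup_to_disc_ratio(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """vCDR = vertical cup diameter / vertical disc diameter, from segmentation masks."""
    disc = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc if disc > 0 else float("nan")

# Toy example: a 9-pixel-tall disc containing a 4-pixel-tall cup.
disc = np.zeros((20, 20), dtype=bool)
cup = np.zeros((20, 20), dtype=bool)
disc[5:14, 5:15] = True
cup[7:11, 8:12] = True
print(round(vertical_cup_to_disc_ratio(cup, disc), 2))  # 0.44
```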

5.
Segmentation of organs or lesions from medical images plays an essential role in many clinical applications such as diagnosis and treatment planning. Although convolutional neural networks (CNNs) have achieved state-of-the-art performance for automatic segmentation, they are often limited by a lack of clinically acceptable accuracy and robustness in complex cases, so interactive segmentation is a practical alternative. However, traditional interactive segmentation methods require a large number of user interactions, and recently proposed CNN-based interactive methods perform poorly on previously unseen objects. To solve these problems, we propose a novel deep-learning-based interactive segmentation method that is highly efficient, requiring only clicks as user input, and that generalizes well to a range of previously unseen objects. Specifically, we first encode user-provided interior margin points via our proposed exponentialized geodesic distance, which enables a CNN to achieve a good initial segmentation of both previously seen and unseen objects; we then use a novel information fusion method that combines the initial segmentation with only a few additional user clicks to efficiently obtain a refined segmentation. We validated the proposed framework through extensive experiments on 2D and 3D medical image segmentation tasks with a wide range of objects that were not present in the training set. Experimental results showed that our framework 1) achieves accurate results with fewer user interactions and less time than state-of-the-art interactive frameworks and 2) generalizes well to previously unseen objects.
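
The abstract's key encoding, the exponentialized geodesic distance of each pixel to the user clicks, can be written as E(i) = exp(-D_geo(i, S)). The sketch below computes a geodesic distance on an intensity image with a simple Dijkstra pass (edge cost mixing the spatial step and the intensity difference) and then exponentiates it; the cost weighting and 4-connectivity are illustrative choices, not the paper's exact formulation.

```python
import heapq
import numpy as np

def geodesic_distance(image: np.ndarray, seeds, intensity_weight: float = 10.0) -> np.ndarray:
    """Dijkstra-style geodesic distance from seed pixels on a 2D intensity image.

    Edge cost between 4-connected neighbours combines the unit spatial step
    with the intensity difference, so the distance grows slowly inside
    homogeneous regions and quickly across strong edges.
    """
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for r, c in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                step = 1.0 + intensity_weight * abs(float(image[nr, nc]) - float(image[r, c]))
                if d + step < dist[nr, nc]:
                    dist[nr, nc] = d + step
                    heapq.heappush(heap, (d + step, nr, nc))
    return dist

def exponentialized_geodesic_distance(image, seeds, scale: float = 0.1) -> np.ndarray:
    """E(i) = exp(-scale * D_geo(i, seeds)); values near 1 close to the clicks."""
    return np.exp(-scale * geodesic_distance(image, seeds))

# Toy usage: one click in the middle of a synthetic bright square.
img = np.zeros((64, 64), dtype=np.float32)
img[20:44, 20:44] = 1.0
cue = exponentialized_geodesic_distance(img, seeds=[(32, 32)])
print(cue.shape, float(cue[32, 32]))  # (64, 64) 1.0
```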

6.
Despite recent progress in automatic medical image segmentation, fully automatic results often fall short of clinically acceptable accuracy and typically require further refinement. To this end, we propose a novel Volumetric Memory Network, dubbed VMN, to enable interactive segmentation of 3D medical images. Given user hints on an arbitrary slice, a 2D interaction network first produces an initial segmentation for the chosen slice. The VMN then propagates the initial segmentation mask bidirectionally to all slices of the volume. Subsequent refinement based on additional user guidance on other slices can be incorporated in the same manner. To facilitate smooth human-in-the-loop segmentation, a quality assessment module suggests the next slice for interaction based on the segmentation quality of each slice produced in the previous round. Our VMN has two distinctive features. First, the memory-augmented network design allows the model to quickly encode past segmentation information, which is retrieved later when segmenting other slices. Second, the quality assessment module directly estimates the quality of each segmentation prediction, enabling an active-learning paradigm in which users preferentially label the lowest-quality slice for multi-round refinement. The proposed network yields a robust interactive segmentation engine that generalizes well to various types of user annotation (e.g., scribbles, bounding boxes, extreme clicking). Extensive experiments on three public medical image segmentation datasets (MSD, KiTS19 and CVC-ClinicDB) clearly confirm the superiority of our approach over state-of-the-art segmentation models. The code is publicly available at https://github.com/0liliulei/Mem3D.

7.
Object segmentation is an important step in the computational pathology workflow. Deep-learning-based models generally require large amounts of labeled data for precise and reliable prediction. However, collecting labeled data is expensive because it often requires expert knowledge, particularly in medical imaging, where labels are the result of time-consuming analysis by one or more human experts. Since nuclei, cells and glands are fundamental objects for downstream analysis in computational pathology and cytology, in this paper we propose NuClick, a CNN-based approach that speeds up annotation of these objects while requiring minimal interaction from the annotator. We show that for nuclei and cells in histology and cytology images, a single click inside each object is enough for NuClick to yield a precise annotation. For multicellular structures such as glands, we propose a novel approach that provides NuClick with a squiggle as a guiding signal, enabling it to segment glandular boundaries. These supervisory signals are fed to the network as auxiliary inputs alongside the RGB channels. With detailed experiments, we show that NuClick is applicable to a wide range of object scales, robust to variations in user input, adaptable to new domains, and delivers reliable annotations. An instance segmentation model trained on masks generated by NuClick achieved first rank in the LYON19 challenge. As exemplar outputs of our framework, we are releasing two datasets: 1) lymphocyte annotations within IHC images, and 2) segmented white blood cells (WBCs) in blood smear images.
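
The auxiliary-input idea (clicks or squiggles rasterized into extra channels alongside RGB) is straightforward to sketch; below is a small NumPy example that stacks a click map and a squiggle map onto an RGB patch to form a 5-channel network input. The channel ordering and binary rasterization are illustrative assumptions, not NuClick's exact preprocessing.

```python
import numpy as np

def build_guided_input(rgb: np.ndarray, click_points, squiggle_points) -> np.ndarray:
    """Stack RGB with two binary guidance maps (clicks and squiggle) as extra channels.

    rgb:             (H, W, 3) float array in [0, 1]
    click_points:    iterable of (row, col) clicks inside nuclei/cells
    squiggle_points: iterable of (row, col) pixels traced along a gland
    returns:         (H, W, 5) network input
    """
    h, w, _ = rgb.shape
    click_map = np.zeros((h, w), dtype=np.float32)
    squiggle_map = np.zeros((h, w), dtype=np.float32)
    for r, c in click_points:
        click_map[r, c] = 1.0
    for r, c in squiggle_points:
        squiggle_map[r, c] = 1.0
    return np.concatenate([rgb, click_map[..., None], squiggle_map[..., None]], axis=-1)

patch = np.random.rand(256, 256, 3).astype(np.float32)
x = build_guided_input(patch,
                       click_points=[(120, 130)],
                       squiggle_points=[(r, 64) for r in range(80, 180)])
print(x.shape)  # (256, 256, 5)
```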

8.
A hybrid framework for 3D medical image segmentation
In this paper we propose a novel hybrid 3D segmentation framework that combines Gibbs models, marching cubes and deformable models. In the framework, we first construct a new Gibbs model whose energy function is defined on a high-order clique system; the model incorporates both region and boundary information during segmentation. Next, we improve the original marching cubes method to construct 3D meshes from the Gibbs model's output, and the resulting mesh serves as the initial geometry of the deformable model. We then deform the model under external image forces so that it converges to the object surface. The Gibbs model and the deformable model are run recursively, with the Gibbs model's parameters updated using the region and boundary information from the deformable model's segmentation result. In our approach, the hybrid combination of region-based and boundary-based methods yields improved segmentations of complex structures. The benefit of the methodology is that it produces high-quality segmentations of 3D structures using little prior information and minimal user intervention. The modules in this segmentation methodology are developed within the Insight Toolkit (ITK). We present experimental segmentation results of brain tumors and evaluate our method by comparing the results with expert manual segmentations. The evaluation shows that the methodology achieves high-quality segmentation with computational efficiency. We also present segmentation results of other clinical objects to illustrate the strength of the methodology as a generic segmentation framework.
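
The step of turning a voxel labeling into an initial surface mesh for the deformable model can be illustrated with scikit-image's marching cubes (a stand-in here, since the paper improves the original algorithm within ITK). A minimal sketch, assuming a binary volume produced by the region-based (Gibbs) stage:

```python
import numpy as np
from skimage import measure  # requires scikit-image >= 0.19 for measure.marching_cubes

# Hypothetical binary output of the region-based stage: a solid ball.
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
volume = ((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2).astype(np.float32)

# Extract an iso-surface at 0.5 to obtain the initial triangular mesh
# (vertices and triangle indices) that a deformable model can then refine.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)  # (N, 3) vertices and (M, 3) triangles
```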

9.
Automatic medical image segmentation plays a crucial role in many medical image analysis applications, such as disease diagnosis and prognosis. Despite extensive progress, existing deep learning models for medical image segmentation focus on extracting accurate features through novel network structures while relying solely on a fully connected (FC) layer for pixel-level classification. Considering the limited capability of an FC layer to encode the diverse extracted feature representations, we propose a Hierarchical Segmentation (HieraSeg) Network for medical image segmentation and devise a Hierarchical Fully Connected (HFC) layer. Specifically, it consists of three classifiers and decouples each category into several subcategories by introducing multiple weight vectors that represent the diverse characteristics within each category. Subcategory-level and category-level learning schemes are then designed to explicitly separate the subcategories and automatically capture the most representative characteristics, so the HFC layer can fit the variant characteristics and derive an accurate decision boundary. To make HieraSeg robust to lesion variability, we further propose a Dynamic-Weighting HieraSeg (DW-HieraSeg) Network, which introduces an Image-level Weight Net (IWN) and a Pixel-level Weight Net (PWN) to learn a data-driven curriculum. By progressively incorporating informative images and pixels in an easy-to-hard manner, DW-HieraSeg is able to escape poor local optima and accelerate training. Additionally, a class-balanced loss constrains the PWN to prevent overfitting on minority categories. Comprehensive experiments on three benchmark datasets, EndoScene, ISIC and Decathlon, show that the proposed HieraSeg and DW-HieraSeg Networks achieve state-of-the-art performance, clearly demonstrating the effectiveness of the proposed approaches for medical image segmentation.
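
The hierarchical FC layer is described only verbally; one natural reading is that each category owns several subcategory weight vectors and the category score is the best of its subcategory scores. The PyTorch sketch below implements that reading with max pooling over subcategories; the pooling choice and shapes are assumptions, not the published HFC layer.

```python
import torch
import torch.nn as nn

class HierarchicalFC(nn.Module):
    """Sketch of a hierarchical classification head.

    Each of `num_classes` categories is represented by `num_sub` weight vectors
    (subcategory classifiers); a pixel's category score is its best-matching
    subcategory score, which lets one category cover several appearance modes.
    """

    def __init__(self, in_channels: int, num_classes: int, num_sub: int = 3):
        super().__init__()
        self.num_classes = num_classes
        self.num_sub = num_sub
        # One 1x1 convolution produces all subcategory logits at once.
        self.sub_classifier = nn.Conv2d(in_channels, num_classes * num_sub, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, _, h, w = feats.shape
        sub_logits = self.sub_classifier(feats).view(b, self.num_classes, self.num_sub, h, w)
        # Category logit = max over its subcategories.
        return sub_logits.max(dim=2).values


head = HierarchicalFC(in_channels=64, num_classes=2, num_sub=3)
logits = head(torch.randn(1, 64, 128, 128))
print(logits.shape)  # torch.Size([1, 2, 128, 128])
```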

10.
Studying temporal series of medical images can help physicians perform pertinent diagnoses and follow up patients: in some diseases, lesions, tumors or anatomical structures vary over time in size, position, composition and so on, either through a natural pathological process or under the effect of a drug or therapy. Analyzing such images visually and manually is laborious and subjective. The objective of this work was therefore to automatically detect regions with apparent local volume variation by applying a vector field operator to the local displacement field obtained after non-rigid registration between two successive temporal images. Quantitative measurements, such as the volume variation of lesions or the segmentation of evolving lesions, are also important. By studying the apparent shrinking areas in the forward and reverse displacement fields between images, we are able to segment evolving lesions, and we then propose a method to segment lesions across a whole temporal series of images. In this article we apply the approach to automatically detect and segment multiple sclerosis lesions that evolve in time series of brain MRI scans. At this stage, we have applied the approach only to a few experimental cases to demonstrate its potential; a clinical validation remains to be done and will require substantial additional work.
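
A standard choice for the "vector field operator" mentioned above is the divergence of the displacement field, which is positive where tissue apparently expands and negative where it shrinks. The NumPy sketch below computes it with finite differences on a 3D field; treating divergence as the operator is an assumption for illustration, not necessarily the operator used in the paper.

```python
import numpy as np

def displacement_divergence(field: np.ndarray) -> np.ndarray:
    """Divergence of a dense 3D displacement field.

    field: (D, H, W, 3) array, last axis = (dz, dy, dx) displacement components.
    Returns a (D, H, W) map; positive values indicate apparent local expansion,
    negative values apparent local shrinkage between the two time points.
    """
    dz_dz = np.gradient(field[..., 0], axis=0)
    dy_dy = np.gradient(field[..., 1], axis=1)
    dx_dx = np.gradient(field[..., 2], axis=2)
    return dz_dz + dy_dy + dx_dx

# Toy field: a radially expanding displacement around the volume centre.
shape = (32, 32, 32)
grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1).astype(float)
centre = np.array(shape) / 2.0
field = 0.05 * (grid - centre)          # outward displacement grows with distance
div = displacement_divergence(field)
print(div.shape, round(float(div.mean()), 3))  # (32, 32, 32) 0.15
```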

11.
To overcome the strong speckle and texture in high-resolution polarimetric synthetic aperture radar (PolSAR) images, a novel level set segmentation method based on a heterogeneous clutter model is proposed in this article. Because the KummerU distribution can describe the statistics of PolSAR imagery in both homogeneous and heterogeneous scenes, it replaces the traditional Wishart distribution as the statistical model defining the energy function for PolSAR images, in order to improve segmentation accuracy. Moreover, to reduce the computational burden, an enhanced distance-regularized level set evolution (DRLSE-E) term is applied to improve computational efficiency. Experimental results on synthetic and real PolSAR images show that the described method is about 10% more accurate than level set methods based on Wishart distributions, and that adding the DRLSE-E term reduces computation time by about a third, demonstrating the effectiveness of our method.

12.
Statistical shape models (SSMs) are by now firmly established as a robust tool for the segmentation of medical images. While 2D models have been in use since the early 1990s, widespread use of three-dimensional models appeared only in recent years, primarily made possible by breakthroughs in the automatic detection of shape correspondences. In this article, we review the techniques required to create and employ these 3D SSMs. We concentrate on landmark-based shape representations and thoroughly examine the most popular variants of Active Shape and Active Appearance models, but also describe several alternative approaches to statistical shape modeling. Structured around the topics of shape representation, model construction, shape correspondence, local appearance models and search algorithms, we present an overview of the current state of the art in the field, and conclude with a survey of applications in the medical field and a discussion of future developments.
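
For readers unfamiliar with landmark-based SSMs, the core construction is a principal component analysis of aligned landmark configurations: any shape is approximated as the mean shape plus a weighted sum of eigenmodes, x ≈ x̄ + P b. A minimal NumPy sketch, assuming the training shapes are already rigidly aligned and in correspondence:

```python
import numpy as np

def build_ssm(shapes: np.ndarray, num_modes: int):
    """Build a landmark-based statistical shape model via PCA.

    shapes: (n_samples, n_landmarks * dim) matrix of aligned, corresponding landmarks.
    Returns the mean shape, the first `num_modes` eigenmodes (columns), and their variances.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data gives the principal modes of shape variation.
    _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:num_modes].T                               # (n_landmarks*dim, num_modes)
    variances = singular_values[:num_modes] ** 2 / (shapes.shape[0] - 1)
    return mean, modes, variances

def reconstruct(mean, modes, b):
    """x ≈ mean + P @ b for a vector of mode weights b."""
    return mean + modes @ b

# Toy model from 30 random 2D shapes with 10 landmarks each.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(30, 20))
mean, modes, var = build_ssm(shapes, num_modes=3)
x = reconstruct(mean, modes, b=np.array([1.5, -0.5, 0.0]))
print(mean.shape, modes.shape, x.shape)  # (20,) (20, 3) (20,)
```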

13.

Purpose

A fully automated segmentation algorithm, progressive surface resolution (PSR), is presented in this paper to determine the closed surface of approximately convex, blob-like structures that are common in biomedical imaging. The PSR algorithm was applied to cortical surface segmentation of 460 vertebral bodies in 46 low-dose chest CT images, which can potentially be used for automated bone mineral density measurement and compression fracture detection.

Methods

The target surface is represented by a closed triangular mesh, which guarantees enclosure. The surface vertices of the mesh are constrained to radial trajectories that are uniformly distributed in 3D angle space, and the segmentation is accomplished by determining, for each radial trajectory, the location of its intersection with the target surface. The surface is first initialized from an input high-confidence boundary image and then resolved progressively, guided by a dynamic attraction map, in order of decreasing evidence about the target surface location. A simple sketch of the radial parameterization is given below.
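
The radial-trajectory parameterization can be sketched concretely: directions roughly uniform over the sphere are generated, the image is sampled along each ray from a seed point, and the surface radius per direction is taken where the intensity change along the ray is strongest. The Fibonacci-sphere sampling and the gradient-peak criterion below are illustrative stand-ins, not the PSR algorithm's confidence-ordered resolution scheme.

```python
import numpy as np

def fibonacci_sphere(n: int) -> np.ndarray:
    """n roughly uniformly distributed unit direction vectors (Fibonacci lattice)."""
    i = np.arange(n) + 0.5
    phi = np.arccos(1.0 - 2.0 * i / n)            # polar angle
    theta = np.pi * (1.0 + 5.0 ** 0.5) * i        # azimuth (golden-angle increments)
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def radial_surface(volume: np.ndarray, seed, n_dirs: int = 500, max_radius: float = 20.0):
    """For each radial direction, return the radius of the strongest edge along the ray."""
    dirs = fibonacci_sphere(n_dirs)
    radii_samples = np.linspace(0.5, max_radius, 64)
    seed = np.asarray(seed, dtype=float)
    surface_radii = np.empty(n_dirs)
    for k, d in enumerate(dirs):
        pts = np.round(seed + radii_samples[:, None] * d).astype(int)
        pts = np.clip(pts, 0, np.array(volume.shape) - 1)
        profile = volume[pts[:, 0], pts[:, 1], pts[:, 2]]
        edge = np.abs(np.diff(profile))           # intensity change along the ray
        surface_radii[k] = radii_samples[np.argmax(edge) + 1]
    return dirs, surface_radii

# Toy volume: a bright ball of radius 12 around the centre of a 48^3 grid.
zz, yy, xx = np.mgrid[0:48, 0:48, 0:48]
vol = ((zz - 24) ** 2 + (yy - 24) ** 2 + (xx - 24) ** 2 < 12 ** 2).astype(np.float32)
dirs, radii = radial_surface(vol, seed=(24, 24, 24))
print(round(float(radii.mean()), 1))  # close to 12.0
```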

Results

For the visual evaluation, the algorithm achieved acceptable segmentation for 99.35% of vertebral bodies. Quantitative evaluation was performed on 46 vertebral bodies and achieved an overall mean Dice coefficient of 0.939 (max = 0.957, min = 0.906, standard deviation = 0.011) using manual annotations as the ground truth.

Conclusions

Both visual and quantitative evaluations demonstrate encouraging performance of the PSR algorithm. This novel surface resolution strategy provides uniform angular resolution for the segmented surface, with computational complexity and runtime that scale linearly with the total number of vertices in the triangular mesh representation.

14.
Automatic segmentation of 3D micro-CT coronary vascular images
Although many algorithms in the literature address segmentation and model reconstruction of 3D angiographic images, many focus on characterizing only part of the vascular network. This study is motivated by the emerging prospects of whole-organ simulations of coronary hemodynamics, autoregulation and tissue oxygen delivery, for which anatomically accurate vascular meshes of extended scale are highly desirable. The key requirements of a reconstruction technique for this purpose are automated processing and sub-voxel accuracy, and we have designed a vascular reconstruction algorithm that satisfies both criteria. It combines automatic seeding and tracking of vessels with radius detection based on active contours. The method was first examined through a series of tests on synthetic data for accuracy in the reproduced topology and morphology of the network, and was shown to exhibit errors of less than 0.5 voxel for centerline and radius detection and 3 degrees for initial seed directions. The algorithm was then applied to real-world data of the full rat coronary structure acquired with a micro-CT scanner at 20 μm voxel size. For this, a further validation of radius quantification was carried out against a partially rescanned portion of the network at 8 μm voxel size, which estimated less than 10% radius error for vessels larger than 2 voxels in radius.

15.
Direct automatic segmentation of objects in 3D medical imaging, such as magnetic resonance (MR) imaging, is challenging because it often involves accurately identifying multiple individual structures with complex geometries within a large volume. Most deep learning approaches address these challenges by increasing learning capacity through a substantial increase in trainable parameters, but the resulting model complexity incurs high computational cost and large memory requirements, unsuitable for real-time use on standard clinical workstations, which typically have low-end hardware with limited memory and CPU resources. This paper presents a compact convolutional neural network (CAN3D) designed specifically for clinical workstations that segments large 3D MR images in real time. CAN3D has a small memory footprint, reducing the number of model parameters and the computer memory required for state-of-the-art performance, and it preserves data integrity by directly processing full-size 3D input volumes without patching. The architecture significantly reduces computational cost, especially for CPU inference. We also develop a novel loss function with additional shape constraints to improve segmentation accuracy for imbalanced classes in 3D MR images. Compared with state-of-the-art approaches (U-Net3D, improved U-Net3D and V-Net), CAN3D reduces the number of parameters by up to two orders of magnitude and achieves much faster inference, up to 5 times faster when predicting with a standard commercial CPU instead of a GPU. On the open-access OAI-ZIB knee MR dataset, compared with manual segmentation, CAN3D achieved Dice coefficients of 0.87 ± 0.02 and 0.85 ± 0.04, with mean surface distance errors of 0.36 ± 0.32 mm and 0.29 ± 0.10 mm, for imbalanced classes such as femoral and tibial cartilage, respectively, when trained volume-wise with only 12 GB of video memory. CAN3D also demonstrated high accuracy and efficiency on a pelvic 3D MR dataset for prostate cancer consisting of 211 examinations with expert manual semantic labels (bladder, body, bone, rectum, prostate), now released publicly for scientific use as part of this work.
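
The abstract mentions a loss with extra shape constraints for imbalanced classes; the shape term is not described in detail, but the soft Dice component that such losses are typically built on is easy to illustrate. A minimal PyTorch sketch of a multi-class soft Dice loss (the shape-constraint term is omitted, and this is not the paper's exact loss):

```python
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Multi-class soft Dice loss for 3D segmentation.

    logits: (B, C, D, H, W) raw network outputs.
    target: (B, D, H, W) integer class labels.
    Dice is averaged over classes, so small (imbalanced) structures such as
    cartilage contribute as much to the loss as large background regions.
    """
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    one_hot = torch.nn.functional.one_hot(target, num_classes)        # (B, D, H, W, C)
    one_hot = one_hot.permute(0, 4, 1, 2, 3).float()                  # (B, C, D, H, W)
    dims = (0, 2, 3, 4)                                               # sum over batch and space
    intersection = (probs * one_hot).sum(dims)
    denominator = probs.sum(dims) + one_hot.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - dice_per_class.mean()

logits = torch.randn(1, 3, 16, 32, 32, requires_grad=True)
labels = torch.randint(0, 3, (1, 16, 32, 32))
loss = soft_dice_loss(logits, labels)
loss.backward()
print(float(loss))
```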

16.
In the automatic segmentation of echocardiographic images, a priori shape knowledge has been used to compensate for the poor features of ultrasound images. This shape knowledge is usually learned via an off-line training process, which requires tedious human effort and is highly expertise-dependent; more importantly, a learned shape template can only be used to segment a specific class of images with similar boundary shapes. In this paper, we present a multi-scale level set framework for segmenting endocardial boundaries in each frame of a multiframe echocardiographic image sequence. We observe that the intensity distribution of an ultrasound image at a very coarse scale can be approximately modeled as Gaussian, and we combine region homogeneity and edge features in a level set approach to extract boundaries automatically at this coarse scale. At finer scales, these coarse boundaries both initialize boundary detection and serve as an external constraint guiding contour evolution; the constraint functions similarly to a traditional shape prior. Experimental results validate this combined framework.

17.
18.
The choroid is the vascular layer of the eye that supplies photoreceptors with oxygen. Changes in the choroid are associated with many pathologies, including myopia, in which the choroid progressively thins due to axial elongation. Quantifying these changes requires automatic and accurate segmentation of the choroidal layer from optical coherence tomography (OCT) images. In this paper, we propose a multi-task learning approach to segment the choroid from three-dimensional OCT images. Our architecture aggregates spatial context from adjacent cross-sectional slices to reconstruct the central slice, and the spatial context learned by this reconstruction mechanism is then fused with a U-Net-based architecture for segmentation. The proposed approach was evaluated on volumetric OCT scans of 166 myopic eyes acquired with a commercial OCT system and achieved a cross-validation Intersection over Union (IoU) score of 94.69%, significantly outperforming (p<0.001) other state-of-the-art methods on the same data set. Choroidal thickness maps generated by our approach also achieved a better structural similarity index (SSIM) of 72.11% with respect to the ground truth. In particular, our approach performs well for highly challenging eyes with thinner choroids. Compared with other methods, it also requires less processing time and has lower computational requirements. These results suggest that the proposed approach could serve as a fast and reliable method for automated choroidal segmentation.

19.
20.
We propose an automatic medical image registration method that combines image segmentation with mutual information. The images are first preprocessed with thresholding and mathematical morphology, then segmented with the k-means method, and finally registered using Powell optimization driven by mutual information. Applied to the registration of clinical magnetic resonance imaging (MRI) and positron emission tomography (PET) images, the method yields satisfactory results.
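
The mutual information criterion that drives the registration above can be computed from a joint intensity histogram of the two images; the NumPy sketch below shows that computation (the Powell search and the segmentation-based preprocessing are not reproduced, and the bin count is an illustrative choice).

```python
import numpy as np

def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    """Mutual information between two images of identical shape, via a joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint probability
    px = pxy.sum(axis=1, keepdims=True)            # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)            # marginal of image B
    nz = pxy > 0                                   # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# MI is highest when the images are aligned: compare an image with itself
# versus with a shuffled copy.
rng = np.random.default_rng(0)
mri = rng.random((128, 128))
shuffled = rng.permutation(mri.ravel()).reshape(128, 128)
print(round(mutual_information(mri, mri), 2))       # high (perfect "alignment")
print(round(mutual_information(mri, shuffled), 2))   # near 0 (no shared structure)
```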
