Similar Documents
20 similar documents found (search time: 0 ms)
1.
Synthesized medical images have several important applications. For instance, they can serve as an intermediary in cross-modality image registration, or as augmented training samples to boost the generalization capability of a classifier. In this work, we propose a generic cross-modality synthesis approach with the following targets: 1) synthesizing realistic-looking 2D/3D images without needing paired training data, 2) ensuring consistent anatomical structures, which could be changed by geometric distortion in cross-modality synthesis, and 3) more importantly, improving volume segmentation by using synthetic data for modalities with limited training samples. We show that these goals can be achieved with an end-to-end 2D/3D convolutional neural network (CNN) composed of mutually beneficial generators and segmentors for image synthesis and segmentation tasks. The generators are trained with an adversarial loss, a cycle-consistency loss, and a shape-consistency loss (supervised by the segmentors) to reduce geometric distortion. From the segmentation view, the segmentors are boosted by synthetic data from the generators in an online manner. Generators and segmentors prompt each other alternately in an end-to-end training fashion. We validate the proposed method on three datasets from different data domains, including cardiovascular CT and magnetic resonance imaging (MRI), abdominal CT and MRI, and mammography X-rays, showing that the two tasks benefit each other and that coupling them yields better performance than solving each in isolation.
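The combined generator objective described above (adversarial + cycle-consistency + shape-consistency terms) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the weights `lam_cyc` and `lam_shape` are assumed hyperparameters.

```python
import numpy as np

def cycle_consistency_loss(x, x_reconstructed):
    # L1 distance between an image and its round-trip reconstruction
    # (A -> B -> A), encouraging the two generators to be inverses.
    return np.mean(np.abs(x - x_reconstructed))

def shape_consistency_loss(seg_real, seg_synth):
    # Cross-entropy-style penalty between the segmentor's prediction on
    # a real image and on its synthesized counterpart, discouraging
    # geometric distortion of anatomical structures.
    eps = 1e-7
    return -np.mean(seg_real * np.log(seg_synth + eps)
                    + (1 - seg_real) * np.log(1 - seg_synth + eps))

def generator_loss(adv, cyc, shape, lam_cyc=10.0, lam_shape=1.0):
    # Total generator objective: adversarial + weighted cycle + shape terms.
    return adv + lam_cyc * cyc + lam_shape * shape
```

In a full system the three scalar terms would come from the discriminator, the round-trip reconstruction, and the segmentor respectively; here they are just combined.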

2.
Whole abdominal organ segmentation is important in diagnosing abdominal lesions, radiotherapy, and follow-up. However, manually delineating all abdominal organs from 3D volumes is time-consuming and very expensive for oncologists. Deep learning-based medical image segmentation has shown the potential to reduce manual delineation effort, but it still requires a large, finely annotated dataset for training, and there is a lack of large-scale datasets covering the whole abdominal region with accurate and detailed annotations. In this work, we establish a new large-scale Whole abdominal ORgan Dataset (WORD) for algorithm research and clinical application development. This dataset contains 150 abdominal CT volumes (30495 slices). Each volume has 16 organs with fine pixel-level annotations and scribble-based sparse annotations, which may make it the largest dataset with whole abdominal organ annotation. Several state-of-the-art segmentation methods are evaluated on this dataset. We also invited three experienced oncologists to revise the model predictions to measure the gap between the deep learning methods and oncologists. Afterwards, we investigate inference-efficient learning on WORD, as high-resolution images require large GPU memory and long inference times at test time. We further evaluate scribble-based annotation-efficient learning on this dataset, as pixel-wise manual annotation is time-consuming and expensive. This work provides a new benchmark for the abdominal multi-organ segmentation task, and these experiments can serve as baselines for future research and clinical application development.
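Segmentation methods on datasets like WORD are typically scored with the Dice similarity coefficient; a minimal reference implementation for binary masks looks like this (pure Python, for illustration):

```python
def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 labels."""
    intersection = sum(p * g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / total
```

For multi-organ evaluation this is usually computed per organ label and then averaged.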

3.
Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging, predominantly due to considerable variations in anatomy and acquisition protocols and to a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared, and 4505 vertebrae were individually annotated at the voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate the performance variation at the vertebra level, scan level, and across different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in the data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe: the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: https://github.com/anjany/verse.

4.
Background To assess the capabilities of 16-channel multislice CT in acquiring almost exclusively arterial-phase images of the pancreas and depicting small pancreatic arteries in coronal reformatted images. Materials and methods In 45 consecutive patients, arterial-phase contrast enhancement was measured in the aorta and its branches, portal venous system, and pancreas. Coronal reformatted images of 1.2- or 1.3-mm slice thickness at 0.8- or 0.9-mm intervals were generated from axial images acquired with 0.5-mm collimation. Two radiologists evaluated the quality of imaging in the arterial phase and the visibility of the pancreatic arteries in coronal reformatted images. Results Mean enhancement in the aorta and its branches was greater than 300 HU, while that in the portal venous system and pancreas was less than 100 HU. The images were judged to be suitable for delineating the pancreatic arteries in all patients. The following arteries were visualized: anterior superior pancreaticoduodenal (39 patients), posterior superior pancreaticoduodenal (41), anterior inferior pancreaticoduodenal (39), posterior inferior pancreaticoduodenal (33), dorsal pancreatic (42), its right branch (34), and transverse pancreatic (37). Conclusion Multislice CT can depict small pancreatic arteries using coronal reformatted images generated from almost exclusively arterial-phase axial images acquired with 0.5-mm collimation.

5.
6.

Purpose  

State-of-the-art computer-aided implant planning procedures typically use a surgical template to transfer the digital 3D planning to the operating room. This surgical template can be generated based on an acrylic copy of the patient’s removable prosthesis—the so-called radiographic guide—which is digitized using a CBCT or CT scanner. Since the surgical template must fit the patient as accurately as the radiographic guide does, a procedure to digitize this guide accurately is needed.

7.
Left ventricular (LV) segmentation is essential for the early diagnosis of cardiovascular diseases, which have been reported as the leading cause of death worldwide. However, automated LV segmentation from cardiac magnetic resonance images (CMRI) using traditional convolutional neural networks (CNNs) remains a challenging task due to the limited labeled CMRI data and low tolerance to the irregular scales, shapes, and deformations of the LV. In this paper, we propose an automated LV segmentation method based on adversarial learning that integrates a multi-stage pose estimation network (MSPN) and a co-discrimination network. Different from existing CNNs, we use an MSPN with multi-scale dilated convolution (MDC) modules to enlarge the receptive field for deep feature extraction. To fully utilize both labeled and unlabeled CMRI data, we propose a novel generative adversarial network (GAN) framework for LV segmentation by combining the MSPN with co-discrimination networks. Specifically, the labeled CMRI are first used to initialize our segmentation network (MSPN) and co-discrimination network. Our GAN training alternates between two kinds of epochs fed with labeled and unlabeled CMRI data respectively, unlike traditional CNNs, which rely only on the limited labeled samples to train the segmentation networks. As both ground truth and unlabeled samples are involved in guiding training, our method not only converges faster but also obtains better performance in LV segmentation. Our method is evaluated using the MICCAI 2009 and 2017 challenge databases. Experimental results show that our method obtains promising performance in LV segmentation and outperforms the state-of-the-art methods in segmentation accuracy.
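The alternating training scheme described above—epochs on labeled pairs interleaved with adversarial epochs on unlabeled images—can be sketched schematically. This is an illustrative control-flow skeleton under assumed callables (`seg_step`, `adv_step`), not the paper's training code:

```python
def train_semi_supervised(seg_step, adv_step, labeled, unlabeled, epochs):
    # Even epochs consume (image, mask) pairs with a supervised update;
    # odd epochs consume unlabeled images with a discriminator-guided
    # (adversarial) update. Returns the count of each kind of update.
    n_sup, n_adv = 0, 0
    for epoch in range(epochs):
        if epoch % 2 == 0:
            for image, mask in labeled:
                seg_step(image, mask)   # supervised loss on ground truth
                n_sup += 1
        else:
            for image in unlabeled:
                adv_step(image)         # shape-plausibility loss from the discriminator
                n_adv += 1
    return n_sup, n_adv
```

In the real system `seg_step` and `adv_step` would perform gradient updates on the MSPN; here they are placeholders that make the alternation explicit.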

8.
One major limiting factor preventing the accurate delineation of human organs has been the presence of severe pathology, including pathology affecting organ borders. Overcoming these limitations is the focus of this study. We propose an automatic method for accurate and robust pathological organ segmentation from CT images. The method is grounded in the active shape model (ASM) framework. It leverages techniques from low-rank and sparse decomposition (LRSD) theory to robustly recover a subspace from grossly corrupted data. We first present a population-specific LRSD-based shape prior model, called LRSD-SM, to handle, in a unified framework, non-Gaussian gross errors caused by the weak and misleading appearance cues of large lesions, complex shape variations, and poor adaptation to finer local details. For shape model initialization, we introduce a method based on a patient-specific LRSD-based probabilistic atlas (PA), called LRSD-PA, to deal with large errors in atlas-to-target registration and the low likelihood of the target organ. Furthermore, to make our segmentation framework more efficient and robust against local minima, we develop a hierarchical ASM search strategy. Our method is tested on the SLIVER07 liver segmentation competition database and ranks 3rd among all published state-of-the-art automatic methods. It is also evaluated on pathological organs (pathological liver and right lung) from 95 clinical CT scans, and its results are compared with three closely related methods. The applicability of the proposed method to segmentation of various pathological organs (including some highly severe cases) is demonstrated with good results in both quantitative and qualitative experiments; our segmentation algorithm can delineate organ boundaries with an accuracy comparable to that of human raters.

9.
Medical Image Analysis, 2014, 18(1): 130–143
This paper presents a novel conditional statistical shape model in which the condition can be relaxed instead of being treated as a hard constraint. The major contribution of this paper is the integration of an error model that estimates the reliability of the observed conditional features and subsequently relaxes the conditional statistical shape model accordingly. A three-step pipeline consisting of (1) conditional feature extraction from a maximum a posteriori estimation, (2) shape prior estimation through the novel level-set-based conditional statistical shape model with integrated error model, and (3) subsequent graph cuts segmentation based on the estimated shape prior is applied to automatic liver segmentation from non-contrast abdominal CT volumes. Comparison with three other state-of-the-art methods shows the superior performance of the proposed algorithm.

10.
11.
This paper reports the post-analysis of the first edition of the HEad and neCK TumOR (HECKTOR) challenge. This challenge was held as a satellite event of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2020, and was the first of its kind focusing on lesion segmentation in combined FDG-PET and CT image modalities. The challenge’s task was the automatic segmentation of the Gross Tumor Volume (GTV) of Head and Neck (H&N) oropharyngeal primary tumors in FDG-PET/CT images. To this end, the participants were given a training set of 201 cases from four different centers, and their methods were tested on a held-out set of 53 cases from a fifth center. The methods were ranked according to the Dice Score Coefficient (DSC) averaged across all test cases. An additional inter-observer agreement study was organized to assess the difficulty of the task from a human perspective. 64 teams registered for the challenge, among which 10 provided a paper detailing their approach. The best method obtained an average DSC of 0.7591, showing a large improvement over our proposed baseline method and the inter-observer agreement, associated with DSCs of 0.6610 and 0.61, respectively. The automatic methods successfully leveraged the wealth of metabolic and structural properties of combined PET and CT modalities, significantly outperforming the human inter-observer agreement level, semi-automatic thresholding based on PET images, and other single-modality-based methods. This promising performance is one step toward large-scale radiomics studies in H&N cancer, obviating the need for error-prone and time-consuming manual delineation of GTVs.

12.
In medical image segmentation, supervised machine learning models trained using one image modality (e.g. computed tomography (CT)) are often prone to failure when applied to another image modality (e.g. magnetic resonance imaging (MRI)), even for the same organ. This is due to the significant intensity variations across image modalities. In this paper, we propose a novel end-to-end deep neural network for multi-modality image segmentation, where image labels are available only for one modality (source domain) and not for the other (target domain). In our method, a multi-resolution locally normalized gradient magnitude approach is first applied to images of both domains to minimize the intensity discrepancy. Subsequently, a dual-task encoder-decoder network performing both image segmentation and reconstruction is utilized to effectively adapt a segmentation network to the unlabeled target domain. Additionally, a shape constraint is imposed by leveraging adversarial learning. Finally, images from the target domain are segmented, as the network learns a consistent latent feature representation with shape awareness from both domains. We implement both 2D and 3D versions of our method and evaluate CT and MRI images for kidney and cardiac tissue segmentation. For kidney, a public CT dataset (KiTS19, MICCAI 2019) and a local MRI dataset were utilized. The cardiac dataset was from the Multi-Modality Whole Heart Segmentation (MMWHS) challenge 2017. Experimental results reveal that our proposed method achieves significantly higher performance with much lower model complexity than other state-of-the-art methods. More importantly, our method also produces better segmentation results than other methods on images from an unseen target domain, without model retraining.
The code is available at GitHub (https://github.com/MinaJf/LMISA) to encourage method comparison and further research.
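The locally normalized gradient magnitude idea can be illustrated at a single scale: divide the gradient magnitude by its local mean, so the representation becomes largely invariant to modality-dependent intensity scaling. This NumPy sketch is a simplified stand-in for the paper's multi-resolution scheme (the 3×3 window size is an assumption):

```python
import numpy as np

def box_mean(a, r=1):
    # (2r+1) x (2r+1) sliding-window mean via shifted sums on an
    # edge-padded array (avoids a SciPy dependency).
    p = np.pad(a, r, mode='edge')
    h, w = a.shape
    out = np.zeros_like(a, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def local_normalized_gradient_magnitude(img, eps=1e-6):
    # Gradient magnitude divided by its local mean: a single-scale,
    # illustrative version of a locally normalized gradient representation.
    gy, gx = np.gradient(img.astype(float))
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return mag / (box_mean(mag) + eps)
```

Because both numerator and denominator scale linearly with intensity, multiplying an image by a constant leaves the output (nearly) unchanged, which is the property that helps bridge CT and MRI intensity statistics.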

13.

Purpose  

Hypodense liver lesions are commonly detected in CT, so their segmentation and characterization are essential for diagnosis and treatment. Methods for automatic detection and segmentation of liver lesions were developed to support this task.

14.
The segmentation of kidney tumors and the quantification of their tumor indices (i.e., the center point coordinates, diameter, circumference, and cross-sectional area of the tumor) are important steps in tumor therapy. These indices quantify the tumor's morphometric details, allowing disease progression to be monitored and treatment decisions to be compared accurately. However, manual segmentation and quantification is a challenging and time-consuming process in practice and exhibits a high degree of intra- and inter-operator variability. In this paper, MB-FSGAN (multi-branch feature sharing generative adversarial network) is proposed for simultaneous segmentation and quantification of kidney tumors on CT. MB-FSGAN consists of a multi-scale feature extractor (MSFE), a locator of the region of interest (LROI), and a feature sharing generative adversarial network (FSGAN). The MSFE produces strong semantic information on feature maps of different scales, which is particularly effective in detecting small tumor targets. The LROI extracts the region of interest of the tumor, greatly reducing the time complexity of the network. The FSGAN correctly segments and quantifies kidney tumors through joint learning and adversarial learning, effectively exploiting the commonalities and differences between the two related tasks. Experiments are performed on CT scans of 113 kidney tumor patients. For segmentation, MB-FSGAN achieves a pixel accuracy of 95.7%. For the quantification of five tumor indices, the R2 coefficient of tumor circumference is 0.9465. The results show that the network has reliable performance, demonstrating its effectiveness and potential as a clinical tool.
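Given a binary tumor mask, rough versions of the indices named above (centroid, diameter, cross-sectional area, circumference) can be computed directly; this NumPy sketch is purely illustrative and is not MB-FSGAN's learned regression (the equal-area-circle diameter and boundary-pixel perimeter are simplifying assumptions):

```python
import numpy as np

def tumor_indices(mask, spacing=1.0):
    # Morphometric indices from a 2-D binary tumor mask:
    # centroid, equivalent diameter, cross-sectional area, and a
    # perimeter estimate counted from boundary pixels.
    ys, xs = np.nonzero(mask)
    area = len(xs) * spacing ** 2
    center = (float(ys.mean()), float(xs.mean()))
    diameter = 2.0 * np.sqrt(area / np.pi)  # diameter of the equal-area circle
    # Boundary pixels: foreground with at least one 4-neighbour background.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int(((mask == 1) & (interior == 0)).sum()) * spacing
    return center, diameter, area, perimeter
```

With physical pixel `spacing` supplied, the same computation yields indices in millimetres rather than pixels.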

15.
16.
Optimal scan timing for head and neck CTA with 64-slice CT
Objective: To determine the scan timing that yields optimal contrast concentration in combined head and neck arterial imaging with 64-row volume CT. Methods: The relationship between scan delay time and contrast concentration in the target vessels was retrospectively analyzed in 103 combined head and neck arterial imaging examinations, and the scan delay time formula was redesigned; contrast concentration in the target vessels was then measured in 52 head and neck CTA examinations scanned with the new formula. Results: With the newly designed delay time formula, the mean CT value of contrast concentration in the common carotid artery and middle cerebral artery was (370 ± 51) HU, versus (440 ± 79) HU with the original formula, which was significantly higher (u = 6.5565, P < 0.01). Conclusion: Calculating the scan delay time with the new formula keeps the contrast concentration in the target vessels of head and neck CTA stable at a level suitable for displaying soft vascular plaque, avoiding venous artifacts caused by excessive contrast concentration and venous over-enhancement.

17.
Training deep segmentation models for medical images often requires a large amount of labeled data. To tackle this issue, semi-supervised segmentation has been employed to produce satisfactory delineation results at affordable labeling cost. However, traditional semi-supervised segmentation methods fail to exploit unpaired multi-modal data, which are widely available in today’s clinical routine. In this paper, we address this point by proposing Modality-collAborative Semi-Supervised segmentation (i.e., MASS), which utilizes modality-independent knowledge learned from unpaired CT and MRI scans. To exploit such knowledge, MASS uses cross-modal consistency to regularize deep segmentation models in both semantic and anatomical spaces, from which MASS learns intra- and inter-modal correspondences to warp atlases’ labels for making predictions. To better capture inter-modal correspondence, from a feature-alignment perspective, we propose a contrastive similarity loss to regularize the latent space of both modalities in order to learn generalized and robust modality-independent representations. Compared to semi-supervised and multi-modal segmentation counterparts, the proposed MASS brings nearly 6% improvement under extremely limited supervision.
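A contrastive similarity loss of the kind mentioned above is commonly realized as an InfoNCE-style objective over paired feature batches: row i of one modality's features should be most similar to row i of the other's. The following NumPy sketch is a generic stand-in, not MASS's exact loss (the `temperature` value is an assumption):

```python
import numpy as np

def contrastive_similarity_loss(z_a, z_b, temperature=0.1):
    # InfoNCE-style loss over paired feature batches from two modalities.
    # Rows are L2-normalized, so the logits are cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))  # matched pairs sit on the diagonal
```

Minimizing this loss pulls corresponding cross-modal features together and pushes mismatched pairs apart, which is the alignment behaviour the regularizer needs.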

18.
Automatic segmentation of polyp regions in endoscope images is essential for the early diagnosis and surgical planning of colorectal cancer. Recently, deep learning-based approaches have achieved remarkable progress for polyp segmentation, but at the expense of laborious, large-scale pixel-wise annotation. In addition, these models treat samples equally, which may cause unstable training due to polyp variability. To address these issues, we propose a novel Meta-Learning Mixup (MLMix) data augmentation method and a Confidence-Aware Resampling (CAR) strategy for polyp segmentation. MLMix adaptively learns the interpolation policy for mixup data in a data-driven way, thereby transferring the original soft mixup label to a reliable hard label and enriching the limited training dataset. Considering the difficulty that polyp image variability poses for segmentation, the CAR strategy is proposed to progressively select relatively confident images and pixels to strengthen the representation ability of the model and ensure the stability of the training procedure. Moreover, the CAR strategy leverages prior knowledge of the class distribution and assigns different penalty coefficients to the polyp and normal classes to rebalance the selected data distribution. The effectiveness of the proposed MLMix data augmentation method and CAR strategy is demonstrated through comprehensive experiments, and our proposed model achieves state-of-the-art performance with 87.450% Dice on the EndoScene test set and 86.453% Dice on the wireless capsule endoscopy (WCE) polyp dataset.
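For reference, plain mixup and the soft-to-hard label step look like the following; MLMix differs in that the interpolation weight is learned rather than drawn from a fixed Beta distribution, and the 0.5 threshold here is an illustrative assumption:

```python
import numpy as np

def mixup(x1, y1, x2, y2, lam):
    # Convex combination of two image/label pairs with weight lam.
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y

def harden_label(y_soft, threshold=0.5):
    # Transfer a soft mixup mask to a hard 0/1 label before training,
    # as the MLMix description above suggests.
    return (y_soft >= threshold).astype(int)
```

In MLMix the effective `lam` comes from a meta-learned policy conditioned on the data, replacing the random draw used in standard mixup.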

19.
Prostate segmentation aids prostate volume estimation, multi-modal image registration, and the creation of patient-specific anatomical models for surgical planning and image-guided biopsies. However, manual segmentation is time-consuming and suffers from inter- and intra-observer variability. Low-contrast transrectal ultrasound images and the presence of imaging artifacts such as speckle, micro-calcifications, and shadow regions hinder computer-aided automatic or semi-automatic prostate segmentation. In this paper, we propose a prostate segmentation approach based on building multiple mean parametric models derived from principal component analysis of shape and posterior probabilities in a multi-resolution framework. The model parameters are then modified with prior knowledge of the optimization space to achieve optimal prostate segmentation. In contrast to traditional statistical models of shape and intensity priors, we use posterior probabilities of the prostate region determined from random forest classification to build our appearance model and to initialize and propagate our model. Furthermore, multiple mean models derived from spectral clustering of combined shape and appearance parameters are applied in parallel to improve segmentation accuracy. The proposed method achieves a mean Dice similarity coefficient of 0.91 ± 0.09 for 126 images (40 from the apex, 40 from the base, and 46 from central regions) in a leave-one-patient-out validation framework. The mean segmentation time of the procedure is 0.67 ± 0.02 s.
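The core of a PCA-based statistical shape model of the kind used above—mean shape plus principal modes of variation—can be sketched in a few lines of NumPy. This is a generic point-distribution model, not the paper's combined shape/posterior-probability model:

```python
import numpy as np

def fit_shape_model(shapes, n_modes=2):
    # Point-distribution model: mean shape plus principal modes of
    # variation from a (num_shapes, num_coords) training matrix.
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]                       # top eigen-shapes
    variances = (s ** 2)[:n_modes] / (len(shapes) - 1)
    return mean, modes, variances

def reconstruct(mean, modes, b):
    # Generate a shape from model parameters b (one weight per mode).
    return mean + b @ modes
```

New shapes are expressed as `mean + sum_k b_k * mode_k`, and segmentation proceeds by searching over the low-dimensional parameters `b` instead of raw point positions.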

20.
Knowledge about the thickness of the cortical bone is of high interest for fracture risk assessment. Most finite element model solutions overlook this information because of the coarse resolution of the CT images. To circumvent this limitation, a three-step approach is proposed. 1) Two initial surface meshes approximating the outer and inner cortical surfaces are generated via a shape regression based on morphometric features and statistical shape model parameters. 2) The meshes are then corrected locally using a supervised learning model built from image features extracted from pairs of QCT (0.3-1 mm resolution) and HRpQCT (82 µm resolution) images. As the resulting meshes better follow the cortical surfaces, the cortical thickness can be estimated at sub-voxel precision. 3) The meshes are finally regularized by a Gaussian process model with a two-kernel formulation, which seamlessly incorporates smoothness and shape-awareness priors during regularization. The resulting meshes yield high-quality mesh element properties, suitable for the construction of tetrahedral meshes and finite element simulations. This pipeline was applied to 36 pairs of proximal femurs (17 males, 19 females, 76 ± 12 years) scanned under QCT and HRpQCT modalities. On a set of leave-one-out experiments, we quantified the accuracy (root mean square error = 0.36 ± 0.29 mm) and robustness (Hausdorff distance = 3.90 ± 1.57 mm) of the outer surface meshes. The error in the estimated cortical thickness (0.05 ± 0.40 mm) and the tetrahedral mesh quality (aspect ratio = 1.4 ± 0.02) are also reported. The proposed pipeline produces finite element meshes with patient-specific bone shape and sub-voxel cortical thickness directly from CT scans. It also ensures that the node and element numbering remains consistent and independent of the morphology, which is a distinct advantage in population studies.
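The Hausdorff distance used above as a mesh-robustness metric is the largest of all nearest-neighbour distances between two point sets; a minimal pure-Python reference for point clouds (mesh vertices would be passed in the same way) is:

```python
def hausdorff_distance(a, b):
    # Symmetric Hausdorff distance between two point sets, each a list
    # of coordinate tuples: the worst-case nearest-neighbour distance.
    def directed(p, q):
        return max(min(sum((pi - qi) ** 2 for pi, qi in zip(x, y)) ** 0.5
                       for y in q)
                   for x in p)
    return max(directed(a, b), directed(b, a))
```

The metric is sensitive to single outlier vertices, which is exactly why it complements an averaged error such as RMSE when assessing surface meshes.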


Copyright©北京勤云科技发展有限公司  京ICP备09084417号